Mixture of logit models with Monte Carlo, 2000 draws
Estimation of a mixture of logit models where the integral is approximated using Monte Carlo integration with 2000 draws.
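The idea can be sketched without Biogeme: for a random coefficient b_time ~ N(mu, sigma), the mixture probability is the expectation of a logit probability over b_time, and Monte Carlo integration replaces that expectation with an average over draws. A minimal stdlib sketch, assuming a two-alternative example where utility is just the time coefficient times travel time (the function names and parameter values are illustrative, not part of this model):

```python
import math
import random


def logit_prob(chosen, utilities):
    """Standard logit probability of alternative `chosen` given a list of utilities."""
    denom = sum(math.exp(v) for v in utilities)
    return math.exp(utilities[chosen]) / denom


def simulated_prob(chosen, times, b_mean, b_std, n_draws=2000, seed=1):
    """Approximate the mixture probability by averaging the logit
    probability over `n_draws` normal draws of the time coefficient."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_draws):
        b = rng.gauss(b_mean, b_std)        # one draw of b_time
        utilities = [b * t for t in times]  # utility = b_time * travel time
        acc += logit_prob(chosen, utilities)
    return acc / n_draws


# Illustrative values only: alternative 0 is faster, the mean time
# coefficient is negative, so its simulated probability exceeds one half.
p = simulated_prob(chosen=0, times=[1.0, 2.0], b_mean=-2.0, b_std=1.2)
```

Biogeme performs the same averaging internally; the script below only has to declare the draws and the number of repetitions.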
- author: Michel Bierlaire, EPFL
- date: Thu Apr 13 21:04:47 2023
import biogeme.biogeme_logging as blog
from biogeme.expressions import bioDraws
from b07estimation_specification import get_biogeme
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b07estimation_monte_carlo.py')
Example b07estimation_monte_carlo.py
# Number of Monte Carlo draws.
R = 2000
# Random coefficient: normal draws for the travel time parameter.
the_draws = bioDraws('b_time_rnd', 'NORMAL')
the_biogeme = get_biogeme(the_draws=the_draws, number_of_draws=R)
the_biogeme.modelName = 'b07estimation_monte_carlo'
Biogeme parameters read from biogeme.toml.
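The choice R = 2000 is a trade-off: run time grows linearly in the number of draws, while the simulation error only shrinks roughly as 1/sqrt(R). A small stdlib illustration of this behavior on a simple expectation, unrelated to Biogeme internals (the integrand, draw counts, and seeds are arbitrary):

```python
import math
import random


def mc_estimate(n_draws, seed):
    """Monte Carlo estimate of E[exp(Z)] for Z ~ N(0, 1); exact value is exp(1/2)."""
    rng = random.Random(seed)
    return sum(math.exp(rng.gauss(0.0, 1.0)) for _ in range(n_draws)) / n_draws


exact = math.exp(0.5)
seeds = range(20)

# Mean absolute error over 20 independent runs: 100x more draws
# should reduce the error by roughly a factor of 10.
err_100 = sum(abs(mc_estimate(100, s) - exact) for s in seeds) / 20
err_10000 = sum(abs(mc_estimate(10000, s) - exact) for s in seeds) / 20
```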
results = the_biogeme.estimate()
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds"
*** Initial values of the parameters are obtained from the file __b07estimation_monte_carlo.iter
Cannot read file __b07estimation_monte_carlo.iter. Statement is ignored.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
Iter. asc_car asc_train b_cost b_time b_time_s Function Relgrad Radius Rho
0 1 -1 -1 -1 2 9.8e+03 0.15 1 0.23 +
1 0 -0.79 0 -2 2.3 9e+03 0.065 1 0.32 +
2 0.41 -0.65 -1 -2 2.2 8.7e+03 0.033 1 0.61 +
3 0.41 -0.65 -1 -2 2.2 8.7e+03 0.033 0.5 -1.4 -
4 0.41 -0.65 -1 -2 2.2 8.7e+03 0.033 0.25 0.043 -
5 0.16 -0.44 -0.75 -2.3 2 8.6e+03 0.015 0.25 0.65 +
6 0.33 -0.46 -0.94 -2.3 1.7 8.6e+03 0.019 0.25 0.31 +
7 0.33 -0.46 -0.94 -2.3 1.7 8.6e+03 0.019 0.12 -0.41 -
8 0.2 -0.33 -0.9 -2.2 1.7 8.6e+03 0.0096 0.12 0.45 +
9 0.33 -0.38 -0.85 -2.1 1.6 8.6e+03 0.0095 0.12 0.38 +
10 0.33 -0.38 -0.85 -2.1 1.6 8.6e+03 0.0095 0.062 -0.26 -
11 0.33 -0.38 -0.85 -2.1 1.6 8.6e+03 0.0095 0.031 0.0023 -
12 0.3 -0.41 -0.82 -2.2 1.6 8.6e+03 0.0052 0.031 0.54 +
13 0.27 -0.38 -0.85 -2.1 1.6 8.6e+03 0.0045 0.031 0.67 +
14 0.24 -0.41 -0.88 -2.1 1.5 8.6e+03 0.0064 0.031 0.41 +
15 0.27 -0.39 -0.85 -2.1 1.5 8.6e+03 0.0025 0.031 0.85 +
16 0.24 -0.4 -0.87 -2.1 1.5 8.6e+03 0.0025 0.31 0.93 ++
17 0.24 -0.4 -0.87 -2.1 1.5 8.6e+03 0.0025 0.16 0.098 -
18 0.24 -0.47 -0.84 -2 1.3 8.6e+03 0.0051 0.16 0.34 +
19 0.24 -0.47 -0.84 -2 1.3 8.6e+03 0.0051 0.078 0.09 -
20 0.18 -0.43 -0.83 -1.9 1.3 8.6e+03 0.0036 0.078 0.39 +
21 0.18 -0.43 -0.83 -1.9 1.3 8.6e+03 0.0036 0.039 -0.11 -
22 0.19 -0.46 -0.87 -1.9 1.2 8.6e+03 0.0016 0.039 0.6 +
23 0.19 -0.46 -0.87 -1.9 1.2 8.6e+03 0.0016 0.02 -0.11 -
24 0.18 -0.46 -0.85 -1.9 1.2 8.6e+03 0.0023 0.02 0.44 +
25 0.18 -0.46 -0.84 -1.9 1.2 8.6e+03 0.00049 0.02 0.6 +
26 0.18 -0.46 -0.84 -1.9 1.2 8.6e+03 0.00049 0.0098 -3.5 -
27 0.18 -0.46 -0.84 -1.9 1.2 8.6e+03 0.00049 0.0049 -0.93 -
28 0.19 -0.46 -0.85 -1.9 1.2 8.6e+03 0.00051 0.0049 0.31 +
29 0.18 -0.46 -0.85 -1.9 1.2 8.6e+03 0.00022 0.0049 0.7 +
30 0.18 -0.46 -0.85 -1.9 1.2 8.6e+03 0.00015 0.0049 0.74 +
31 0.18 -0.47 -0.85 -1.9 1.2 8.6e+03 0.0002 0.0049 0.49 +
32 0.18 -0.47 -0.85 -1.9 1.2 8.6e+03 9.5e-05 0.0049 0.57 +
Results saved in file b07estimation_monte_carlo.html
Results saved in file b07estimation_monte_carlo.pickle
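To control the optimization algorithm as the log message above suggests, edit the generated biogeme.toml. A sketch of the relevant entry (section and key names as in recent Biogeme versions; verify against the file Biogeme writes in your working directory):

```toml
# Excerpt from biogeme.toml (illustrative; check your generated file)
[Estimation]
optimization_algorithm = "simple_bounds"
```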
print(results.short_summary())
Results for model b07estimation_monte_carlo
Nbr of parameters: 5
Sample size: 10719
Excluded data: 9
Final log likelihood: -8569.718
Akaike Information Criterion: 17149.44
Bayesian Information Criterion: 17185.83
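As a sanity check, the reported information criteria follow directly from the summary above, using AIC = 2k - 2*LL and BIC = k*ln(n) - 2*LL with k estimated parameters and sample size n:

```python
import math

# Figures reported in the estimation summary
k = 5            # number of estimated parameters
n = 10719        # sample size
ll = -8569.718   # final log likelihood

aic = 2 * k - 2 * ll        # Akaike Information Criterion
bic = k * math.log(n) - 2 * ll  # Bayesian Information Criterion
```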
pandas_results = results.get_estimated_parameters()
pandas_results
Total running time of the script: (7 minutes 31.644 seconds)