Mixture of logit with Monte-Carlo integration: 2000 Halton draws

Estimation of a mixture of logit models where the integral is approximated by Monte-Carlo integration with 2000 Halton draws.

Author: Michel Bierlaire, EPFL

Date: Mon Dec 11 08:10:40 2023
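In a mixture of logit models, the choice probability is an integral of a conditional logit probability over the distribution of the random coefficient. Monte-Carlo integration replaces that integral by the average of the conditional probability over R draws; Halton draws are used instead of pseudo-random numbers because they cover the unit interval more evenly, so fewer draws are typically needed for the same accuracy. The sketch below is not part of the original example: the utility functions, the attribute values, and the use of scipy.stats.qmc are assumptions made purely for illustration.

import numpy as np
from scipy.stats import norm, qmc

R = 2000  # number of draws, as in the example below

# Halton points on (0, 1), mapped to standard normal draws through the
# inverse normal CDF.
halton = qmc.Halton(d=1, scramble=False)
uniform_draws = halton.random(R).flatten()
uniform_draws = np.clip(uniform_draws, 1e-10, 1 - 1e-10)  # guard against 0 and 1
normal_draws = norm.ppf(uniform_draws)

# Illustrative (made-up) values, in the spirit of the estimates reported below.
b_time, b_time_s = -1.9, 1.2
asc_train = -0.5
time_car, time_train = 1.2, 1.0

def prob_car(xi):
    """Logit probability of choosing car, conditional on the draw xi."""
    b_time_rnd = b_time + b_time_s * xi
    v_car = b_time_rnd * time_car
    v_train = asc_train + b_time_rnd * time_train
    return np.exp(v_car) / (np.exp(v_car) + np.exp(v_train))

# Monte-Carlo approximation of the integral: average over the draws.
p_car = prob_car(normal_draws).mean()
print(f'Simulated probability of car: {p_car:.4f}')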

import biogeme.biogeme_logging as blog
from biogeme.expressions import bioDraws
from b07estimation_specification import get_biogeme
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b07estimation_monte_carlo_halton.py')
Example b07estimation_monte_carlo_halton.py
R = 2000
the_draws = bioDraws('b_time_rnd', 'NORMAL_HALTON2')
the_biogeme = get_biogeme(the_draws=the_draws, number_of_draws=R)
the_biogeme.modelName = 'b07estimation_monte_carlo_halton'
Biogeme parameters read from biogeme.toml.
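The draw type 'NORMAL_HALTON2' requests normally distributed Halton draws; as far as I understand, the suffix refers to the base of the underlying Halton sequence. The sketch below only illustrates the idea, generating a base-2 Halton sequence by hand and pushing it through the inverse normal CDF; it is not Biogeme's actual implementation, and details such as how many initial points are skipped may differ (see biogeme.draws).

import numpy as np
from scipy.stats import norm

def halton_sequence(n: int, base: int = 2, skip: int = 10) -> np.ndarray:
    """Van der Corput/Halton sequence in the given base (the skip value is illustrative)."""
    points = []
    for index in range(skip + 1, skip + n + 1):
        fraction, value, i = 1.0, 0.0, index
        while i > 0:
            fraction /= base
            value += fraction * (i % base)
            i //= base
        points.append(value)
    return np.array(points)

uniform_halton = halton_sequence(2000, base=2)
normal_halton = norm.ppf(uniform_halton)  # normally distributed Halton draws
print(normal_halton[:5])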
results = the_biogeme.estimate()
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds"
*** Initial values of the parameters are obtained from the file __b07estimation_monte_carlo_halton.iter
Cannot read file __b07estimation_monte_carlo_halton.iter. Statement is ignored.
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds"
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
Iter.         asc_car       asc_train          b_cost          b_time        b_time_s     Function    Relgrad   Radius      Rho
    0               1              -1              -1              -1               2      9.8e+03       0.15        1     0.23    +
    1               0           -0.79               0              -2             2.3        9e+03      0.065        1     0.32    +
    2            0.41           -0.65              -1              -2             2.2      8.7e+03      0.033        1     0.61    +
    3            0.41           -0.65              -1              -2             2.2      8.7e+03      0.033      0.5     -1.4    -
    4            0.41           -0.65              -1              -2             2.2      8.7e+03      0.033     0.25    0.042    -
    5            0.16           -0.44           -0.75            -2.3               2      8.6e+03      0.015     0.25     0.65    +
    6            0.33           -0.46           -0.94            -2.3             1.7      8.6e+03      0.019     0.25     0.31    +
    7            0.33           -0.46           -0.94            -2.3             1.7      8.6e+03      0.019     0.12    -0.44    -
    8             0.2           -0.33            -0.9            -2.2             1.7      8.6e+03     0.0096     0.12     0.44    +
    9            0.33           -0.38           -0.84            -2.1             1.6      8.6e+03     0.0097     0.12     0.36    +
   10            0.33           -0.38           -0.84            -2.1             1.6      8.6e+03     0.0097    0.062    -0.16    -
   11            0.33           -0.38           -0.84            -2.1             1.6      8.6e+03     0.0097    0.031    0.015    -
   12             0.3           -0.41           -0.81            -2.2             1.6      8.6e+03     0.0053    0.031     0.55    +
   13            0.27           -0.38           -0.84            -2.1             1.6      8.6e+03     0.0044    0.031     0.68    +
   14            0.24           -0.41           -0.87            -2.1             1.5      8.6e+03     0.0064    0.031     0.43    +
   15            0.27            -0.4           -0.86            -2.1             1.5      8.6e+03      0.003    0.031     0.72    +
   16            0.24            -0.4           -0.84            -2.1             1.5      8.6e+03     0.0024    0.031     0.73    +
   17            0.24           -0.42           -0.87              -2             1.5      8.6e+03     0.0019    0.031     0.77    +
   18             0.2           -0.43           -0.84              -2             1.4      8.6e+03     0.0031    0.031     0.58    +
   19            0.23           -0.43           -0.85              -2             1.4      8.6e+03     0.0017    0.031     0.82    +
   20             0.2           -0.43           -0.86              -2             1.4      8.6e+03     0.0022    0.031     0.68    +
   21            0.21           -0.44           -0.86              -2             1.3      8.6e+03     0.0013    0.031     0.79    +
   22            0.19           -0.44           -0.84            -1.9             1.3      8.6e+03     0.0018    0.031     0.67    +
   23             0.2           -0.45           -0.85            -1.9             1.3      8.6e+03      0.001    0.031     0.66    +
   24            0.18           -0.46           -0.85            -1.9             1.2      8.6e+03     0.0011    0.031     0.63    +
   25            0.18           -0.46           -0.85            -1.9             1.2      8.6e+03     0.0011    0.016    -0.45    -
   26            0.19           -0.46           -0.84            -1.9             1.2      8.6e+03    0.00037    0.016     0.43    +
   27            0.19           -0.46           -0.84            -1.9             1.2      8.6e+03    0.00037   0.0078    -0.13    -
   28            0.18           -0.46           -0.85            -1.9             1.2      8.6e+03     0.0002   0.0078     0.83    +
   29            0.18           -0.47           -0.84            -1.9             1.2      8.6e+03    0.00018   0.0078     0.66    +
   30            0.18           -0.47           -0.84            -1.9             1.2      8.6e+03    0.00018   0.0039     -2.4    -
   31            0.18           -0.47           -0.84            -1.9             1.2      8.6e+03    6.9e-05   0.0039     0.78    -
Results saved in file b07estimation_monte_carlo_halton.html
Results saved in file b07estimation_monte_carlo_halton.pickle
print(results.short_summary())
Results for model b07estimation_monte_carlo_halton
Nbr of parameters:              5
Sample size:                    10719
Excluded data:                  9
Final log likelihood:           -8570.963
Akaike Information Criterion:   17151.93
Bayesian Information Criterion: 17188.33
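As a quick sanity check, the information criteria reported above follow directly from the final log likelihood, the number of parameters, and the sample size; the small discrepancy in the last digit of the BIC comes from the rounding of the reported log likelihood.

import math

k = 5                 # number of estimated parameters
n = 10719             # sample size
loglike = -8570.963   # final log likelihood

aic = 2 * k - 2 * loglike            # Akaike Information Criterion
bic = k * math.log(n) - 2 * loglike  # Bayesian Information Criterion
print(f'AIC = {aic:.2f}, BIC = {bic:.2f}')  # approximately 17151.93 and 17188.32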
pandas_results = results.get_estimated_parameters()
pandas_results
               Value  Rob. Std err  Rob. t-test  Rob. p-value
asc_car     0.181419      0.035183     5.156434  2.516975e-07
asc_train  -0.466020      0.047991    -9.710632  0.000000e+00
b_cost     -0.846907      0.057766   -14.661000  0.000000e+00
b_time     -1.867054      0.076066   -24.545161  0.000000e+00
b_time_s    1.213239      0.087554    13.856997  0.000000e+00
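Assuming that the companion file b07estimation_specification defines the random time coefficient as b_time + b_time_s * xi with xi a standard normal draw (the usual specification in this series of examples, not shown in this script), the estimates above imply that roughly 94% of the population has a negative time coefficient.

from scipy.stats import norm

b_time = -1.867054
b_time_s = 1.213239
# P(b_time + b_time_s * xi < 0) = Phi(-b_time / |b_time_s|) for xi ~ N(0, 1).
share_negative = norm.cdf(-b_time / abs(b_time_s))
print(f'Share with a negative time coefficient: {share_negative:.3f}')  # about 0.94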


Total running time of the script: (6 minutes 39.726 seconds)
