.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/swissmetro/plot_b24halton_mixture.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_swissmetro_plot_b24halton_mixture.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_swissmetro_plot_b24halton_mixture.py:

Mixture of logit with Halton draws
==================================

Example of a mixture of logit models, using quasi Monte-Carlo integration
with Halton draws (base 5). The mixing distribution is normal.

:author: Michel Bierlaire, EPFL
:date: Wed Apr 12 18:21:13 2023

.. GENERATED FROM PYTHON SOURCE LINES 14-22

.. code-block:: Python

    import biogeme.biogeme_logging as blog
    import biogeme.biogeme as bio
    from biogeme import models
    from biogeme.expressions import Beta, bioDraws, MonteCarlo, log
    from biogeme.parameters import Parameters

.. GENERATED FROM PYTHON SOURCE LINES 23-24

See the data processing script: :ref:`swissmetro_data`.

.. GENERATED FROM PYTHON SOURCE LINES 24-41

.. code-block:: Python

    from swissmetro_data import (
        database,
        CHOICE,
        CAR_AV_SP,
        TRAIN_AV_SP,
        TRAIN_TT_SCALED,
        TRAIN_COST_SCALED,
        SM_TT_SCALED,
        SM_COST_SCALED,
        CAR_TT_SCALED,
        CAR_CO_SCALED,
        SM_AV,
    )

    logger = blog.get_screen_logger(level=blog.INFO)
    logger.info('Example b24halton_mixture.py')

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Example b24halton_mixture.py

.. GENERATED FROM PYTHON SOURCE LINES 42-43

Parameters to be estimated.

.. GENERATED FROM PYTHON SOURCE LINES 43-48

.. code-block:: Python

    ASC_CAR = Beta('ASC_CAR', 0, None, None, 0)
    ASC_TRAIN = Beta('ASC_TRAIN', 0, None, None, 0)
    ASC_SM = Beta('ASC_SM', 0, None, None, 1)
    B_COST = Beta('B_COST', 0, None, None, 0)

.. GENERATED FROM PYTHON SOURCE LINES 49-51

Define a random parameter, normally distributed, designed to be used
for Monte-Carlo simulation.

.. GENERATED FROM PYTHON SOURCE LINES 51-53

.. code-block:: Python

    B_TIME = Beta('B_TIME', 0, None, None, 0)

.. GENERATED FROM PYTHON SOURCE LINES 54-55

It is advised not to use 0 as the starting value for the following parameter.

.. GENERATED FROM PYTHON SOURCE LINES 55-56

.. code-block:: Python

    B_TIME_S = Beta('B_TIME_S', 1, None, None, 0)

.. GENERATED FROM PYTHON SOURCE LINES 57-59

Define a random parameter with a normal distribution, designed to be used
for quasi Monte-Carlo simulation with Halton draws (base 5).

.. GENERATED FROM PYTHON SOURCE LINES 59-61

.. code-block:: Python

    B_TIME_RND = B_TIME + B_TIME_S * bioDraws('b_time_rnd', 'NORMAL_HALTON5')

.. GENERATED FROM PYTHON SOURCE LINES 62-63

Definition of the utility functions.

.. GENERATED FROM PYTHON SOURCE LINES 63-67

.. code-block:: Python

    V1 = ASC_TRAIN + B_TIME_RND * TRAIN_TT_SCALED + B_COST * TRAIN_COST_SCALED
    V2 = ASC_SM + B_TIME_RND * SM_TT_SCALED + B_COST * SM_COST_SCALED
    V3 = ASC_CAR + B_TIME_RND * CAR_TT_SCALED + B_COST * CAR_CO_SCALED

.. GENERATED FROM PYTHON SOURCE LINES 68-69

Associate utility functions with the numbering of alternatives.

.. GENERATED FROM PYTHON SOURCE LINES 69-71

.. code-block:: Python

    V = {1: V1, 2: V2, 3: V3}

.. GENERATED FROM PYTHON SOURCE LINES 72-73

Associate the availability conditions with the alternatives.

.. GENERATED FROM PYTHON SOURCE LINES 73-75

.. code-block:: Python

    av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}

.. GENERATED FROM PYTHON SOURCE LINES 76-77

Conditional on b_time_rnd, we have a logit model (called the kernel).
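Across draws, ``B_TIME_RND`` thus follows a normal distribution with mean
``B_TIME`` and standard deviation ``B_TIME_S``. Writing :math:`\omega` for a
realization of ``b_time_rnd`` and :math:`V_{in}(\omega)` for the corresponding
utility of alternative :math:`i`, the kernel is the usual logit formula
(shown here only for reference; the notation is ours, not part of the
generated script):

.. math::

   P_n(i \mid \omega) =
   \frac{e^{V_{in}(\omega)}}{\sum_{j \in \mathcal{C}_n} e^{V_{jn}(\omega)}},

where :math:`\mathcal{C}_n` is the set of alternatives available to
individual :math:`n`.
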
.. GENERATED FROM PYTHON SOURCE LINES 77-79

.. code-block:: Python

    prob = models.logit(V, av, CHOICE)

.. GENERATED FROM PYTHON SOURCE LINES 80-81

We integrate over b_time_rnd using Monte-Carlo.

.. GENERATED FROM PYTHON SOURCE LINES 81-83

.. code-block:: Python

    logprob = log(MonteCarlo(prob))
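
The ``MonteCarlo`` operator replaces the integral over the normal mixing
distribution by an average over the :math:`R` Halton draws
:math:`\omega_1, \dots, \omega_R`, so the contribution of observation
:math:`n` to the log likelihood is approximated by

.. math::

   \log P_n(i) \approx \log \frac{1}{R} \sum_{r=1}^{R} P_n(i \mid \omega_r).
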
.. GENERATED FROM PYTHON SOURCE LINES 84-85

These notes will be included as such in the report file.

.. GENERATED FROM PYTHON SOURCE LINES 85-90

.. code-block:: Python

    USER_NOTES = (
        'Example of a mixture of logit models with three alternatives, '
        'approximated using Monte-Carlo integration with Halton draws.'
    )

.. GENERATED FROM PYTHON SOURCE LINES 91-94

As the objective is to illustrate the syntax, we calculate the Monte-Carlo
approximation with a small number of draws.

.. GENERATED FROM PYTHON SOURCE LINES 94-99

.. code-block:: Python

    the_biogeme = bio.BIOGEME(
        database, logprob, user_notes=USER_NOTES, number_of_draws=100, seed=1223
    )
    the_biogeme.modelName = 'b24halton_mixture'

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Biogeme parameters read from biogeme.toml.

.. GENERATED FROM PYTHON SOURCE LINES 100-101

Estimate the parameters

.. GENERATED FROM PYTHON SOURCE LINES 101-103

.. code-block:: Python

    results = the_biogeme.estimate()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds"
    *** Initial values of the parameters are obtained from the file __b24halton_mixture.iter
    Cannot read file __b24halton_mixture.iter. Statement is ignored.
    The number of draws (100) is low. The results may not be meaningful.
    As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds"
    Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
    ** Optimization: BFGS with trust region for simple bounds
    Iter.    ASC_CAR  ASC_TRAIN     B_COST     B_TIME   B_TIME_S   Function    Relgrad   Radius      Rho
        0          1         -1         -1         -1          2    6.1e+03       0.16        1     0.25   +
        1          0      -0.73       -0.4         -2          3    5.5e+03      0.049        1     0.36   +
        2       0.51      -0.94       -1.4       -2.3        2.6    5.4e+03      0.054        1     0.39   +
        3       0.51      -0.94       -1.4       -2.3        2.6    5.4e+03      0.054      0.5    -0.15   -
        4     0.0062      -0.44       -1.1       -2.8        2.6    5.3e+03       0.03      0.5      0.5   +
        5       0.33     -0.089       -1.6       -2.6        2.5    5.3e+03      0.046      0.5     0.13   +
        6       0.33     -0.089       -1.6       -2.6        2.5    5.3e+03      0.046     0.25    -0.18   -
        7       0.26      -0.34       -1.4       -2.9        2.3    5.2e+03      0.022     0.25     0.65   +
        8       0.23       -0.3       -1.2       -2.6        2.2    5.2e+03     0.0087     0.25     0.52   +
        9       0.23       -0.3       -1.2       -2.6        2.2    5.2e+03     0.0087     0.12     -2.2   -
       10       0.23       -0.3       -1.2       -2.6        2.2    5.2e+03     0.0087    0.062    -0.13   -
       11       0.29      -0.36       -1.3       -2.6        2.1    5.2e+03     0.0072    0.062     0.31   +
       12       0.23       -0.3       -1.4       -2.6        2.1    5.2e+03     0.0065    0.062     0.58   +
       13       0.21      -0.36       -1.3       -2.6          2    5.2e+03      0.011    0.062     0.32   +
       14       0.22      -0.33       -1.3       -2.5          2    5.2e+03     0.0034     0.62     0.94   ++
       15       0.22      -0.33       -1.3       -2.5          2    5.2e+03     0.0034     0.31   -0.026   -
       16       0.13      -0.39       -1.3       -2.3        1.7    5.2e+03     0.0039     0.31     0.58   +
       17       0.13      -0.39       -1.3       -2.3        1.7    5.2e+03     0.0039     0.16     -6.3   -
       18       0.13      -0.39       -1.3       -2.3        1.7    5.2e+03     0.0039    0.078     -4.7   -
       19       0.13      -0.39       -1.3       -2.3        1.7    5.2e+03     0.0039    0.039     -2.2   -
       20       0.13      -0.39       -1.3       -2.3        1.7    5.2e+03     0.0039     0.02    -0.15   -
       21       0.15      -0.41       -1.3       -2.3        1.7    5.2e+03    0.00097     0.02     0.41   +
       22       0.14      -0.39       -1.3       -2.3        1.7    5.2e+03    0.00075     0.02     0.29   +
       23       0.15       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00083     0.02     0.41   +
       24       0.15       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00083   0.0098   -0.089   -
       25       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00036   0.0098     0.46   +
       26       0.14      -0.41       -1.3       -2.3        1.7    5.2e+03    0.00064   0.0098     0.28   +
       27       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00031   0.0098     0.28   +
       28       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00031   0.0049    -0.24   -
       29       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00031   0.0024    -0.83   -
       30       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00047   0.0024      0.1   +
       31       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00023    0.024     0.94   ++
       32       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00023    0.012     -8.4   -
       33       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00023   0.0061     -2.8   -
       34       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    0.00023   0.0031    -0.17   -
       35       0.14       -0.4       -1.3       -2.3        1.7    5.2e+03    2.8e-05   0.0031     0.79   -
    Results saved in file b24halton_mixture.html
    Results saved in file b24halton_mixture.pickle

.. GENERATED FROM PYTHON SOURCE LINES 104-106

.. code-block:: Python

    print(results.short_summary())

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Results for model b24halton_mixture
    Nbr of parameters:              5
    Sample size:                    6768
    Excluded data:                  3960
    Final log likelihood:           -5215.687
    Akaike Information Criterion:   10441.37
    Bayesian Information Criterion: 10475.47

.. GENERATED FROM PYTHON SOURCE LINES 107-109

.. code-block:: Python

    pandas_results = results.get_estimated_parameters()
    pandas_results
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                  Value  Rob. Std err  Rob. t-test  Rob. p-value
    ASC_CAR    0.136579      0.051823     2.635494  8.401497e-03
    ASC_TRAIN -0.403435      0.065704    -6.140162  8.243723e-10
    B_COST    -1.284767      0.086275   -14.891565  0.000000e+00
    B_TIME    -2.258401      0.117811   -19.169680  0.000000e+00
    B_TIME_S   1.660487      0.133546    12.433865  0.000000e+00
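
The ``'NORMAL_HALTON5'`` draws behind these results are obtained by generating
a Halton sequence in base 5 and transforming it through the inverse CDF of the
standard normal distribution. The standalone sketch below only illustrates that
idea with the standard library; it is not Biogeme's internal implementation
(which, among other things, manages the draws per observation and may skip
initial elements of the sequence). In the model above, the single ``bioDraws``
line is all that is needed, and the number of draws is controlled by the
``number_of_draws`` argument of ``bio.BIOGEME``.

.. code-block:: Python

    from statistics import NormalDist

    def halton(index: int, base: int = 5) -> float:
        """Element of the Halton sequence in the given base (index starts at 1)."""
        result = 0.0
        fraction = 1.0
        while index > 0:
            fraction /= base
            result += fraction * (index % base)
            index //= base
        return result

    # Quasi-uniform numbers in (0, 1): 1/5, 2/5, 3/5, 4/5, 1/25, 6/25, ...
    uniform_draws = [halton(i, base=5) for i in range(1, 11)]

    # Transformed into draws from the standard normal distribution.
    normal_draws = [NormalDist().inv_cdf(u) for u in uniform_draws]
    print([round(x, 3) for x in normal_draws])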


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 12.833 seconds)

.. _sphx_glr_download_auto_examples_swissmetro_plot_b24halton_mixture.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_b24halton_mixture.ipynb <plot_b24halton_mixture.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_b24halton_mixture.py <plot_b24halton_mixture.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_b24halton_mixture.zip <plot_b24halton_mixture.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_