.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/swissmetro/plot_b17a_lognormal_mixture.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_swissmetro_plot_b17a_lognormal_mixture.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_swissmetro_plot_b17a_lognormal_mixture.py:


.. _plot_b17a_lognormal_mixture:

17a. Mixture with lognormal distribution
========================================

Example of a mixture of logit models, using Monte-Carlo integration.
The mixing distribution is a log normal.

Michel Bierlaire, EPFL
Thu Jun 26 2025, 15:31:41

.. GENERATED FROM PYTHON SOURCE LINES 13-25

.. code-block:: Python

    from IPython.core.display_functions import display

    import biogeme.biogeme_logging as blog
    from biogeme.biogeme import BIOGEME
    from biogeme.expressions import Beta, Draws, MonteCarlo, exp, log
    from biogeme.models import logit
    from biogeme.results_processing import (
        EstimationResults,
        get_pandas_estimated_parameters,
    )

.. GENERATED FROM PYTHON SOURCE LINES 26-27

See the data processing script: :ref:`swissmetro_data`.

.. GENERATED FROM PYTHON SOURCE LINES 27-44

.. code-block:: Python

    from swissmetro_data import (
        CAR_AV_SP,
        CAR_CO_SCALED,
        CAR_TT_SCALED,
        CHOICE,
        SM_AV,
        SM_COST_SCALED,
        SM_TT_SCALED,
        TRAIN_AV_SP,
        TRAIN_COST_SCALED,
        TRAIN_TT_SCALED,
        database,
    )

    logger = blog.get_screen_logger(level=blog.INFO)
    logger.info('Example b17lognormal_mixture.py')

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Example b17lognormal_mixture.py

.. GENERATED FROM PYTHON SOURCE LINES 45-46

Parameters to be estimated.

.. GENERATED FROM PYTHON SOURCE LINES 46-51

.. code-block:: Python

    asc_car = Beta('asc_car', 0, None, None, 0)
    asc_train = Beta('asc_train', 0, None, None, 0)
    asc_sm = Beta('asc_sm', 0, None, None, 1)
    b_cost = Beta('b_cost', 0, None, None, 0)

.. GENERATED FROM PYTHON SOURCE LINES 52-54

Define a random parameter, normally distributed, designed to be used
for Monte-Carlo simulation.

.. GENERATED FROM PYTHON SOURCE LINES 54-56

.. code-block:: Python

    b_time = Beta('b_time', 0, None, None, 0)

.. GENERATED FROM PYTHON SOURCE LINES 57-58

It is advised not to use 0 as a starting value for the following parameter.

.. GENERATED FROM PYTHON SOURCE LINES 58-60

.. code-block:: Python

    b_time_s = Beta('b_time_s', 1, -2, 2, 0)

.. GENERATED FROM PYTHON SOURCE LINES 61-63

Define a random parameter, log normally distributed, designed to be used
for Monte-Carlo simulation.

.. GENERATED FROM PYTHON SOURCE LINES 63-65

.. code-block:: Python

    b_time_rnd = -exp(b_time + b_time_s * Draws('b_time_rnd', 'NORMAL'))

.. GENERATED FROM PYTHON SOURCE LINES 66-67

Definition of the utility functions.

.. GENERATED FROM PYTHON SOURCE LINES 67-71

.. code-block:: Python

    v_train = asc_train + b_time_rnd * TRAIN_TT_SCALED + b_cost * TRAIN_COST_SCALED
    v_swissmetro = asc_sm + b_time_rnd * SM_TT_SCALED + b_cost * SM_COST_SCALED
    v_car = asc_car + b_time_rnd * CAR_TT_SCALED + b_cost * CAR_CO_SCALED

.. GENERATED FROM PYTHON SOURCE LINES 72-73

Associate the utility functions with the numbering of alternatives.

.. GENERATED FROM PYTHON SOURCE LINES 73-75

.. code-block:: Python

    v = {1: v_train, 2: v_swissmetro, 3: v_car}

.. GENERATED FROM PYTHON SOURCE LINES 76-77

Associate the availability conditions with the alternatives.

.. GENERATED FROM PYTHON SOURCE LINES 77-79

.. code-block:: Python

    av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}

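Because the draws are standard normal, the random time coefficient defined
above is minus a lognormal random variable:

.. math::

   b_{\text{time,rnd}} = -\exp\left(b_{\text{time}} + b_{\text{time,s}} \, \xi\right),
   \qquad \xi \sim N(0, 1),

so the coefficient is negative for every draw, with median
:math:`-\exp(b_{\text{time}})` and mean
:math:`-\exp(b_{\text{time}} + b_{\text{time,s}}^2 / 2)`.
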
.. GENERATED FROM PYTHON SOURCE LINES 80-81

Conditional on b_time_rnd, we have a logit model (called the kernel).

.. GENERATED FROM PYTHON SOURCE LINES 81-83

.. code-block:: Python

    conditional_probability = logit(v, av, CHOICE)

.. GENERATED FROM PYTHON SOURCE LINES 84-85

We integrate over b_time_rnd using Monte-Carlo: the choice probability is
approximated by the average of the kernel over the draws of b_time_rnd.

.. GENERATED FROM PYTHON SOURCE LINES 85-87

.. code-block:: Python

    log_probability = log(MonteCarlo(conditional_probability))

.. GENERATED FROM PYTHON SOURCE LINES 88-91

As the objective is to illustrate the syntax, we calculate the Monte-Carlo
approximation with a small number of draws.

.. GENERATED FROM PYTHON SOURCE LINES 91-94

.. code-block:: Python

    the_biogeme = BIOGEME(database, log_probability, number_of_draws=10_000, seed=1223)
    the_biogeme.model_name = 'b17a_lognormal_mixture'

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Biogeme parameters read from biogeme.toml.

.. GENERATED FROM PYTHON SOURCE LINES 95-96

Estimate the parameters.

.. GENERATED FROM PYTHON SOURCE LINES 96-103

.. code-block:: Python

    try:
        results = EstimationResults.from_yaml_file(
            filename=f'saved_results/{the_biogeme.model_name}.yaml'
        )
    except FileNotFoundError:
        results = the_biogeme.estimate()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    *** Initial values of the parameters are obtained from the file __b17a_lognormal_mixture.iter
    Cannot read file __b17a_lognormal_mixture.iter. Statement is ignored.
    Starting values for the algorithm: {}
    As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the algorithm from "automatic" to "simple_bounds" in the TOML file.
    Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
    ** Optimization: BFGS with trust region for simple bounds
    Iter.    asc_train     b_time   b_time_s     b_cost    asc_car   Function    Relgrad     Radius        Rho
        0            0          0          1          0          0    5.7e+03      0.096        0.5   -0.00064   -
        1         -0.5        0.5        1.5       -0.5        0.5    5.4e+03      0.042        0.5       0.39   +
        2        -0.34       0.69        1.5         -1          0    5.3e+03       0.04        0.5       0.34   +
        3        -0.34       0.69        1.5         -1          0    5.3e+03       0.04       0.25      -0.63   -
        4        -0.39       0.44        1.3       -1.2       0.25    5.2e+03      0.017       0.25       0.49   +
        5        -0.39       0.61        1.3       -1.5      0.084    5.2e+03     0.0097       0.25       0.25   +
        6        -0.39       0.61        1.3       -1.5      0.084    5.2e+03     0.0097       0.12       -2.4   -
        7        -0.39       0.61        1.3       -1.5      0.084    5.2e+03     0.0097      0.062      0.017   -
        8        -0.32       0.55        1.2       -1.4       0.15    5.2e+03     0.0067      0.062       0.58   +
        9        -0.39       0.61        1.2       -1.4       0.17    5.2e+03     0.0065      0.062       0.15   +
       10        -0.33       0.57        1.3       -1.3       0.17    5.2e+03     0.0029      0.062       0.39   +
       11        -0.33       0.57        1.3       -1.3       0.17    5.2e+03     0.0029      0.031       -1.2   -
       12        -0.36       0.57        1.2       -1.4        0.2    5.2e+03     0.0036      0.031       0.23   +
       13        -0.34       0.58        1.2       -1.4       0.17    5.2e+03     0.0014      0.031       0.61   +
       14        -0.34       0.58        1.2       -1.4       0.17    5.2e+03     0.0014      0.016       -3.7   -
       15        -0.34       0.58        1.2       -1.4       0.17    5.2e+03     0.0014     0.0078      -0.65   -
       16        -0.35       0.57        1.2       -1.4       0.17    5.2e+03    0.00074     0.0078       0.49   +
       17        -0.35       0.58        1.2       -1.4       0.17    5.2e+03    0.00022     0.0078       0.65   +
       18        -0.35       0.58        1.2       -1.4       0.17    5.2e+03    0.00022     0.0039      -0.37   -
       19        -0.35       0.58        1.2       -1.4       0.18    5.2e+03    0.00023     0.0039       0.34   +
       20        -0.35       0.58        1.2       -1.4       0.18    5.2e+03    0.00023      0.002      -0.78   -
       21        -0.35       0.57        1.2       -1.4       0.17    5.2e+03     0.0002      0.002       0.13   +
       22        -0.35       0.58        1.2       -1.4       0.17    5.2e+03    0.00013      0.002       0.51   +
       23        -0.35       0.58        1.2       -1.4       0.17    5.2e+03    0.00013    0.00098      -0.59   -
       24        -0.35       0.58        1.2       -1.4       0.17    5.2e+03    0.00013    0.00049     -0.037   -
       25        -0.35       0.57        1.2       -1.4       0.17    5.2e+03    3.6e-05    0.00049       0.65   +
       26        -0.35       0.57        1.2       -1.4       0.17    5.2e+03    3.6e-05    0.00024      0.074   -
       27        -0.35       0.57        1.2       -1.4       0.17    5.2e+03    1.6e-05    0.00024       0.61   +
       28        -0.35       0.57        1.2       -1.4       0.17    5.2e+03      2e-05    0.00024       0.41   +
       29        -0.35       0.58        1.2       -1.4       0.17    5.2e+03    7.6e-06    0.00024       0.37   +
       30        -0.35       0.58        1.2       -1.4       0.17    5.2e+03    4.8e-06    0.00024       0.43   +
    Optimization algorithm has converged.
    Relative gradient: 4.8025028162905605e-06
    Cause of termination: Relative gradient = 4.8e-06 <= 6.1e-06
    Number of function evaluations: 70
    Number of gradient evaluations: 39
    Number of hessian evaluations: 0
    Algorithm: BFGS with trust region for simple bound constraints
    Number of iterations: 31
    Proportion of Hessian calculation: 0/19 = 0.0%
    Optimization time: 0:01:21.466994
    Calculate second derivatives and BHHH
    File b17a_lognormal_mixture.html has been generated.
    File b17a_lognormal_mixture.yaml has been generated.

.. GENERATED FROM PYTHON SOURCE LINES 104-106

.. code-block:: Python

    print(results.short_summary())

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Results for model b17a_lognormal_mixture
    Nbr of parameters:              5
    Sample size:                    6768
    Excluded data:                  3960
    Final log likelihood:           -5231.272
    Akaike Information Criterion:   10472.54
    Bayesian Information Criterion: 10506.64

.. GENERATED FROM PYTHON SOURCE LINES 107-109

.. code-block:: Python

    pandas_results = get_pandas_estimated_parameters(estimation_results=results)
    display(pandas_results)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

             Name     Value  Robust std err.  Robust t-stat.  Robust p-value
    0   asc_train -0.346430         0.073266       -4.728357    2.263435e-06
    1      b_time  0.575014         0.071294        8.065422    6.661338e-16
    2    b_time_s  1.239151         0.128334        9.655646    0.000000e+00
    3      b_cost -1.381117         0.097788      -14.123575    0.000000e+00
    4     asc_car  0.173944         0.062414        2.786920    5.321162e-03

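To interpret these estimates, they can be mapped back to the distribution of
the random time coefficient. The following sketch is not part of the generated
script; it simply plugs the estimated values of b_time and b_time_s reported
above into the lognormal formulas (the variable names are illustrative).

.. code-block:: Python

    import math

    # Estimated values copied from the table above.
    b_time_hat = 0.575014
    b_time_s_hat = 1.239151

    # The coefficient is -exp(b_time + b_time_s * xi) with xi ~ N(0, 1),
    # i.e. minus a lognormal variable with location b_time and scale b_time_s.
    median_coefficient = -math.exp(b_time_hat)
    mean_coefficient = -math.exp(b_time_hat + b_time_s_hat**2 / 2)
    print(f'Median of the random time coefficient: {median_coefficient:.3f}')
    print(f'Mean of the random time coefficient:   {mean_coefficient:.3f}')

The mean is substantially more negative than the median, reflecting the long
tail of the lognormal distribution implied by the estimated scale parameter.
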
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (3 minutes 0.119 seconds)


.. _sphx_glr_download_auto_examples_swissmetro_plot_b17a_lognormal_mixture.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_b17a_lognormal_mixture.ipynb <plot_b17a_lognormal_mixture.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_b17a_lognormal_mixture.py <plot_b17a_lognormal_mixture.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_b17a_lognormal_mixture.zip <plot_b17a_lognormal_mixture.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_