.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/bayesian_swissmetro/plot_b06_unif_mixture.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_bayesian_swissmetro_plot_b06_unif_mixture.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_bayesian_swissmetro_plot_b06_unif_mixture.py:

6. Mixture of logit models: uniform distribution
================================================

Example of a uniform mixture of logit models, using Bayesian inference.

Michel Bierlaire, EPFL
Thu Nov 20 2025, 11:30:45

.. GENERATED FROM PYTHON SOURCE LINES 12-20

.. code-block:: Python

    import biogeme.biogeme_logging as blog
    from IPython.core.display_functions import display
    from biogeme.bayesian_estimation import (
        BayesianResults,
        get_pandas_estimated_parameters,
    )
    from biogeme.biogeme import BIOGEME
    from biogeme.expressions import Beta, DistributedParameter, Draws
    from biogeme.models import loglogit

.. GENERATED FROM PYTHON SOURCE LINES 21-22

See the data processing script: :ref:`swissmetro_data`.

.. GENERATED FROM PYTHON SOURCE LINES 22-39

.. code-block:: Python

    from swissmetro_data import (
        CAR_AV_SP,
        CAR_CO_SCALED,
        CAR_TT_SCALED,
        CHOICE,
        SM_AV,
        SM_COST_SCALED,
        SM_TT_SCALED,
        TRAIN_AV_SP,
        TRAIN_COST_SCALED,
        TRAIN_TT_SCALED,
        database,
    )

    logger = blog.get_screen_logger(level=blog.INFO)
    logger.info('Example b06_unif_mixture.py')

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Example b06_unif_mixture.py

.. GENERATED FROM PYTHON SOURCE LINES 40-41

Parameters to be estimated.

.. GENERATED FROM PYTHON SOURCE LINES 41-46

.. code-block:: Python

    asc_car = Beta('asc_car', 0, None, None, 0)
    asc_train = Beta('asc_train', 0, None, None, 0)
    asc_sm = Beta('asc_sm', 0, None, None, 1)
    b_cost = Beta('b_cost', 0, None, None, 0)
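The constant ``asc_sm`` is fixed to zero (the last argument of ``Beta`` is 1) for
identification: only differences of utilities enter the logit probabilities, so one
alternative-specific constant must be normalized. A minimal NumPy sketch, with made-up
utility values, illustrates this shift invariance:

```python
import numpy as np

# Hypothetical utilities for train, Swissmetro, and car for one observation.
v = np.array([-0.7, 0.3, -0.1])


def logit_probabilities(utilities: np.ndarray) -> np.ndarray:
    """Logit choice probabilities: a numerically stabilised softmax."""
    e = np.exp(utilities - utilities.max())
    return e / e.sum()


p_original = logit_probabilities(v)
p_shifted = logit_probabilities(v + 5.0)  # add the same constant to every utility

# The probabilities are identical: only utility differences matter.
print(np.allclose(p_original, p_shifted))  # True
```

This is why fixing one constant loses no generality: any common shift of the three
constants produces exactly the same choice probabilities.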
.. GENERATED FROM PYTHON SOURCE LINES 47-49

Define a random parameter, uniformly distributed, to be used for Monte-Carlo simulation.

.. GENERATED FROM PYTHON SOURCE LINES 49-54

.. code-block:: Python

    b_time = Beta('b_time', 0, None, None, 0)
    b_time_s = Beta('b_time_s', 1, None, None, 0)
    b_time_eps = Draws('b_time_eps', 'UNIFORMSYM')
    b_time_rnd = DistributedParameter('b_time_rnd', b_time + b_time_s * b_time_eps)

.. GENERATED FROM PYTHON SOURCE LINES 55-56

Definition of the utility functions.

.. GENERATED FROM PYTHON SOURCE LINES 56-60

.. code-block:: Python

    v_train = asc_train + b_time_rnd * TRAIN_TT_SCALED + b_cost * TRAIN_COST_SCALED
    v_swissmetro = asc_sm + b_time_rnd * SM_TT_SCALED + b_cost * SM_COST_SCALED
    v_car = asc_car + b_time_rnd * CAR_TT_SCALED + b_cost * CAR_CO_SCALED

.. GENERATED FROM PYTHON SOURCE LINES 61-62

Associate the utility functions with the numbering of alternatives.

.. GENERATED FROM PYTHON SOURCE LINES 62-64

.. code-block:: Python

    v = {1: v_train, 2: v_swissmetro, 3: v_car}

.. GENERATED FROM PYTHON SOURCE LINES 65-66

Associate the availability conditions with the alternatives.

.. GENERATED FROM PYTHON SOURCE LINES 66-68

.. code-block:: Python

    av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}

.. GENERATED FROM PYTHON SOURCE LINES 69-73

To obtain the log likelihood in the classical setting, we would first calculate the
kernel conditional on b_time_rnd, and then integrate over b_time_rnd using Monte-Carlo.
However, when performing Bayesian estimation, the random parameters are explicitly
simulated. Therefore, what the algorithm needs is the *conditional* log likelihood,
which is simply a (log) logit here.

.. GENERATED FROM PYTHON SOURCE LINES 73-76

.. code-block:: Python

    conditional_log_likelihood = loglogit(v, av, CHOICE)

.. GENERATED FROM PYTHON SOURCE LINES 77-78

Create the Biogeme object.

.. GENERATED FROM PYTHON SOURCE LINES 78-81
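For intuition, here is how the classical approach described above would approximate the
unconditional choice probability by Monte-Carlo: draw the random coefficient, evaluate
the logit kernel conditional on each draw, and average. This sketch uses made-up
coefficient values and assumes that ``'UNIFORMSYM'`` denotes symmetric uniform draws on
[-1, 1]; it is an illustration in plain NumPy, not Biogeme code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up data for one observation (train, Swissmetro, car).
travel_time = np.array([1.2, 0.8, 1.0])
cost = np.array([0.5, 0.9, 0.6])
asc = np.array([-0.4, 0.0, 0.1])
b_cost, b_time_mean, b_time_spread = -1.3, -2.3, 1.5
chosen = 1  # index of the chosen alternative

# Symmetric uniform draws on [-1, 1] (the assumed meaning of 'UNIFORMSYM').
draws = rng.uniform(-1.0, 1.0, size=1000)
b_time_draws = b_time_mean + b_time_spread * draws

# Conditional logit probabilities, one row per draw (shape: draws x alternatives).
v_draws = asc + np.outer(b_time_draws, travel_time) + b_cost * cost
p = np.exp(v_draws - v_draws.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)

# Classical simulated likelihood: average the kernel over the draws.
simulated_probability = p[:, chosen].mean()
print(f'Simulated probability: {simulated_probability:.3f}')
```

The Bayesian sampler sidesteps this averaging: since each draw of b_time_rnd is part of
the simulated state, only the conditional kernel (a single logit) is ever evaluated.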
.. code-block:: Python

    the_biogeme = BIOGEME(database, conditional_log_likelihood)
    the_biogeme.model_name = 'b06_unif_mixture'

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Biogeme parameters read from biogeme.toml.

.. GENERATED FROM PYTHON SOURCE LINES 82-83

Estimate the parameters.

.. GENERATED FROM PYTHON SOURCE LINES 83-90

.. code-block:: Python

    try:
        results = BayesianResults.from_netcdf(
            filename=f'saved_results/{the_biogeme.model_name}.nc'
        )
    except FileNotFoundError:
        results = the_biogeme.bayesian_estimation()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Loaded NetCDF file size: 1.8 GB
    load finished in 9239 ms (9.24 s)

.. GENERATED FROM PYTHON SOURCE LINES 91-93

.. code-block:: Python

    print(results.short_summary())

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    posterior_predictive_loglike finished in 238 ms
    expected_log_likelihood finished in 11 ms
    best_draw_log_likelihood finished in 10 ms
    /Users/bierlair/python_envs/venv313/lib/python3.13/site-packages/arviz/stats/stats.py:1667: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail. See http://arxiv.org/abs/1507.04544 for details
      warnings.warn(
    waic_res finished in 640 ms
    waic finished in 641 ms
    /Users/bierlair/python_envs/venv313/lib/python3.13/site-packages/arviz/stats/stats.py:797: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations.
      warnings.warn(
    loo_res finished in 23889 ms (23.89 s)
    loo finished in 23889 ms (23.89 s)
    Sample size                                                    6768
    Sampler                                                        NUTS
    Number of chains                                               4
    Number of draws per chain                                      2000
    Total number of draws                                          8000
    Acceptance rate target                                         0.9
    Run time                                                       0:04:58.051755
    Posterior predictive log-likelihood (sum of log mean p)        -4187.76
    Expected log-likelihood E[log L(Y|θ)]                          -4536.31
    Best-draw log-likelihood (posterior upper bound)               -4297.64
    WAIC (Widely Applicable Information Criterion)                 -5095.76
    WAIC Standard Error                                            50.49
    Effective number of parameters (p_WAIC)                        907.99
    LOO (Leave-One-Out Cross-Validation)                           -5205.64
    LOO Standard Error                                             52.53
    Effective number of parameters (p_LOO)                         1017.88

.. GENERATED FROM PYTHON SOURCE LINES 94-96

.. code-block:: Python

    pandas_results = get_pandas_estimated_parameters(estimation_results=results)
    display(pandas_results)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Diagnostics computation took 89.8 seconds (cached).
            Name  Value (mean)  Value (median)  ...     R hat   ESS (bulk)   ESS (tail)
    0  asc_train     -0.384435       -0.384729  ...  1.000585  4352.480733  5123.475119
    1     b_time     -2.333551       -2.330126  ...  1.001665  1539.728156  2949.566052
    2   b_time_s      1.451534        2.801582  ...  1.530867     7.185682    28.846727
    3     b_cost     -1.281045       -1.279362  ...  0.999880  6857.447568  5415.130446
    4    asc_car      0.147908        0.148199  ...  1.000774  3179.022325  4611.038479

    [5 rows x 12 columns]

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (2 minutes 3.929 seconds)

.. _sphx_glr_download_auto_examples_bayesian_swissmetro_plot_b06_unif_mixture.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_b06_unif_mixture.ipynb <plot_b06_unif_mixture.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_b06_unif_mixture.py <plot_b06_unif_mixture.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_b06_unif_mixture.zip <plot_b06_unif_mixture.zip>`

.. only:: html
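Note that b_time_s shows an R hat of about 1.53 with a bulk ESS near 7, a clear sign
that the chains have not mixed for that parameter (consistent with the WAIC and LOO
warnings above). A quick pandas screen, using the values reported in the table and
conventional rules of thumb for the thresholds (not Biogeme defaults), isolates it:

```python
import pandas as pd

# Diagnostics copied from the table above (rounded).
diagnostics = pd.DataFrame(
    {
        'name': ['asc_train', 'b_time', 'b_time_s', 'b_cost', 'asc_car'],
        'r_hat': [1.0006, 1.0017, 1.5309, 0.9999, 1.0008],
        'ess_bulk': [4352.5, 1539.7, 7.2, 6857.4, 3179.0],
    }
)

# Common rule of thumb: flag parameters with R hat above 1.01 or very low bulk ESS.
suspect = diagnostics[(diagnostics['r_hat'] > 1.01) | (diagnostics['ess_bulk'] < 400)]
print(suspect['name'].tolist())  # ['b_time_s']
```

A symmetric uniform spread parameter is identified only up to sign, which can make its
posterior multimodal; such a screen helps decide whether the draws can be trusted.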
    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_