6. Mixture of logit models: uniform distribution
Example of a uniform mixture of logit models, using Bayesian inference.
Michel Bierlaire, EPFL. Thu Nov 20 2025, 11:30:45
import biogeme.biogeme_logging as blog
from IPython.core.display_functions import display
from biogeme.bayesian_estimation import BayesianResults, get_pandas_estimated_parameters
from biogeme.biogeme import BIOGEME
from biogeme.expressions import Beta, DistributedParameter, Draws
from biogeme.models import loglogit
See the data processing script: Data preparation for Swissmetro.
from swissmetro_data import (
CAR_AV_SP,
CAR_CO_SCALED,
CAR_TT_SCALED,
CHOICE,
SM_AV,
SM_COST_SCALED,
SM_TT_SCALED,
TRAIN_AV_SP,
TRAIN_COST_SCALED,
TRAIN_TT_SCALED,
database,
)
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b06_unif_mixture.py')
Example b06_unif_mixture.py
Parameters to be estimated.
# Beta(name, starting value, lower bound, upper bound, 0=estimated / 1=fixed)
asc_car = Beta('asc_car', 0, None, None, 0)
asc_train = Beta('asc_train', 0, None, None, 0)
asc_sm = Beta('asc_sm', 0, None, None, 1)  # fixed at 0 for identification
b_cost = Beta('b_cost', 0, None, None, 0)
Define a random parameter, uniformly distributed, designed to be used for Monte-Carlo simulation.
b_time = Beta('b_time', 0, None, None, 0)
b_time_s = Beta('b_time_s', 1, None, None, 0)
b_time_eps = Draws('b_time_eps', 'UNIFORMSYM')  # symmetric uniform draws on [-1, 1]
b_time_rnd = DistributedParameter('b_time_rnd', b_time + b_time_s * b_time_eps)
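As a quick sketch of what this random coefficient represents (not Biogeme's internals): the symmetric uniform draws lie on [-1, 1], so b_time_rnd is uniform on [b_time - b_time_s, b_time + b_time_s]. The numerical values below are illustrative assumptions, not estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
b_time_mean, b_time_spread = -2.3, 1.5  # illustrative values only
eps = rng.uniform(-1.0, 1.0, size=10_000)  # mimics 'UNIFORMSYM' draws
b_time_rnd = b_time_mean + b_time_spread * eps

# The simulated coefficient stays within the implied bounds and is centered
# (approximately) on b_time_mean.
print(b_time_rnd.min() >= b_time_mean - b_time_spread)  # True
print(b_time_rnd.max() <= b_time_mean + b_time_spread)  # True
```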
Definition of the utility functions.
v_train = asc_train + b_time_rnd * TRAIN_TT_SCALED + b_cost * TRAIN_COST_SCALED
v_swissmetro = asc_sm + b_time_rnd * SM_TT_SCALED + b_cost * SM_COST_SCALED
v_car = asc_car + b_time_rnd * CAR_TT_SCALED + b_cost * CAR_CO_SCALED
Associate utility functions with the numbering of alternatives.
v = {1: v_train, 2: v_swissmetro, 3: v_car}
Associate the availability conditions with the alternatives.
av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}
In order to obtain the log likelihood, we would first calculate the kernel conditional on b_time_rnd, and then integrate over b_time_rnd using Monte Carlo. However, when performing Bayesian estimation, the random parameters are simulated explicitly by the sampler. Therefore, what the algorithm needs is the conditional log likelihood, which here is simply a (log) logit.
conditional_log_likelihood = loglogit(v, av, CHOICE)
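To make the conditional likelihood concrete, here is a minimal hand computation of the (log) logit probability for a single hypothetical observation. The utility values, availabilities, and chosen alternative below are invented for illustration only.

```python
import math

# Hypothetical conditional utilities (b_time_rnd already fixed at some draw).
v = {1: -0.5, 2: 0.2, 3: -0.1}
av = {1: 1, 2: 1, 3: 1}  # all alternatives available
choice = 2

# Log logit: chosen utility minus the log-sum over available alternatives.
denom = sum(math.exp(v[j]) for j in v if av[j])
loglik = v[choice] - math.log(denom)
print(round(loglik, 4))  # roughly -0.8053
```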
Create the Biogeme object.
the_biogeme = BIOGEME(database, conditional_log_likelihood)
the_biogeme.model_name = 'b06_unif_mixture'
Biogeme parameters read from biogeme.toml.
Estimate the parameters.
try:
    # Reuse results saved by a previous run, if available.
    results = BayesianResults.from_netcdf(
        filename=f'saved_results/{the_biogeme.model_name}.nc'
    )
except FileNotFoundError:
    # Otherwise, run the Bayesian (MCMC) estimation.
    results = the_biogeme.bayesian_estimation()
Loaded NetCDF file size: 1.8 GB
load finished in 9239 ms (9.24 s)
print(results.short_summary())
posterior_predictive_loglike finished in 238 ms
expected_log_likelihood finished in 11 ms
best_draw_log_likelihood finished in 10 ms
/Users/bierlair/python_envs/venv313/lib/python3.13/site-packages/arviz/stats/stats.py:1667: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail.
See http://arxiv.org/abs/1507.04544 for details
warnings.warn(
waic_res finished in 640 ms
waic finished in 641 ms
/Users/bierlair/python_envs/venv313/lib/python3.13/site-packages/arviz/stats/stats.py:797: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations.
warnings.warn(
loo_res finished in 23889 ms (23.89 s)
loo finished in 23889 ms (23.89 s)
Sample size 6768
Sampler NUTS
Number of chains 4
Number of draws per chain 2000
Total number of draws 8000
Acceptance rate target 0.9
Run time 0:04:58.051755
Posterior predictive log-likelihood (sum of log mean p) -4187.76
Expected log-likelihood E[log L(Y|θ)] -4536.31
Best-draw log-likelihood (posterior upper bound) -4297.64
WAIC (Widely Applicable Information Criterion) -5095.76
WAIC Standard Error 50.49
Effective number of parameters (p_WAIC) 907.99
LOO (Leave-One-Out Cross-Validation) -5205.64
LOO Standard Error 52.53
Effective number of parameters (p_LOO) 1017.88
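The WAIC figures reported above can be sketched from a matrix of pointwise log-likelihood draws. The matrix below is synthetic, so the numbers are not the model's; only the structure of the computation (lppd minus a variance penalty) is the point.

```python
import numpy as np

# Synthetic (draws x observations) pointwise log-likelihoods, for illustration.
rng = np.random.default_rng(1)
log_lik = rng.normal(loc=-0.6, scale=0.1, size=(8000, 50))

# lppd: sum over observations of the log of the mean predictive density.
lppd = np.log(np.exp(log_lik).mean(axis=0)).sum()
# p_waic: effective number of parameters, the summed posterior variance
# of the pointwise log-likelihoods.
p_waic = log_lik.var(axis=0, ddof=1).sum()
elpd_waic = lppd - p_waic
print(p_waic > 0 and elpd_waic < lppd)  # True: the penalty always reduces lppd
```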
pandas_results = get_pandas_estimated_parameters(estimation_results=results)
display(pandas_results)
Diagnostics computation took 89.8 seconds (cached).
Name Value (mean) Value (median) ... R hat ESS (bulk) ESS (tail)
0 asc_train -0.384435 -0.384729 ... 1.000585 4352.480733 5123.475119
1 b_time -2.333551 -2.330126 ... 1.001665 1539.728156 2949.566052
2 b_time_s 1.451534 2.801582 ... 1.530867 7.185682 28.846727
3 b_cost -1.281045 -1.279362 ... 0.999880 6857.447568 5415.130446
4 asc_car 0.147908 0.148199 ... 1.000774 3179.022325 4611.038479
[5 rows x 12 columns]
Total running time of the script: (2 minutes 3.929 seconds)