Mixture of logit models
Example of a uniform mixture of logit models, using Monte-Carlo integration.
Michel Bierlaire, EPFL. Fri Jun 20 2025, 10:43:05
import biogeme.biogeme_logging as blog
from IPython.core.display_functions import display
from biogeme.biogeme import BIOGEME
from biogeme.expressions import Beta, Draws, MonteCarlo, log
from biogeme.models import logit
from biogeme.results_processing import get_pandas_estimated_parameters
See the data processing script: Data preparation for Swissmetro.
from swissmetro_data import (
CAR_AV_SP,
CAR_CO_SCALED,
CAR_TT_SCALED,
CHOICE,
SM_AV,
SM_COST_SCALED,
SM_TT_SCALED,
TRAIN_AV_SP,
TRAIN_COST_SCALED,
TRAIN_TT_SCALED,
database,
)
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b06unif_mixture.py')
Example b06unif_mixture.py
Parameters to be estimated.
asc_car = Beta('asc_car', 0, None, None, 0)
asc_train = Beta('asc_train', 0, None, None, 0)
asc_sm = Beta('asc_sm', 0, None, None, 1)
b_cost = Beta('b_cost', 0, None, None, 0)
Define a random parameter, uniformly distributed, designed to be used for Monte-Carlo simulation.
b_time = Beta('b_time', 0, None, None, 0)
b_time_s = Beta('b_time_s', 1, None, None, 0)
b_time_rnd = b_time + b_time_s * Draws('b_time_rnd', 'UNIFORMSYM')
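'UNIFORMSYM' produces draws that are uniform on [-1, 1], so b_time_rnd is uniformly distributed on [b_time - b_time_s, b_time + b_time_s]. A quick NumPy sanity check of this construction (the coefficient values are illustrative, not Biogeme internals):

```python
import numpy as np

rng = np.random.default_rng(0)
b_time, b_time_s = -2.3, 2.9
draws = rng.uniform(-1.0, 1.0, size=100_000)  # stand-in for UNIFORMSYM draws
b_time_rnd = b_time + b_time_s * draws

# The random coefficient spans [b_time - b_time_s, b_time + b_time_s]
print(b_time - b_time_s, b_time_rnd.min())  # lower bound, approx. -5.2
print(b_time + b_time_s, b_time_rnd.max())  # upper bound, approx. 0.6
```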
Definition of the utility functions.
v_train = asc_train + b_time_rnd * TRAIN_TT_SCALED + b_cost * TRAIN_COST_SCALED
v_swissmetro = asc_sm + b_time_rnd * SM_TT_SCALED + b_cost * SM_COST_SCALED
v_car = asc_car + b_time_rnd * CAR_TT_SCALED + b_cost * CAR_CO_SCALED
Associate utility functions with the numbering of alternatives.
v = {1: v_train, 2: v_swissmetro, 3: v_car}
Associate the availability conditions with the alternatives.
av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}
Conditional on b_time_rnd, the model is a logit model, called the kernel.
conditional_probability = logit(v, av, CHOICE)
# We integrate over b_time_rnd using Monte-Carlo
log_probability = log(MonteCarlo(conditional_probability))
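The Monte-Carlo step can be illustrated in plain NumPy (a hypothetical three-alternative example for a single observation, not Biogeme's internal implementation): for each draw of the random coefficient, evaluate the logit kernel, then average the kernel probabilities over the draws before taking the log.

```python
import numpy as np

rng = np.random.default_rng(1223)

# Hypothetical fixed parts of the utilities and scaled travel times
v_fixed = np.array([-0.4, 0.0, 0.1])
travel_time = np.array([1.1, 0.6, 1.2])
b_time, b_time_s = -2.3, 2.9  # illustrative mean and spread

n_draws = 10_000
u = rng.uniform(-1.0, 1.0, size=n_draws)  # UNIFORMSYM-style draws on [-1, 1]
b_time_rnd = b_time + b_time_s * u        # one coefficient per draw

# Logit kernel for each draw (rows: draws, columns: alternatives)
v = v_fixed[None, :] + b_time_rnd[:, None] * travel_time[None, :]
expv = np.exp(v - v.max(axis=1, keepdims=True))  # numerically stable softmax
kernel = expv / expv.sum(axis=1, keepdims=True)

chosen = 0  # say the first alternative is chosen
prob = kernel[:, chosen].mean()  # Monte-Carlo estimate of the mixture probability
log_prob = np.log(prob)          # contribution to the log likelihood
```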
Create the Biogeme object.
the_biogeme = BIOGEME(database, log_probability, number_of_draws=10000, seed=1223)
the_biogeme.model_name = 'b06unif_mixture'
Biogeme parameters read from biogeme.toml.
Estimate the parameters.
results = the_biogeme.estimate()
*** Initial values of the parameters are obtained from the file __b06unif_mixture.iter
Parameter values restored from __b06unif_mixture.iter
Starting values for the algorithm: {'asc_train': -0.38621982814142186, 'b_time': -2.3162946542469043, 'b_time_s': 2.8685519705398463, 'b_cost': -1.277277310692449, 'asc_car': 0.14391416536251292}
As the model is rather complex, we skip the calculation of second derivatives. If you want to control the algorithm's parameters yourself, change the algorithm from "automatic" to "simple_bounds" in the TOML file.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
Optimization algorithm has converged.
Relative gradient: 3.1141664438696853e-06
Cause of termination: Relative gradient = 3.1e-06 <= 6.1e-06
Number of function evaluations: 1
Number of gradient evaluations: 1
Number of hessian evaluations: 0
Algorithm: BFGS with trust region for simple bound constraints
Number of iterations: 0
Optimization time: 0:00:04.747721
Calculate second derivatives and BHHH
File b06unif_mixture~00.html has been generated.
File b06unif_mixture~00.yaml has been generated.
print(results.short_summary())
Results for model b06unif_mixture
Nbr of parameters: 5
Sample size: 6768
Excluded data: 3960
Final log likelihood: -5215.805
Akaike Information Criterion: 10441.61
Bayesian Information Criterion: 10475.71
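The information criteria follow directly from the final log likelihood, the number of parameters, and the sample size: AIC = 2k - 2LL and BIC = k ln(n) - 2LL. A quick check against the values reported above:

```python
import math

ll, k, n = -5215.805, 5, 6768  # final log likelihood, parameters, sample size
aic = 2 * k - 2 * ll           # Akaike Information Criterion
bic = k * math.log(n) - 2 * ll # Bayesian Information Criterion
print(round(aic, 2), round(bic, 2))  # 10441.61 10475.71
```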
pandas_results = get_pandas_estimated_parameters(estimation_results=results)
display(pandas_results)
Name Value Robust std err. Robust t-stat. Robust p-value
0 asc_train -0.386220 0.066029 -5.849265 4.937510e-09
1 b_time -2.316295 0.125946 -18.391184 0.000000e+00
2 b_time_s 2.868552 0.199638 14.368781 0.000000e+00
3 b_cost -1.277277 0.086562 -14.755635 0.000000e+00
4 asc_car 0.143914 0.053299 2.700124 6.931367e-03
Total running time of the script: (2 minutes 12.071 seconds)