Note
Go to the end to download the full example code.
Discrete mixture with panel data
Example of a discrete mixture of logit models, also called a latent class model. The data file is organized as panel data.
Michel Bierlaire, EPFL. Sat Jun 21 2025, 17:16:14
from IPython.core.display_functions import display
See the data processing script: Panel data preparation for Swissmetro.
from swissmetro_panel import (
CAR_AV_SP,
CAR_CO_SCALED,
CAR_TT_SCALED,
CHOICE,
SM_AV,
SM_COST_SCALED,
SM_TT_SCALED,
TRAIN_AV_SP,
TRAIN_COST_SCALED,
TRAIN_TT_SCALED,
database,
)
import biogeme.biogeme_logging as blog
from biogeme.biogeme import BIOGEME
from biogeme.expressions import (
Beta,
Draws,
ExpressionOrNumeric,
MonteCarlo,
PanelLikelihoodTrajectory,
log,
)
from biogeme.models import logit
from biogeme.results_processing import get_pandas_estimated_parameters
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b15panel_discrete.py')
Example b15panel_discrete.py
Parameters to be estimated. One version for each latent class.
NUMBER_OF_CLASSES = 2
b_cost = [Beta(f'b_cost_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)]
Define a random parameter, normally distributed across individuals, designed to be used for Monte-Carlo simulation.
b_time = [Beta(f'b_time_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)]
It is advised not to use 0 as the starting value for the following parameter.
b_time_s = [
Beta(f'b_time_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
b_time_rnd: list[ExpressionOrNumeric] = [
b_time[i] + b_time_s[i] * Draws(f'b_time_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
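The 'NORMAL_ANTI' draw type requests antithetic normal draws: each standard normal draw is paired with its negation, which reduces simulation variance. A minimal numpy sketch of the idea (the helper name is ours, not part of Biogeme):

```python
import numpy as np


def antithetic_normal_draws(n_draws: int, rng: np.random.Generator) -> np.ndarray:
    """Generate antithetic standard normal draws: each draw z is paired with -z.

    Hypothetical helper illustrating the idea behind the 'NORMAL_ANTI' draw
    type; n_draws is assumed to be even.
    """
    half = rng.standard_normal(n_draws // 2)
    return np.concatenate([half, -half])


rng = np.random.default_rng(1223)
draws = antithetic_normal_draws(10, rng)
# The antithetic pairing makes the sample mean zero by construction.
print(draws.mean())
```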
We do the same for the constants, to address serial correlation.
asc_car = [
Beta(f'asc_car_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_car_s = [
Beta(f'asc_car_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_car_rnd = [
asc_car[i] + asc_car_s[i] * Draws(f'asc_car_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
asc_train = [
Beta(f'asc_train_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_train_s = [
Beta(f'asc_train_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_train_rnd = [
asc_train[i] + asc_train_s[i] * Draws(f'asc_train_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
asc_sm = [Beta(f'asc_sm_class{i}', 0, None, None, 1) for i in range(NUMBER_OF_CLASSES)]
asc_sm_s = [
Beta(f'asc_sm_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_sm_rnd = [
asc_sm[i] + asc_sm_s[i] * Draws(f'asc_sm_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
Class membership probability.
score_class_0 = Beta('score_class_0', -1.7, None, None, 0)
probability_class_0 = logit({0: score_class_0, 1: 0}, None, 0)
probability_class_1 = logit({0: score_class_0, 1: 0}, None, 1)
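The class membership model is a binary logit over the single parameter score_class_0: with the starting value of -1.7, class 0 receives roughly 15% of the weight. A quick check of the two probabilities in plain Python, outside Biogeme:

```python
import math

score_class_0 = -1.7
# Binary logit over the utilities {0: score_class_0, 1: 0}.
p_class_0 = math.exp(score_class_0) / (math.exp(score_class_0) + 1.0)
p_class_1 = 1.0 - p_class_0
print(round(p_class_0, 3), round(p_class_1, 3))  # 0.154 0.846
```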
In class 0, it is assumed that the time coefficient is zero.
b_time_rnd[0] = 0
Utility functions.
v_train_per_class = [
asc_train_rnd[i] + b_time_rnd[i] * TRAIN_TT_SCALED + b_cost[i] * TRAIN_COST_SCALED
for i in range(NUMBER_OF_CLASSES)
]
v_swissmetro_per_class = [
asc_sm_rnd[i] + b_time_rnd[i] * SM_TT_SCALED + b_cost[i] * SM_COST_SCALED
for i in range(NUMBER_OF_CLASSES)
]
v_car_per_class = [
asc_car_rnd[i] + b_time_rnd[i] * CAR_TT_SCALED + b_cost[i] * CAR_CO_SCALED
for i in range(NUMBER_OF_CLASSES)
]
v_per_class = [
{1: v_train_per_class[i], 2: v_swissmetro_per_class[i], 3: v_car_per_class[i]}
for i in range(NUMBER_OF_CLASSES)
]
Associate the availability conditions with the alternatives.
av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}
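Availability conditions remove unavailable alternatives from the choice set before the logit probabilities are computed. A minimal numpy sketch of this mechanism (illustrative only, not Biogeme's internal implementation; the utility values are arbitrary):

```python
import numpy as np


def logit_probabilities(utilities: np.ndarray, availability: np.ndarray) -> np.ndarray:
    """Multinomial logit probabilities with availability conditions.

    Unavailable alternatives (availability == 0) receive probability zero
    and do not contribute to the denominator.
    """
    exp_v = np.where(availability == 1, np.exp(utilities), 0.0)
    return exp_v / exp_v.sum()


v = np.array([0.5, 1.0, -0.2])    # train, Swissmetro, car (hypothetical)
av = np.array([1, 1, 0])          # car unavailable
p = logit_probabilities(v, av)
print(p)  # car gets probability 0; the others sum to 1
```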
The choice model is a discrete mixture of logit models, with availability conditions. We calculate the conditional probability for each class.
choice_probability_per_class = [
PanelLikelihoodTrajectory(logit(v_per_class[i], av, CHOICE))
for i in range(NUMBER_OF_CLASSES)
]
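PanelLikelihoodTrajectory multiplies, for each individual, the conditional choice probabilities of all their observations: conditional on the draws, the observations of the same person are treated as independent. A sketch of the operation with hypothetical probabilities:

```python
import numpy as np

# Conditional probabilities of the choices made by one individual on their
# nine Swissmetro menus (hypothetical values).
per_observation_probs = np.array([0.6, 0.7, 0.5, 0.8, 0.6, 0.9, 0.4, 0.7, 0.6])

# Conditional on the draws, the likelihood of the individual's trajectory
# is the product over their observations.
trajectory_likelihood = per_observation_probs.prod()
print(trajectory_likelihood)  # 0.01524096
```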
Conditional on the random variables, this is the likelihood for the individual.
choice_probability = (
probability_class_0 * choice_probability_per_class[0]
+ probability_class_1 * choice_probability_per_class[1]
)
We integrate over the random variables using Monte-Carlo.
log_probability = log(MonteCarlo(choice_probability))
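MonteCarlo approximates the integral over the random parameters by averaging the conditional likelihood over the draws; the log is taken outside the average. A toy numpy sketch of the estimator, using an arbitrary positive integrand instead of the actual model:

```python
import numpy as np

rng = np.random.default_rng(1223)
draws = rng.standard_normal(10_000)

# Toy conditional likelihood P(y | draw): an arbitrary positive function
# whose expectation under N(0, 1) is 1 / (2 * sqrt(2)).
conditional = np.exp(-0.5 * draws**2) / 2.0

# Monte-Carlo estimate of E[P(y | draw)], then its log.
monte_carlo_average = conditional.mean()
log_probability = np.log(monte_carlo_average)
print(log_probability)  # close to -1.5 * ln(2), about -1.04
```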
The model is complex, and there are numerical issues when calculating the second derivatives. Therefore, we instruct Biogeme not to evaluate the second derivatives. As a consequence, the statistics reported after estimation are based on the BHHH matrix instead of the Cramér-Rao bound.
the_biogeme = BIOGEME(
database,
log_probability,
number_of_draws=10_000,
calculating_second_derivatives='never',
seed=1223,
)
the_biogeme.model_name = 'b15panel_discrete'
Biogeme parameters read from biogeme.toml.
Flattening database [(6768, 38)].
Database flattened [(752, 362)]
Estimate the parameters.
results = the_biogeme.estimate()
*** Initial values of the parameters are obtained from the file __b15panel_discrete.iter
Parameter values restored from __b15panel_discrete.iter
Starting values for the algorithm: {'score_class_0': -1.7014394874216778, 'asc_train_class0': -1.1129654814935666, 'asc_train_s_class0': 2.9777189654755607, 'b_cost_class0': -1.2453598275614626, 'asc_sm_s_class0': -0.6494023078401076, 'asc_car_class0': -4.803572395743029, 'asc_car_s_class0': 5.856723420636426, 'asc_train_class1': -0.23322955409118845, 'asc_train_s_class1': 1.587097750926492, 'b_time_class1': -7.058183878190979, 'b_time_s_class1': 3.309940925323963, 'b_cost_class1': -4.805565161132811, 'asc_sm_s_class1': 1.9249661636122117, 'asc_car_class1': 1.0800786654541545, 'asc_car_s_class1': 2.736762951939844}
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the algorithm from "automatic" to "simple_bounds" in the TOML file.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
Optimization algorithm has converged.
Relative gradient: 4.254615198941616e-06
Cause of termination: Relative gradient = 4.3e-06 <= 6.1e-06
Number of function evaluations: 1
Number of gradient evaluations: 1
Number of hessian evaluations: 0
Algorithm: BFGS with trust region for simple bound constraints
Number of iterations: 0
Optimization time: 0:00:18.869581
Calculate BHHH
File b15panel_discrete~01.html has been generated.
File b15panel_discrete~01.yaml has been generated.
print(results.short_summary())
Results for model b15panel_discrete
Nbr of parameters: 15
Sample size: 752
Observations: 6768
Excluded data: 0
Final log likelihood: -3526.528
Akaike Information Criterion: 7083.055
Bayesian Information Criterion: 7152.396
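The information criteria can be verified from the final log likelihood, the number of parameters K, and the number of individuals N: AIC = -2LL + 2K and BIC = -2LL + K ln(N). Checking against the reported values:

```python
import math

final_log_likelihood = -3526.528
n_parameters = 15
sample_size = 752  # individuals, after flattening the panel

aic = -2 * final_log_likelihood + 2 * n_parameters
bic = -2 * final_log_likelihood + n_parameters * math.log(sample_size)
print(round(aic, 3), round(bic, 3))  # matches the reported 7083.055 and 7152.396
```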
pandas_results = get_pandas_estimated_parameters(estimation_results=results)
display(pandas_results)
Name Value BHHH std err. BHHH t-stat. BHHH p-value
0 score_class_0 -1.701439 0.232390 -7.321488 2.451372e-13
1 asc_train_class0 -1.112965 0.646722 -1.720932 8.526310e-02
2 asc_train_s_class0 2.977719 1.104505 2.695975 7.018286e-03
3 b_cost_class0 -1.245360 0.559681 -2.225123 2.607297e-02
4 asc_sm_s_class0 -0.649402 4.734620 -0.137160 8.909040e-01
5 asc_car_class0 -4.803572 1.674277 -2.869042 4.117166e-03
6 asc_car_s_class0 5.856723 1.758867 3.329827 8.689988e-04
7 asc_train_class1 -0.233230 0.284668 -0.819303 4.126134e-01
8 asc_train_s_class1 1.587098 0.504000 3.149002 1.638291e-03
9 b_time_class1 -7.058184 0.384275 -18.367525 0.000000e+00
10 b_time_s_class1 3.309941 0.398055 8.315296 0.000000e+00
11 b_cost_class1 -4.805565 0.256205 -18.756692 0.000000e+00
12 asc_sm_s_class1 1.924966 0.318254 6.048523 1.461802e-09
13 asc_car_class1 1.080079 0.218784 4.936733 7.944206e-07
14 asc_car_s_class1 2.736763 0.265978 10.289420 0.000000e+00
Total running time of the script: (1 minutes 4.429 seconds)