Mixture of logit with panel data
Example of a mixture of logit models, using Monte-Carlo integration. The data file is organized as panel data, but a flat version is generated: each row corresponds to one individual and contains all the observations associated with that individual (a short pandas sketch below illustrates this layout).
Author: Michel Bierlaire, EPFL
Date: Sun Apr 9 18:14:16 2023
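To make the flat layout concrete, here is a purely illustrative pandas sketch. The column names ID, QUESTION, CHOICE and TRAIN_TT_SCALED are hypothetical; the actual flattening for this example is performed in the swissmetro_panel data processing script referenced below. The sketch pivots a long panel table, with one row per observation, into a wide table with one row per individual, prefixing each variable with the question number.
import pandas as pd

# Long format: one row per (individual, question) pair, with made-up values.
long_df = pd.DataFrame(
    {
        'ID': [1, 1, 2, 2],
        'QUESTION': [1, 2, 1, 2],
        'CHOICE': [1, 3, 2, 2],
        'TRAIN_TT_SCALED': [1.12, 1.03, 0.95, 0.98],
    }
)

# Flat format: one row per individual, with the question number as a prefix.
flat_df = long_df.pivot(index='ID', columns='QUESTION')
flat_df.columns = [f'{question}_{name}' for name, question in flat_df.columns]
print(flat_df)
# Columns: 1_CHOICE, 2_CHOICE, 1_TRAIN_TT_SCALED, 2_TRAIN_TT_SCALED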
import numpy as np
import biogeme.biogeme_logging as blog
import biogeme.biogeme as bio
from biogeme import models
from biogeme.expressions import (
Beta,
Variable,
bioDraws,
MonteCarlo,
log,
exp,
bioMultSum,
)
from biogeme.parameters import Parameters
See the data processing script: Panel data preparation for Swissmetro.
from swissmetro_panel import (
flat_database,
SM_AV,
CAR_AV_SP,
TRAIN_AV_SP,
)
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b12panel_flat.py')
Example b12panel_flat.py
We set the seed so that the results are reproducible. This is not necessary in general.
np.random.seed(seed=90267)
The Pandas data structure is available as flat_database.data. Use all the Pandas functions to investigate the database:
print(flat_database.data.describe())
Parameters to be estimated.
B_COST = Beta('B_COST', 0, None, None, 0)
Define a random parameter, normally distributed across individuals, designed to be used for Monte-Carlo simulation.
B_TIME = Beta('B_TIME', 0, None, None, 0)
It is advised not to use 0 as a starting value for the following parameter.
B_TIME_S = Beta('B_TIME_S', 1, None, None, 0)
B_TIME_RND = B_TIME + B_TIME_S * bioDraws('b_time_rnd', 'NORMAL_ANTI')
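The draw type 'NORMAL_ANTI' requests antithetic normal draws. The following is not Biogeme's internal code, just a minimal numpy sketch of the antithetic idea, with illustrative parameter values: each standard normal draw z is paired with -z, which reduces the variance of the Monte-Carlo average.
rng = np.random.default_rng(90267)
number_of_draws = 100
half = rng.standard_normal(number_of_draws // 2)
antithetic_draws = np.concatenate([half, -half])  # pairs (z, -z)

# A random time coefficient, in the spirit of B_TIME_RND above.
b_time, b_time_s = -1.0, 0.5  # illustrative values only
b_time_rnd = b_time + b_time_s * antithetic_draws
# Equals b_time up to floating-point error, since the draws sum to zero.
print(b_time_rnd.mean())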
We do the same for the constants, to address serial correlation.
ASC_CAR = Beta('ASC_CAR', 0, None, None, 0)
ASC_CAR_S = Beta('ASC_CAR_S', 1, None, None, 0)
ASC_CAR_RND = ASC_CAR + ASC_CAR_S * bioDraws('ASC_CAR_RND', 'NORMAL_ANTI')
ASC_TRAIN = Beta('ASC_TRAIN', 0, None, None, 0)
ASC_TRAIN_S = Beta('ASC_TRAIN_S', 1, None, None, 0)
ASC_TRAIN_RND = ASC_TRAIN + ASC_TRAIN_S * bioDraws('ASC_TRAIN_RND', 'NORMAL_ANTI')
ASC_SM = Beta('ASC_SM', 0, None, None, 1)
ASC_SM_S = Beta('ASC_SM_S', 1, None, None, 0)
ASC_SM_RND = ASC_SM + ASC_SM_S * bioDraws('ASC_SM_RND', 'NORMAL_ANTI')
In a flattened database, the names of the variables include the time or, here, the number of the question as a prefix.
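For instance, the scaled train travel time appears nine times, once per question. A quick way to check this, using the pandas DataFrame flat_database.data as above:
time_columns = [
    name for name in flat_database.data.columns if name.endswith('TRAIN_TT_SCALED')
]
print(time_columns)  # expected: ['1_TRAIN_TT_SCALED', ..., '9_TRAIN_TT_SCALED']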
Definition of the utility functions
V1 = [
ASC_TRAIN_RND
+ B_TIME_RND * Variable(f'{t}_TRAIN_TT_SCALED')
+ B_COST * Variable(f'{t}_TRAIN_COST_SCALED')
for t in range(1, 10)
]
V2 = [
ASC_SM_RND
+ B_TIME_RND * Variable(f'{t}_SM_TT_SCALED')
+ B_COST * Variable(f'{t}_SM_COST_SCALED')
for t in range(1, 10)
]
V3 = [
ASC_CAR_RND
+ B_TIME_RND * Variable(f'{t}_CAR_TT_SCALED')
+ B_COST * Variable(f'{t}_CAR_CO_SCALED')
for t in range(1, 10)
]
Associate utility functions with the numbering of alternatives.
V = [{1: V1[t], 2: V2[t], 3: V3[t]} for t in range(9)]
Associate the availability conditions with the alternatives.
av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}
Conditional on the random parameters, the likelihood of one observation is given by the logit model (called the kernel). The likelihood of all the observations for one individual (the trajectory) is the product of the likelihoods of the observations. In the code below, this product is computed as the exponential of the sum of the log likelihoods.
obsprob = [models.loglogit(V[t], av, Variable(f'{t+1}_CHOICE')) for t in range(9)]
condprobIndiv = exp(bioMultSum(obsprob))
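A tiny numpy check of the identity exp(sum(log p_t)) = prod(p_t) used just above, with arbitrary probabilities:
probabilities = np.array([0.7, 0.2, 0.6])
print(np.prod(probabilities))                 # 0.084
print(np.exp(np.sum(np.log(probabilities))))  # same value, 0.084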
We integrate over the random parameters using Monte-Carlo.
logprob = log(MonteCarlo(condprobIndiv))
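MonteCarlo replaces the integral over the random parameters by an average over the draws. The following self-contained numpy sketch, with made-up numbers unrelated to the Swissmetro data and a binary logit kernel for brevity, mimics the computation for one individual: simulate draws of a random coefficient, compute the probability of the observed trajectory for each draw, average over the draws, and take the log.
rng = np.random.default_rng(1223)
n_draws = 100                              # number of draws
draws = rng.standard_normal(n_draws)       # one value of the random coefficient per draw
beta_rnd = -1.0 + 0.5 * draws              # random coefficient, made-up mean and spread

# Three hypothetical binary choices for one individual, with made-up
# attribute differences x and observed choices y (1 = 'yes', 0 = 'no').
x = np.array([0.4, -0.2, 1.0])
y = np.array([1, 0, 1])

# Probability of 'yes' at each observation, for each draw (shape: n_draws x 3).
p_yes = 1.0 / (1.0 + np.exp(-beta_rnd[:, None] * x[None, :]))
p_obs = np.where(y[None, :] == 1, p_yes, 1.0 - p_yes)

# Trajectory probability per draw, then Monte-Carlo average and log.
trajectory = p_obs.prod(axis=1)
print(np.log(trajectory.mean()))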
As the objective is to illustrate the syntax, we calculate the Monte-Carlo approximation with a small number of draws.
the_biogeme = bio.BIOGEME(flat_database, logprob, number_of_draws=100, seed=1223)
the_biogeme.modelName = 'b12panel_flat'
Biogeme parameters read from biogeme.toml.
Estimate the parameters.
results = the_biogeme.estimate()
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds"
*** Initial values of the parameters are obtained from the file __b12panel_flat.iter
Cannot read file __b12panel_flat.iter. Statement is ignored.
The number of draws (100) is low. The results may not be meaningful.
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds"
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
Iter. ASC_CAR ASC_CAR_S ASC_SM_S ASC_TRAIN ASC_TRAIN_S B_COST B_TIME B_TIME_S Function Relgrad Radius Rho
0 1 2 2 -1 2 -1 -1 2 4e+03 0.041 1 0.41 +
1 0 2.1 1.9 -1.6 1.7 -2 -2 2.9 3.8e+03 0.029 1 0.85 +
2 -1 3.1 0.85 -0.64 2.7 -3 -3 2.1 3.7e+03 0.046 1 0.35 +
3 -0.3 3.4 0.53 -1.6 2.5 -2.4 -4 2.5 3.7e+03 0.04 1 0.54 +
4 -0.3 3.4 0.53 -1.6 2.5 -2.4 -4 2.5 3.7e+03 0.04 0.5 -0.93 -
5 -0.3 3.4 0.53 -1.6 2.5 -2.4 -4 2.5 3.7e+03 0.04 0.25 -0.075 -
6 -0.34 3.1 0.78 -1.4 2.8 -2.6 -4.2 2.2 3.7e+03 0.032 0.25 0.37 +
7 -0.34 3.1 0.78 -1.4 2.8 -2.6 -4.2 2.2 3.7e+03 0.032 0.12 -0.00093 -
8 -0.22 3 0.65 -1.3 2.9 -2.8 -4.1 2.4 3.7e+03 0.013 0.12 0.72 +
9 -0.094 3.1 0.53 -1.1 2.8 -2.6 -4.2 2.5 3.7e+03 0.03 0.12 0.68 +
10 -0.091 3.1 0.52 -1.1 2.8 -2.6 -4.4 2.5 3.6e+03 0.024 0.12 0.4 +
11 -0.079 3.1 0.52 -1 2.8 -2.7 -4.4 2.6 3.6e+03 0.019 1.2 0.99 ++
12 0.13 3.3 0.56 -0.39 2.8 -2.8 -5.7 3.2 3.6e+03 0.043 1.2 0.34 +
13 0.13 3.3 0.56 -0.39 2.8 -2.8 -5.7 3.2 3.6e+03 0.043 0.62 -0.55 -
14 0.46 3.9 0.75 -0.29 2.5 -2.9 -5.5 3.2 3.6e+03 0.017 0.62 0.23 +
15 0.46 3.9 0.75 -0.29 2.5 -2.9 -5.5 3.2 3.6e+03 0.017 0.31 -0.75 -
16 0.23 3.6 0.83 -0.17 2.4 -3 -5.6 3.3 3.6e+03 0.0079 3.1 1.1 ++
17 0.23 3.6 0.83 -0.17 2.4 -3 -5.6 3.3 3.6e+03 0.0079 1.6 -12 -
18 0.23 3.6 0.83 -0.17 2.4 -3 -5.6 3.3 3.6e+03 0.0079 0.78 -12 -
19 0.23 3.6 0.83 -0.17 2.4 -3 -5.6 3.3 3.6e+03 0.0079 0.39 -2.7 -
20 0.23 3.6 0.83 -0.17 2.4 -3 -5.6 3.3 3.6e+03 0.0079 0.2 -0.17 -
21 0.3 3.6 0.79 -0.21 2.2 -3.1 -5.8 3.3 3.6e+03 0.025 0.2 0.31 +
22 0.3 3.6 0.79 -0.21 2.2 -3.1 -5.8 3.3 3.6e+03 0.025 0.098 -0.4 -
23 0.3 3.5 0.81 -0.17 2.2 -3.1 -5.8 3.4 3.6e+03 0.02 0.098 0.29 +
24 0.31 3.6 0.86 -0.074 2.1 -3 -5.8 3.4 3.6e+03 0.004 0.098 0.68 +
25 0.31 3.6 0.86 -0.074 2.1 -3 -5.8 3.4 3.6e+03 0.004 0.049 -2.8 -
26 0.31 3.6 0.86 -0.074 2.1 -3 -5.8 3.4 3.6e+03 0.004 0.024 -1 -
27 0.33 3.6 0.85 -0.05 2.2 -3.1 -5.9 3.4 3.6e+03 0.0095 0.024 0.22 +
28 0.34 3.6 0.85 -0.045 2.1 -3.1 -5.9 3.4 3.6e+03 0.005 0.024 0.8 +
29 0.34 3.6 0.86 -0.047 2.1 -3.1 -5.9 3.4 3.6e+03 0.002 0.24 0.91 ++
30 0.34 3.6 0.86 -0.047 2.1 -3.1 -5.9 3.4 3.6e+03 0.002 0.12 0.093 -
31 0.38 3.6 0.91 -0.024 2.1 -3.1 -6 3.5 3.6e+03 0.0058 0.12 0.55 +
32 0.38 3.6 0.91 -0.024 2.1 -3.1 -6 3.5 3.6e+03 0.0058 0.061 -1.3 -
33 0.38 3.6 0.91 -0.024 2.1 -3.1 -6 3.5 3.6e+03 0.0058 0.031 -0.051 -
34 0.38 3.6 0.9 0.007 2.1 -3.1 -6 3.5 3.6e+03 0.0016 0.031 0.56 +
35 0.39 3.7 0.89 -0.0041 2.1 -3.1 -6.1 3.5 3.6e+03 0.00031 0.031 0.79 +
36 0.39 3.7 0.89 -0.0041 2.1 -3.1 -6.1 3.5 3.6e+03 0.00031 0.015 -0.04 -
37 0.38 3.7 0.9 0.011 2.1 -3.1 -6.1 3.5 3.6e+03 0.0011 0.015 0.46 +
38 0.38 3.7 0.9 0.0057 2.1 -3.1 -6.1 3.5 3.6e+03 0.00073 0.015 0.64 +
39 0.38 3.7 0.89 0.011 2.2 -3.1 -6.1 3.5 3.6e+03 0.00025 0.015 0.27 +
40 0.38 3.7 0.89 0.011 2.2 -3.1 -6.1 3.5 3.6e+03 0.00025 0.0076 -0.5 -
41 0.38 3.7 0.9 0.012 2.1 -3.1 -6.1 3.5 3.6e+03 0.00017 0.0076 0.85 +
42 0.38 3.7 0.9 0.012 2.1 -3.1 -6.1 3.5 3.6e+03 0.00017 0.0038 -0.96 -
43 0.38 3.7 0.9 0.014 2.1 -3.1 -6.1 3.5 3.6e+03 0.00017 0.0038 0.36 +
44 0.38 3.7 0.9 0.014 2.1 -3.1 -6.1 3.5 3.6e+03 0.00011 0.0038 0.26 +
Results saved in file b12panel_flat.html
Results saved in file b12panel_flat.pickle
print(results.short_summary())
Results for model b12panel_flat
Nbr of parameters: 8
Sample size: 752
Excluded data: 0
Final log likelihood: -3621.455
Akaike Information Criterion: 7258.911
Bayesian Information Criterion: 7295.893
pandas_results = results.get_estimated_parameters()
pandas_results
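The estimated parameters are returned as a pandas DataFrame. As a usage sketch, assuming (as in recent Biogeme versions) that the DataFrame is indexed by the parameter names and contains a 'Value' column, one can extract the point estimates, for instance to compute the ratio of the time and cost coefficients; check pandas_results.columns if the layout differs in your version.
print(pandas_results.columns)
b_time_hat = pandas_results.loc['B_TIME', 'Value']
b_cost_hat = pandas_results.loc['B_COST', 'Value']
print(f'Ratio of time and cost coefficients: {b_time_hat / b_cost_hat:.3f}')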
Total running time of the script: (0 minutes 24.892 seconds)