Mixture with lognormal distribution
Example of a mixture of logit models, using Monte-Carlo integration. The mixing distribution is lognormal.
- author:
Michel Bierlaire, EPFL
- date:
Mon Apr 10 12:11:53 2023
import biogeme.biogeme_logging as blog
import biogeme.biogeme as bio
from biogeme import models
from biogeme.expressions import (
Beta,
exp,
log,
MonteCarlo,
bioDraws,
)
from biogeme.parameters import Parameters
See the data processing script: Data preparation for Swissmetro.
from swissmetro_data import (
database,
CHOICE,
SM_AV,
CAR_AV_SP,
TRAIN_AV_SP,
TRAIN_TT_SCALED,
TRAIN_COST_SCALED,
SM_TT_SCALED,
SM_COST_SCALED,
CAR_TT_SCALED,
CAR_CO_SCALED,
)
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b17lognormal_mixture.py')
Parameters to be estimated.
ASC_CAR = Beta('ASC_CAR', 0, None, None, 0)
ASC_TRAIN = Beta('ASC_TRAIN', 0, None, None, 0)
ASC_SM = Beta('ASC_SM', 0, None, None, 1)
B_COST = Beta('B_COST', 0, None, None, 0)
Define the location parameter of the underlying normal, used to build the lognormal random coefficient for Monte-Carlo simulation.
B_TIME = Beta('B_TIME', 0, None, None, 0)
It is advised not to use 0 as a starting value for the following scale parameter.
B_TIME_S = Beta('B_TIME_S', 1, -2, 2, 0)
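A quick numpy sketch (with hypothetical values, outside Biogeme) of why 0 is a poor starting value for the scale: with a zero scale, every draw collapses to the same point, so the draws carry no information and the likelihood is locally flat in that direction.

```python
import numpy as np

rng = np.random.default_rng(1223)
z = rng.standard_normal(100)

# Hypothetical location/scale values for the underlying normal.
b_time, b_time_s = 0.5, 1.0

# With a non-zero scale, the draws spread the coefficient out ...
spread = -np.exp(b_time + b_time_s * z)
# ... but with scale 0 every draw collapses to the same value.
collapsed = -np.exp(b_time + 0.0 * z)

print(spread.std() > 0)                       # True: variability across draws
print(np.allclose(collapsed, collapsed[0]))   # True: all draws identical
```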
Define a random parameter, log normally distributed, designed to be used for Monte-Carlo simulation.
B_TIME_RND = -exp(B_TIME + B_TIME_S * bioDraws('b_time_rnd', 'NORMAL'))
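To see what this expression produces, here is a small numpy sketch with hypothetical values for the location and scale: the coefficient is strictly negative, and its magnitude is lognormal with mean exp(mu + sigma^2 / 2).

```python
import numpy as np

rng = np.random.default_rng(1223)
b_time, b_time_s = 0.5, 1.2  # hypothetical location and scale

# Same construction as B_TIME_RND: minus the exponential of a normal draw.
draws = -np.exp(b_time + b_time_s * rng.standard_normal(100_000))

print((draws < 0).all())  # the coefficient is negative for every draw
# The mean of exp(mu + sigma * Z), Z ~ N(0, 1), is exp(mu + sigma^2 / 2).
print(np.isclose(-draws.mean(), np.exp(b_time + b_time_s**2 / 2), rtol=0.05))
```

Taking minus the exponential guarantees a negative time coefficient for every individual, which a normally distributed coefficient cannot.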
Definition of the utility functions.
V1 = ASC_TRAIN + B_TIME_RND * TRAIN_TT_SCALED + B_COST * TRAIN_COST_SCALED
V2 = ASC_SM + B_TIME_RND * SM_TT_SCALED + B_COST * SM_COST_SCALED
V3 = ASC_CAR + B_TIME_RND * CAR_TT_SCALED + B_COST * CAR_CO_SCALED
Associate utility functions with the numbering of alternatives.
V = {1: V1, 2: V2, 3: V3}
Associate the availability conditions with the alternatives.
av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}
Conditional on b_time_rnd, we have a logit model (called the kernel).
prob = models.logit(V, av, CHOICE)
We integrate over b_time_rnd using Monte-Carlo.
logprob = log(MonteCarlo(prob))
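The pattern log(MonteCarlo(prob)) can be mimicked by hand. A minimal numpy sketch for a single observation, with made-up attribute values and rounded point estimates (availability conditions omitted for brevity): average the logit kernel over draws of the random coefficient, then take the log.

```python
import numpy as np

rng = np.random.default_rng(1223)

# Hypothetical data for one observation (not from the dataset).
asc = np.array([-0.37, 0.0, 0.14])           # train, SM, car intercepts
tt = np.array([1.12, 0.63, 1.17])            # scaled travel times
cost = np.array([0.48, 0.52, 0.65])          # scaled costs
b_cost, b_time, b_time_s = -1.4, 0.54, 1.2   # rounded point estimates
chosen = 1                                    # SM

# One lognormal coefficient per draw.
n_draws = 100
b_time_rnd = -np.exp(b_time + b_time_s * rng.standard_normal(n_draws))

# Kernel: logit probability of the chosen alternative, per draw.
v = asc + np.outer(b_time_rnd, tt) + b_cost * cost   # shape (n_draws, 3)
p_chosen = np.exp(v[:, chosen]) / np.exp(v).sum(axis=1)

# Monte-Carlo approximation of the mixture probability, then the log.
logprob = np.log(p_chosen.mean())
print(logprob)
```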
As the objective is to illustrate the syntax, we calculate the Monte-Carlo approximation with a small number of draws.
the_biogeme = bio.BIOGEME(database, logprob, number_of_draws=100, seed=1223)
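Why a small number of draws is only suitable for illustration: the noise of a Monte-Carlo estimator shrinks as 1/sqrt(R), so 100 draws leave substantial simulation error. A standalone numpy sketch with a toy integrand:

```python
import numpy as np

rng = np.random.default_rng(1223)

def mc_mean_exp(n_draws):
    """Monte-Carlo estimate of E[exp(-Z^2)] for Z ~ N(0, 1)."""
    z = rng.standard_normal(n_draws)
    return np.exp(-z**2).mean()

# Replicate the estimator to measure how its noise shrinks with draws.
reps = 200
err_100 = np.std([mc_mean_exp(100) for _ in range(reps)])
err_10000 = np.std([mc_mean_exp(10_000) for _ in range(reps)])
print(err_100 / err_10000)   # roughly sqrt(10000 / 100) = 10
```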
the_biogeme.modelName = '17lognormal_mixture'
Biogeme parameters read from biogeme.toml.
Estimate the parameters.
results = the_biogeme.estimate()
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the optimization parameters, change the name of the algorithm in the TOML file from "automatic" to "simple_bounds".
*** Initial values of the parameters are obtained from the file __17lognormal_mixture.iter
Cannot read file __17lognormal_mixture.iter. Statement is ignored.
The number of draws (100) is low. The results may not be meaningful.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
Iter. ASC_CAR ASC_TRAIN B_COST B_TIME B_TIME_S Function Relgrad Radius Rho
0 0 0 0 0 1 5.7e+03 0.096 0.5 -0.0079 -
1 0.5 -0.5 -0.5 0.5 1.5 5.4e+03 0.042 0.5 0.38 +
2 0 -0.36 -1 0.67 1.5 5.3e+03 0.038 0.5 0.37 +
3 0 -0.36 -1 0.67 1.5 5.3e+03 0.038 0.25 -0.71 -
4 0.25 -0.38 -1.2 0.42 1.2 5.3e+03 0.018 0.25 0.45 +
5 0.091 -0.46 -1.5 0.62 1.2 5.2e+03 0.015 0.25 0.16 +
6 0.091 -0.46 -1.5 0.62 1.2 5.2e+03 0.015 0.12 -1.4 -
7 0.095 -0.33 -1.5 0.5 1.2 5.2e+03 0.011 0.12 0.23 +
8 0.22 -0.42 -1.4 0.53 1.2 5.2e+03 0.012 0.12 0.17 +
9 0.095 -0.36 -1.3 0.57 1.2 5.2e+03 0.01 0.12 0.12 +
10 0.095 -0.36 -1.3 0.57 1.2 5.2e+03 0.01 0.062 -2.2 -
11 0.15 -0.34 -1.4 0.51 1.2 5.2e+03 0.0071 0.062 0.44 +
12 0.15 -0.34 -1.4 0.51 1.2 5.2e+03 0.0071 0.031 0.083 -
13 0.13 -0.38 -1.3 0.54 1.2 5.2e+03 0.0021 0.031 0.77 +
14 0.13 -0.38 -1.3 0.54 1.2 5.2e+03 0.0021 0.016 -0.47 -
15 0.15 -0.38 -1.4 0.53 1.2 5.2e+03 0.0022 0.016 0.32 +
16 0.14 -0.38 -1.4 0.54 1.2 5.2e+03 0.00078 0.016 0.66 +
17 0.14 -0.38 -1.4 0.54 1.2 5.2e+03 0.00078 0.0078 -0.29 -
18 0.14 -0.37 -1.4 0.54 1.2 5.2e+03 0.0004 0.0078 0.47 +
19 0.14 -0.37 -1.4 0.54 1.2 5.2e+03 0.0004 0.0039 0.054 -
20 0.14 -0.38 -1.4 0.54 1.2 5.2e+03 0.00019 0.0039 0.69 +
21 0.14 -0.37 -1.4 0.54 1.2 5.2e+03 0.00023 0.0039 0.14 +
22 0.14 -0.37 -1.4 0.54 1.2 5.2e+03 0.00023 0.002 -1.2 -
23 0.15 -0.37 -1.4 0.54 1.2 5.2e+03 0.00018 0.002 0.24 +
24 0.15 -0.37 -1.4 0.54 1.2 5.2e+03 9.4e-05 0.002 0.63 +
Results saved in file 17lognormal_mixture.html
Results saved in file 17lognormal_mixture.pickle
print(results.short_summary())
Results for model 17lognormal_mixture
Nbr of parameters: 5
Sample size: 6768
Excluded data: 3960
Final log likelihood: -5239.843
Akaike Information Criterion: 10489.69
Bayesian Information Criterion: 10523.78
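The information criteria can be checked from the reported values using the standard formulas AIC = -2LL + 2K and BIC = -2LL + K ln N:

```python
import math

# Values reported in the summary above.
loglike = -5239.843
n_params = 5
sample_size = 6768

aic = -2 * loglike + 2 * n_params
bic = -2 * loglike + n_params * math.log(sample_size)
print(round(aic, 2), round(bic, 2))  # matches the summary up to rounding
```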
pandas_results = results.get_estimated_parameters()
pandas_results
Total running time of the script: (0 minutes 11.626 seconds)