15b. Discrete mixture with panel data
Example of a discrete mixture of logit models, also called a latent class model. The data file is organized as panel data. Compared to example 15a (Discrete mixture with panel data), we integrate over the random parameters before applying the discrete mixture, to show that the two formulations are equivalent.
Michel Bierlaire, EPFL Sat Jun 21 2025, 17:22:38
import biogeme.biogeme_logging as blog
from IPython.core.display_functions import display
from biogeme.biogeme import BIOGEME
from biogeme.expressions import (
Beta,
Draws,
ExpressionOrNumeric,
MonteCarlo,
PanelLikelihoodTrajectory,
log,
)
from biogeme.models import logit
from biogeme.results_processing import (
EstimationResults,
get_pandas_estimated_parameters,
)
See the data processing script: Panel data preparation for Swissmetro.
from swissmetro_panel import (
CAR_AV_SP,
CAR_CO_SCALED,
CAR_TT_SCALED,
CHOICE,
SM_AV,
SM_COST_SCALED,
SM_TT_SCALED,
TRAIN_AV_SP,
TRAIN_COST_SCALED,
TRAIN_TT_SCALED,
database,
)
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b15panel_discrete_bis.py')
Example b15panel_discrete_bis.py
Parameters to be estimated. One version for each latent class.
NUMBER_OF_CLASSES = 2
b_cost = [Beta(f'b_cost_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)]
Define a random parameter, normally distributed across individuals, designed to be used for Monte-Carlo simulation.
b_time = [Beta(f'b_time_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)]
It is advised not to use 0 as the starting value for the following parameter.
b_time_s = [
Beta(f'b_time_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
b_time_rnd: list[ExpressionOrNumeric] = [
b_time[i] + b_time_s[i] * Draws(f'b_time_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
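The 'NORMAL_ANTI' draws are antithetic: each standard normal draw z is paired with -z, which reduces the variance of the Monte-Carlo estimator. A minimal pure-Python sketch (function names are hypothetical, not part of Biogeme) shows that for an expression that is linear in the draw, antithetic pairs cancel the simulation noise exactly:

```python
import random

def antithetic_normal_draws(n_pairs, seed=42):
    """Generate 2 * n_pairs standard normal draws as antithetic pairs (z, -z)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        draws.extend([z, -z])
    return draws

def simulated_mean(mean, spread, draws):
    """Monte-Carlo estimate of E[mean + spread * Z] over the given draws."""
    values = [mean + spread * z for z in draws]
    return sum(values) / len(values)

# For a linear expression, the pair (z, -z) averages to the mean exactly,
# so the estimate is noise-free regardless of the number of draws.
estimate = simulated_mean(-1.0, 0.5, antithetic_normal_draws(100))
```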
We do the same for the constants, to address serial correlation.
asc_car = [
Beta(f'asc_car_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_car_s = [
Beta(f'asc_car_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_car_rnd = [
asc_car[i] + asc_car_s[i] * Draws(f'asc_car_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
asc_train = [
Beta(f'asc_train_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_train_s = [
Beta(f'asc_train_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
ASC_TRAIN_RND = [
asc_train[i] + asc_train_s[i] * Draws(f'asc_train_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
asc_sm = [Beta(f'asc_sm_class{i}', 0, None, None, 1) for i in range(NUMBER_OF_CLASSES)]
asc_sm_s = [
Beta(f'asc_sm_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_sm_rnd = [
asc_sm[i] + asc_sm_s[i] * Draws(f'asc_sm_rnd_class{i}', 'NORMAL_ANTI')
for i in range(NUMBER_OF_CLASSES)
]
Class membership probability.
score_class_0 = Beta('score_class_0', 0, None, None, 0)
probability_class_0 = logit({0: score_class_0, 1: 0}, None, 0)
probability_class_1 = logit({0: score_class_0, 1: 0}, None, 1)
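The class membership model is a binary logit with class 1 as the reference (utility 0). A small pure-Python sketch (the function name is hypothetical) of the two probabilities:

```python
import math

def class_membership_probabilities(score_class_0):
    """Binary logit over the two classes: class 0 has utility score_class_0,
    class 1 is the reference with utility 0."""
    exp_score = math.exp(score_class_0)
    p0 = exp_score / (exp_score + 1.0)
    return p0, 1.0 - p0

# With the estimated score of about -1.76 (see the results below), class 0
# receives roughly 15% of the population weight.
p0, p1 = class_membership_probabilities(-1.76)
```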
In class 0, it is assumed that the time coefficient is zero.
b_time_rnd[0] = 0
Utility functions.
v_train_per_class = [
ASC_TRAIN_RND[i] + b_time_rnd[i] * TRAIN_TT_SCALED + b_cost[i] * TRAIN_COST_SCALED
for i in range(NUMBER_OF_CLASSES)
]
v_swissmetro_per_class = [
asc_sm_rnd[i] + b_time_rnd[i] * SM_TT_SCALED + b_cost[i] * SM_COST_SCALED
for i in range(NUMBER_OF_CLASSES)
]
v_car_per_class = [
asc_car_rnd[i] + b_time_rnd[i] * CAR_TT_SCALED + b_cost[i] * CAR_CO_SCALED
for i in range(NUMBER_OF_CLASSES)
]
v_per_class = [
{1: v_train_per_class[i], 2: v_swissmetro_per_class[i], 3: v_car_per_class[i]}
for i in range(NUMBER_OF_CLASSES)
]
Associate the availability conditions with the alternatives.
av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}
The choice model is a discrete mixture of logit models, with availability conditions. For each class, we first integrate over the random parameters to obtain the probability of the observed trajectory of choices.
choice_probability_per_class = [
MonteCarlo(PanelLikelihoodTrajectory(logit(v_per_class[i], av, CHOICE)))
for i in range(NUMBER_OF_CLASSES)
]
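The expression MonteCarlo(PanelLikelihoodTrajectory(logit(...))) averages, over the draws, the product of the per-observation logit probabilities of the individual's observed choices. A pure-Python sketch under simplified assumptions (a single random time coefficient and hypothetical data; none of these names belong to Biogeme):

```python
import math
import random

def logit_probability(utilities, chosen):
    """Multinomial logit probability of the chosen alternative."""
    denominator = sum(math.exp(v) for v in utilities.values())
    return math.exp(utilities[chosen]) / denominator

def integrated_trajectory_probability(observations, b_time_mean, b_time_spread,
                                      n_draws=1000, seed=1223):
    """Average over draws of the product of the per-observation choice
    probabilities, mimicking MonteCarlo(PanelLikelihoodTrajectory(logit(...)))
    for a single individual."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        # One realization of the random time coefficient for this individual.
        b_time = b_time_mean + b_time_spread * rng.gauss(0.0, 1.0)
        trajectory = 1.0
        for travel_times, chosen in observations:
            utilities = {alt: b_time * t for alt, t in travel_times.items()}
            trajectory *= logit_probability(utilities, chosen)
        total += trajectory
    return total / n_draws

# Two hypothetical observations for one individual: scaled travel times per
# alternative, and the chosen alternative.
obs = [({1: 1.2, 2: 0.8, 3: 1.0}, 2), ({1: 1.1, 2: 0.9, 3: 1.0}, 2)]
p = integrated_trajectory_probability(obs, b_time_mean=-2.0, b_time_spread=1.0)
```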
The likelihood for the individual is the mixture of the per-class probabilities, weighted by the class membership probabilities.
choice_probability = (
probability_class_0 * choice_probability_per_class[0]
+ probability_class_1 * choice_probability_per_class[1]
)
As the integration over the random parameters has already been performed within each class, we only need to take the logarithm of the resulting probability.
log_probability = log(choice_probability)
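The individual's contribution to the log likelihood is thus the logarithm of a convex combination of the per-class integrated probabilities. A sketch with hypothetical numbers:

```python
import math

def mixture_log_likelihood(p_class_0, prob_class_0, prob_class_1):
    """Log of the discrete mixture of the two per-class integrated
    probabilities, weighted by the class membership probabilities."""
    mixed = p_class_0 * prob_class_0 + (1.0 - p_class_0) * prob_class_1
    return mixed, math.log(mixed)

# Hypothetical values: 15% weight for class 0, and per-class probabilities
# of the observed trajectory of 0.40 and 0.25.
mixed, loglik = mixture_log_likelihood(0.15, 0.40, 0.25)
```

Because the mixture is a convex combination, the mixed probability always lies between the two per-class probabilities.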
The model is complex, and there are numerical issues when calculating the second derivatives. Therefore, we instruct Biogeme not to evaluate the second derivatives. As a consequence, the statistics reported after estimation are based on the BHHH matrix instead of the Cramér-Rao bound.
the_biogeme = BIOGEME(
database,
log_probability,
number_of_draws=5_000,
seed=1223,
calculating_second_derivatives='never',
)
the_biogeme.model_name = 'b15b_panel_discrete'
Biogeme parameters read from biogeme.toml.
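The BHHH matrix mentioned above approximates the information matrix by summing, over individuals, the outer products of the per-individual score vectors (gradients of the log likelihood). A minimal sketch with hypothetical gradients:

```python
def bhhh_matrix(score_vectors):
    """BHHH approximation of the information matrix: the sum over
    individuals of the outer products of their score vectors."""
    k = len(score_vectors[0])
    matrix = [[0.0] * k for _ in range(k)]
    for gradient in score_vectors:
        for i in range(k):
            for j in range(k):
                matrix[i][j] += gradient[i] * gradient[j]
    return matrix

# Hypothetical per-individual gradients of the log likelihood.
scores = [[0.5, -1.0], [-0.2, 0.3], [0.1, 0.4]]
bhhh = bhhh_matrix(scores)
```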
Estimate the parameters. To save time, previously saved results are read from file if they exist; otherwise, the model is estimated.
try:
results = EstimationResults.from_yaml_file(
filename=f'saved_results/{the_biogeme.model_name}.yaml'
)
except FileNotFoundError:
results = the_biogeme.estimate()
Flattening database [(6768, 38)].
Database flattened [(752, 362)]
*** Initial values of the parameters are obtained from the file __b15b_panel_discrete.iter
Cannot read file __b15b_panel_discrete.iter. Statement is ignored.
Starting values for the algorithm: {}
As the model is rather complex, the calculation of second derivatives has been canceled. If you want to control the parameters of the optimization algorithm, change it from "automatic" to "simple_bounds" in the TOML file.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
Iter. score_class_0 asc_train_class asc_train_s_cla b_cost_class0 asc_sm_s_class0 asc_car_class0 asc_car_s_class asc_train_class asc_train_s_cla b_time_class1 b_time_s_class1 b_cost_class1 asc_sm_s_class1 asc_car_class1 asc_car_s_class Function Relgrad Radius Rho
0 -1 -1 2 -1 2 1 2 -1 2 -1 2 -1 2 1 2 4.1e+03 0.033 1 0.45 +
1 -2 -1.6 2 -2 3 0 2.6 -2 1 -2 3 -2 1.2 0 1 3.8e+03 0.024 1 0.58 +
2 -1.1 -0.57 2.7 -1.7 3.3 -0.98 3.6 -1 2 -3 2.2 -3 2.2 0.75 2 3.7e+03 0.054 1 0.42 +
3 -2.1 -0.68 2.7 -1.6 3.3 -1.2 3.6 -1.9 1.5 -4 2.6 -3.4 1.2 0.41 1.4 3.6e+03 0.016 1 0.64 +
4 -1.6 -0.54 2.7 -1.5 3.3 -1.3 3.7 -0.92 2.3 -4.6 2.4 -3.5 1.9 0.0054 2.4 3.6e+03 0.031 1 0.37 +
5 -2.2 -0.54 2.6 -1.4 3.3 -1.4 3.8 -1.2 2.1 -5.6 2.1 -4 1.4 0.59 2.5 3.5e+03 0.0072 1 0.67 +
6 -2.2 -0.54 2.6 -1.4 3.3 -1.4 3.8 -1.2 2.1 -5.6 2.1 -4 1.4 0.59 2.5 3.5e+03 0.0072 0.5 0.003 -
7 -1.9 -0.46 2.6 -1.3 3.2 -1.4 3.9 -0.71 2.4 -5.6 2.6 -4.2 1.3 0.61 2.5 3.5e+03 0.018 0.5 0.49 +
8 -1.9 -0.47 2.5 -1.2 3.2 -1.5 4 -0.77 2.1 -6.1 2.8 -4.3 1.2 0.63 2.6 3.5e+03 0.0033 0.5 0.88 +
9 -1.9 -0.47 2.5 -1.2 3.2 -1.5 4 -0.77 2.1 -6.1 2.8 -4.3 1.2 0.63 2.6 3.5e+03 0.0033 0.25 -0.04 -
10 -1.8 -0.45 2.5 -1.1 3.1 -1.6 4.1 -0.55 2.1 -6.1 2.9 -4.5 1.3 0.88 2.9 3.5e+03 0.014 0.25 0.29 +
11 -1.9 -0.47 2.4 -1.1 3.1 -1.7 4.1 -0.59 2 -6.3 2.9 -4.5 1.3 0.76 2.8 3.5e+03 0.0024 2.5 0.94 ++
12 -1.9 -0.47 2.4 -1.1 3.1 -1.7 4.1 -0.59 2 -6.3 2.9 -4.5 1.3 0.76 2.8 3.5e+03 0.0024 0.98 -10 -
13 -1.9 -0.47 2.4 -1.1 3.1 -1.7 4.1 -0.59 2 -6.3 2.9 -4.5 1.3 0.76 2.8 3.5e+03 0.0024 0.49 -4.1 -
14 -1.9 -0.47 2.4 -1.1 3.1 -1.7 4.1 -0.59 2 -6.3 2.9 -4.5 1.3 0.76 2.8 3.5e+03 0.0024 0.24 -0.8 -
15 -1.9 -0.47 2.4 -1.1 3.1 -1.7 4.1 -0.59 2 -6.3 2.9 -4.5 1.3 0.76 2.8 3.5e+03 0.0024 0.12 0.057 -
16 -1.8 -0.45 2.4 -1.1 3.1 -1.8 4.1 -0.47 2 -6.4 3 -4.5 1.3 0.74 2.9 3.5e+03 0.0086 0.12 0.3 +
17 -1.9 -0.46 2.4 -1.1 3 -1.8 4.2 -0.49 1.9 -6.5 3 -4.6 1.3 0.77 2.9 3.5e+03 0.002 1.2 0.98 ++
18 -1.9 -0.46 2.4 -1.1 3 -1.8 4.2 -0.49 1.9 -6.5 3 -4.6 1.3 0.77 2.9 3.5e+03 0.002 0.61 -4.9 -
19 -1.9 -0.46 2.4 -1.1 3 -1.8 4.2 -0.49 1.9 -6.5 3 -4.6 1.3 0.77 2.9 3.5e+03 0.002 0.3 -0.36 -
20 -1.9 -0.5 2.3 -1.1 2.9 -1.9 4.3 -0.45 1.8 -6.6 3.3 -4.6 1.3 0.88 3 3.5e+03 0.0024 0.3 0.5 +
21 -1.9 -0.5 2.3 -1.1 2.9 -1.9 4.3 -0.45 1.8 -6.6 3.3 -4.6 1.3 0.88 3 3.5e+03 0.0024 0.15 -0.89 -
22 -1.9 -0.5 2.3 -1.1 2.9 -1.9 4.3 -0.45 1.8 -6.6 3.3 -4.6 1.3 0.88 3 3.5e+03 0.0024 0.076 -0.062 -
23 -1.8 -0.57 2.2 -1 2.8 -2 4.4 -0.37 1.9 -6.7 3.2 -4.7 1.4 0.8 3 3.5e+03 0.004 0.076 0.54 +
24 -1.9 -0.58 2.2 -1 2.7 -2.1 4.4 -0.37 1.9 -6.8 3.2 -4.7 1.4 0.84 3 3.5e+03 0.0014 0.76 1.3 ++
25 -1.9 -0.58 2.2 -1 2.7 -2.1 4.4 -0.37 1.9 -6.8 3.2 -4.7 1.4 0.84 3 3.5e+03 0.0014 0.38 -2 -
26 -1.9 -0.58 2.2 -1 2.7 -2.1 4.4 -0.37 1.9 -6.8 3.2 -4.7 1.4 0.84 3 3.5e+03 0.0014 0.19 -0.63 -
27 -1.9 -0.58 2.2 -1 2.7 -2.1 4.4 -0.37 1.9 -6.8 3.2 -4.7 1.4 0.84 3 3.5e+03 0.0014 0.095 -0.28 -
28 -1.9 -0.58 2.2 -1 2.7 -2.1 4.4 -0.37 1.9 -6.8 3.2 -4.7 1.4 0.84 3 3.5e+03 0.0014 0.048 -0.4 -
29 -1.9 -0.58 2.2 -1 2.7 -2.1 4.4 -0.37 1.9 -6.8 3.2 -4.7 1.4 0.84 3 3.5e+03 0.0014 0.024 -0.16 -
30 -1.8 -0.61 2.2 -1.1 2.7 -2.1 4.4 -0.4 1.8 -6.8 3.2 -4.7 1.5 0.86 3 3.5e+03 0.0013 0.024 0.39 +
31 -1.8 -0.6 2.2 -1.1 2.7 -2.1 4.4 -0.37 1.9 -6.8 3.2 -4.7 1.4 0.88 3 3.5e+03 0.0025 0.024 0.8 +
32 -1.8 -0.6 2.2 -1.1 2.7 -2.2 4.5 -0.38 1.8 -6.8 3.2 -4.7 1.4 0.88 3 3.5e+03 0.0012 0.024 0.82 +
33 -1.8 -0.6 2.1 -1.1 2.6 -2.2 4.5 -0.37 1.8 -6.8 3.2 -4.7 1.5 0.88 3 3.5e+03 0.0019 0.24 0.98 ++
34 -1.8 -0.63 2.1 -1.1 2.5 -2.4 4.6 -0.39 1.8 -6.9 3.3 -4.7 1.5 0.93 2.9 3.5e+03 0.0012 0.24 0.65 +
35 -1.8 -0.63 2.1 -1.1 2.5 -2.4 4.6 -0.39 1.8 -6.9 3.3 -4.7 1.5 0.93 2.9 3.5e+03 0.0012 0.12 -0.4 -
36 -1.8 -0.64 2.1 -1.1 2.4 -2.5 4.7 -0.33 1.8 -7 3.2 -4.8 1.5 1 3 3.5e+03 0.0017 0.12 0.22 +
37 -1.7 -0.68 2.1 -1.1 2.4 -2.7 4.7 -0.35 1.8 -7 3.2 -4.7 1.6 0.9 3 3.5e+03 0.0019 0.12 0.64 +
38 -1.8 -0.72 2.1 -1.1 2.3 -2.8 4.7 -0.39 1.8 -7 3.3 -4.8 1.6 0.94 3 3.5e+03 0.0009 0.12 0.45 +
39 -1.8 -0.74 2.1 -1.1 2.3 -2.9 4.8 -0.33 1.8 -7 3.3 -4.8 1.6 0.99 2.9 3.5e+03 0.0011 0.12 0.83 +
40 -1.8 -0.76 2.1 -1.1 2.2 -3 4.8 -0.34 1.8 -7 3.3 -4.8 1.7 0.99 3 3.5e+03 0.0011 0.12 0.48 +
41 -1.8 -0.76 2.1 -1.1 2.2 -3 4.8 -0.34 1.8 -7 3.3 -4.8 1.7 0.99 3 3.5e+03 0.0011 0.06 0.01 -
42 -1.8 -0.77 2.1 -1.2 2.2 -3.1 4.8 -0.34 1.8 -7 3.3 -4.8 1.6 0.98 2.9 3.5e+03 0.0014 0.06 0.43 +
43 -1.8 -0.78 2.1 -1.2 2.2 -3.1 4.9 -0.32 1.8 -7 3.3 -4.8 1.6 0.98 2.9 3.5e+03 0.00041 0.6 1.1 ++
44 -1.7 -0.95 2.4 -1.2 1.8 -3.7 5 -0.4 1.8 -7 3.4 -4.6 1.7 1 2.9 3.5e+03 0.004 0.6 0.13 +
45 -1.7 -0.95 2.4 -1.2 1.8 -3.7 5 -0.4 1.8 -7 3.4 -4.6 1.7 1 2.9 3.5e+03 0.004 0.3 0.028 -
46 -1.8 -1 2.5 -1.3 1.6 -4 5.1 -0.32 1.7 -6.9 3.3 -4.8 1.7 0.91 2.9 3.5e+03 0.0012 0.3 0.5 +
47 -1.8 -1 2.5 -1.3 1.6 -4 5.1 -0.32 1.7 -6.9 3.3 -4.8 1.7 0.91 2.9 3.5e+03 0.0012 0.15 -0.4 -
48 -1.7 -0.99 2.6 -1.2 1.6 -4 5.1 -0.4 1.8 -7 3.2 -4.9 1.6 1.1 2.9 3.5e+03 0.0031 0.15 0.12 +
49 -1.8 -0.98 2.6 -1.2 1.6 -4.1 5.2 -0.32 1.8 -7 3.3 -4.7 1.7 1.1 2.9 3.5e+03 0.00077 0.15 0.72 +
50 -1.8 -0.98 2.6 -1.2 1.6 -4.1 5.2 -0.32 1.8 -7 3.3 -4.7 1.7 1.1 2.9 3.5e+03 0.00077 0.074 -2.3 -
51 -1.7 -1 2.6 -1.2 1.5 -4.1 5.2 -0.36 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00063 0.074 0.3 +
52 -1.7 -1 2.7 -1.2 1.5 -4.1 5.2 -0.3 1.8 -7 3.3 -4.8 1.7 1.1 2.9 3.5e+03 0.00095 0.074 0.51 +
53 -1.7 -1 2.7 -1.2 1.5 -4.1 5.2 -0.3 1.8 -7 3.3 -4.8 1.7 1.1 2.9 3.5e+03 0.00095 0.037 -1.5 -
54 -1.7 -1 2.7 -1.2 1.5 -4.1 5.2 -0.3 1.8 -7 3.3 -4.8 1.7 1.1 2.9 3.5e+03 0.00095 0.019 0.075 -
55 -1.7 -1 2.7 -1.2 1.4 -4.2 5.3 -0.31 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00055 0.019 0.6 +
56 -1.7 -1 2.7 -1.2 1.4 -4.2 5.3 -0.31 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.0002 0.019 0.86 +
57 -1.7 -1 2.8 -1.2 1.4 -4.2 5.3 -0.32 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00015 0.019 0.61 +
58 -1.7 -1 2.8 -1.2 1.4 -4.2 5.3 -0.31 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00027 0.019 0.88 +
59 -1.7 -1 2.8 -1.2 1.4 -4.2 5.3 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00019 0.19 0.94 ++
60 -1.8 -1 2.9 -1.2 1.3 -4.4 5.5 -0.32 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.0006 0.19 0.81 +
61 -1.8 -1 2.9 -1.2 1.3 -4.4 5.5 -0.32 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.0006 0.093 0.061 -
62 -1.7 -1.1 2.9 -1.2 1.2 -4.5 5.5 -0.3 1.7 -7 3.3 -4.7 1.6 1 2.9 3.5e+03 0.00044 0.093 0.24 +
63 -1.7 -0.98 2.9 -1.2 1.2 -4.6 5.6 -0.32 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00068 0.093 0.43 +
64 -1.8 -1 2.9 -1.1 1.1 -4.6 5.7 -0.33 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00018 0.093 0.6 +
65 -1.8 -1 2.9 -1.1 1.1 -4.6 5.7 -0.33 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00018 0.047 -0.013 -
66 -1.7 -0.99 2.9 -1.2 1.1 -4.6 5.7 -0.34 1.8 -7 3.3 -4.7 1.7 1 2.9 3.5e+03 0.00028 0.047 0.36 +
67 -1.8 -0.99 2.9 -1.2 1.1 -4.7 5.8 -0.32 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 6.1e-05 0.047 0.68 +
68 -1.8 -0.99 2.9 -1.2 1.1 -4.7 5.8 -0.32 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 6.1e-05 0.023 0.031 -
69 -1.8 -0.99 2.9 -1.2 1.1 -4.7 5.8 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 4.6e-05 0.023 0.78 +
70 -1.8 -1 2.9 -1.2 1.1 -4.7 5.8 -0.33 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00012 0.023 0.73 +
71 -1.8 -1 2.9 -1.2 1 -4.7 5.8 -0.33 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00013 0.23 0.98 ++
72 -1.8 -1 3 -1.2 0.87 -4.8 6 -0.33 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00014 2.3 1.6 ++
73 -1.8 -0.97 3.1 -1.2 0.47 -5 6.4 -0.34 1.8 -6.9 3.3 -4.7 1.7 1 2.9 3.5e+03 0.00026 2.3 0.51 +
74 -1.8 -0.98 3.1 -1.2 0.4 -5.1 6.3 -0.33 1.8 -6.9 3.3 -4.7 1.7 1 2.9 3.5e+03 0.00017 23 1.2 ++
75 -1.8 -1 3.1 -1.2 0.21 -5.1 6.2 -0.31 1.7 -6.9 3.3 -4.7 1.7 1 2.9 3.5e+03 0.00011 23 0.55 +
76 -1.8 -1 3.1 -1.2 0.21 -5.1 6.2 -0.31 1.7 -6.9 3.3 -4.7 1.7 1 2.9 3.5e+03 0.00011 0.19 -1 -
77 -1.8 -1 3.1 -1.2 0.023 -5.1 6.2 -0.33 1.8 -6.9 3.3 -4.7 1.7 1 2.9 3.5e+03 0.00021 0.19 0.8 +
78 -1.8 -1 3.1 -1.2 0.01 -4.9 6.1 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00015 0.19 0.23 +
79 -1.8 -1.1 3.1 -1.2 0.044 -4.9 6.1 -0.33 1.8 -6.9 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00011 0.19 0.86 +
80 -1.8 -1.1 3.1 -1.2 0.044 -4.9 6.1 -0.33 1.8 -6.9 3.3 -4.8 1.7 1 2.9 3.5e+03 0.00011 0.025 -0.14 -
81 -1.8 -1.1 3.1 -1.2 0.02 -4.9 6.1 -0.31 1.7 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 6.8e-05 0.025 0.17 +
82 -1.8 -1 3.1 -1.2 0.038 -4.9 6.1 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 1.6e-05 0.25 0.93 ++
83 -1.8 -1 3.1 -1.2 0.038 -4.9 6.1 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 1.6e-05 0.0077 -0.069 -
84 -1.8 -1 3.1 -1.2 0.035 -4.9 6.1 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 1.6e-05 0.0077 0.4 +
85 -1.8 -1 3.1 -1.2 0.038 -4.9 6.1 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 1.1e-05 0.0077 0.67 +
86 -1.8 -1 3.1 -1.2 0.038 -4.9 6.1 -0.32 1.8 -7 3.3 -4.8 1.7 1 2.9 3.5e+03 3.7e-06 0.0077 0.97 +
Optimization algorithm has converged.
Relative gradient: 3.7146548244803204e-06
Cause of termination: Relative gradient = 3.7e-06 <= 6.1e-06
Number of function evaluations: 206
Number of gradient evaluations: 119
Number of hessian evaluations: 0
Algorithm: BFGS with trust region for simple bound constraints
Number of iterations: 87
Proportion of Hessian calculation: 0/59 = 0.0%
Optimization time: 0:02:46.438991
Calculate BHHH
File b15b_panel_discrete.html has been generated.
File b15b_panel_discrete.yaml has been generated.
print(results.short_summary())
Results for model b15b_panel_discrete
Nbr of parameters: 15
Sample size: 752
Observations: 6768
Excluded data: 0
Final log likelihood: -3524.886
Akaike Information Criterion: 7079.772
Bayesian Information Criterion: 7149.113
pandas_results = get_pandas_estimated_parameters(estimation_results=results)
display(pandas_results)
Name Value BHHH std err. BHHH t-stat. BHHH p-value
0 score_class_0 -1.759797 0.226251 -7.778059 7.327472e-15
1 asc_train_class0 -1.047791 0.682861 -1.534413 1.249281e-01
2 asc_train_s_class0 3.111723 0.644742 4.826307 1.390883e-06
3 b_cost_class0 -1.172936 0.536434 -2.186545 2.877578e-02
4 asc_sm_s_class0 0.036134 6.380518 0.005663 9.954815e-01
5 asc_car_class0 -4.871666 1.632494 -2.984186 2.843334e-03
6 asc_car_s_class0 6.081961 1.860904 3.268283 1.082021e-03
7 asc_train_class1 -0.320005 0.284519 -1.124724 2.607062e-01
8 asc_train_s_class1 1.756384 0.457892 3.835804 1.251542e-04
9 b_time_class1 -6.956054 0.367233 -18.941823 0.000000e+00
10 b_time_s_class1 3.280766 0.353985 9.268099 0.000000e+00
11 b_cost_class1 -4.755738 0.253059 -18.792970 0.000000e+00
12 asc_sm_s_class1 1.698679 0.333539 5.092890 3.526467e-07
13 asc_car_class1 1.020440 0.215737 4.730027 2.244901e-06
14 asc_car_s_class1 2.874721 0.260571 11.032396 0.000000e+00
Total running time of the script: (2 minutes 59.219 seconds)