15a. Discrete mixture with panel data
Example of a discrete mixture of logit models, also called a latent class model. The data file is organized as panel data.
Michel Bierlaire, EPFL. Sat Jun 21 2025, 17:16:14
import biogeme.biogeme_logging as blog
from IPython.core.display_functions import display
from biogeme.biogeme import BIOGEME
from biogeme.expressions import (
    Beta,
    Draws,
    ExpressionOrNumeric,
    MonteCarlo,
    PanelLikelihoodTrajectory,
    log,
)
from biogeme.models import logit
from biogeme.results_processing import (
    EstimationResults,
    get_pandas_estimated_parameters,
)
See the data processing script: Panel data preparation for Swissmetro.
from swissmetro_panel import (
    CAR_AV_SP,
    CAR_CO_SCALED,
    CAR_TT_SCALED,
    CHOICE,
    SM_AV,
    SM_COST_SCALED,
    SM_TT_SCALED,
    TRAIN_AV_SP,
    TRAIN_COST_SCALED,
    TRAIN_TT_SCALED,
    database,
)
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Example b15panel_discrete.py')
Example b15panel_discrete.py
Parameters to be estimated. One version for each latent class.
NUMBER_OF_CLASSES = 2
b_cost = [Beta(f'b_cost_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)]
Define a random parameter, normally distributed across individuals, designed to be used for Monte-Carlo simulation.
b_time = [Beta(f'b_time_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)]
It is advised not to use 0 as a starting value for the following parameter: it is the scale of the random coefficient, and when it is 0 the draws have no effect, which may make it difficult for the algorithm to move away from that point.
b_time_s = [
    Beta(f'b_time_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
b_time_rnd: list[ExpressionOrNumeric] = [
    b_time[i] + b_time_s[i] * Draws(f'b_time_rnd_class{i}', 'NORMAL_ANTI')
    for i in range(NUMBER_OF_CLASSES)
]
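In equation form, the random coefficient constructed above is, for each class c,

$$
\beta_{\text{time}}^{c} = \mu_c + \sigma_c \, \xi, \qquad \xi \sim N(0, 1),
$$

where $\mu_c$ is the parameter b_time_class of class c, $\sigma_c$ is b_time_s_class, and the draws $\xi$ are antithetic standard normal draws ('NORMAL_ANTI'). Only the magnitude of $\sigma_c$ matters, which is why it acts as a standard deviation.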
We do the same for the constants, to address serial correlation.
asc_car = [
    Beta(f'asc_car_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_car_s = [
    Beta(f'asc_car_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_car_rnd = [
    asc_car[i] + asc_car_s[i] * Draws(f'asc_car_rnd_class{i}', 'NORMAL_ANTI')
    for i in range(NUMBER_OF_CLASSES)
]
asc_train = [
    Beta(f'asc_train_class{i}', 0, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_train_s = [
    Beta(f'asc_train_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_train_rnd = [
    asc_train[i] + asc_train_s[i] * Draws(f'asc_train_rnd_class{i}', 'NORMAL_ANTI')
    for i in range(NUMBER_OF_CLASSES)
]
asc_sm = [Beta(f'asc_sm_class{i}', 0, None, None, 1) for i in range(NUMBER_OF_CLASSES)]
asc_sm_s = [
    Beta(f'asc_sm_s_class{i}', 1, None, None, 0) for i in range(NUMBER_OF_CLASSES)
]
asc_sm_rnd = [
    asc_sm[i] + asc_sm_s[i] * Draws(f'asc_sm_rnd_class{i}', 'NORMAL_ANTI')
    for i in range(NUMBER_OF_CLASSES)
]
Class membership probability.
score_class_0 = Beta('score_class_0', -1.7, None, None, 0)
probability_class_0 = logit({0: score_class_0, 1: 0}, None, 0)
probability_class_1 = logit({0: score_class_0, 1: 0}, None, 1)
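Since the score of class 1 is normalized to zero, this amounts to a binary logit for the class membership:

$$
\pi_0 = \frac{e^{s_0}}{1 + e^{s_0}}, \qquad \pi_1 = 1 - \pi_0,
$$

where $s_0$ is the parameter score_class_0.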
In class 0, it is assumed that the time coefficient is zero.
b_time_rnd[0] = 0
Utility functions.
v_train_per_class = [
    asc_train_rnd[i] + b_time_rnd[i] * TRAIN_TT_SCALED + b_cost[i] * TRAIN_COST_SCALED
    for i in range(NUMBER_OF_CLASSES)
]
v_swissmetro_per_class = [
    asc_sm_rnd[i] + b_time_rnd[i] * SM_TT_SCALED + b_cost[i] * SM_COST_SCALED
    for i in range(NUMBER_OF_CLASSES)
]
v_car_per_class = [
    asc_car_rnd[i] + b_time_rnd[i] * CAR_TT_SCALED + b_cost[i] * CAR_CO_SCALED
    for i in range(NUMBER_OF_CLASSES)
]
v_per_class = [
    {1: v_train_per_class[i], 2: v_swissmetro_per_class[i], 3: v_car_per_class[i]}
    for i in range(NUMBER_OF_CLASSES)
]
Associate the availability conditions with the alternatives.
av = {1: TRAIN_AV_SP, 2: SM_AV, 3: CAR_AV_SP}
The choice model is a discrete mixture of logit models, with availability conditions. We calculate the conditional choice probability for each class.
choice_probability_per_class = [
    PanelLikelihoodTrajectory(logit(v_per_class[i], av, CHOICE))
    for i in range(NUMBER_OF_CLASSES)
]
Conditional on the random variables, we obtain the likelihood for each individual as the class-probability-weighted sum of the class-specific likelihoods.
choice_probability = (
    probability_class_0 * choice_probability_per_class[0]
    + probability_class_1 * choice_probability_per_class[1]
)
We integrate over the random variables using Monte-Carlo.
log_probability = log(MonteCarlo(choice_probability))
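Up to the Monte-Carlo approximation of the integral, the expression built here is the log of the contribution of individual $n$ to the likelihood:

$$
P_n = \sum_{c=0}^{1} \pi_c \int \prod_{t=1}^{T_n} P\!\left(i_{nt} \mid V^c, \xi\right) \phi(\xi)\, d\xi,
$$

where $T_n$ is the number of observations of individual $n$, $V^c$ are the class-specific utilities, $\xi$ collects the normal draws, and $\phi$ is their density.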
The model is complex, and there are numerical issues when calculating the second derivatives. Therefore, we instruct Biogeme not to evaluate the second derivatives. As a consequence, the statistics reported after estimation are based on the BHHH matrix instead of the Cramér-Rao bound.
the_biogeme = BIOGEME(
    database,
    log_probability,
    number_of_draws=5_000,
    calculating_second_derivatives='never',
    seed=1223,
)
the_biogeme.model_name = 'b15a_panel_discrete'
Biogeme parameters read from biogeme.toml.
Estimate the parameters.
try:
    results = EstimationResults.from_yaml_file(
        filename=f'saved_results/{the_biogeme.model_name}.yaml'
    )
except FileNotFoundError:
    results = the_biogeme.estimate()
Flattening database [(6768, 38)].
Database flattened [(752, 362)]
*** Initial values of the parameters are obtained from the file __b15a_panel_discrete.iter
Cannot read file __b15a_panel_discrete.iter. Statement is ignored.
Starting values for the algorithm: {}
As the model is rather complex, we cancel the calculation of second derivatives. If you want to control the parameters, change the algorithm from "automatic" to "simple_bounds" in the TOML file.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: BFGS with trust region for simple bounds
[Optimization trace: 77 iterations of the BFGS algorithm with trust region for simple bounds. Each row reports the current parameter values, the objective function, the relative gradient, the trust-region radius, and the ratio rho. The objective decreases from about 4.0e+03 to about 3.5e+03.]
Optimization algorithm has converged.
Relative gradient: 5.130375656584178e-06
Cause of termination: Relative gradient = 5.1e-06 <= 6.1e-06
Number of function evaluations: 180
Number of gradient evaluations: 103
Number of hessian evaluations: 0
Algorithm: BFGS with trust region for simple bound constraints
Number of iterations: 77
Proportion of Hessian calculation: 0/51 = 0.0%
Optimization time: 0:02:24.118658
Calculate BHHH
File b15a_panel_discrete.html has been generated.
File b15a_panel_discrete.yaml has been generated.
print(results.short_summary())
Results for model b15a_panel_discrete
Nbr of parameters: 15
Sample size: 752
Observations: 6768
Excluded data: 0
Final log likelihood: -3524.886
Akaike Information Criterion: 7079.772
Bayesian Information Criterion: 7149.113
pandas_results = get_pandas_estimated_parameters(estimation_results=results)
display(pandas_results)
Name Value BHHH std err. BHHH t-stat. BHHH p-value
0 score_class_0 -1.759444 0.226120 -7.781031 7.105427e-15
1 asc_train_class0 -1.049639 0.682672 -1.537545 1.241599e-01
2 asc_train_s_class0 3.111880 0.640508 4.858454 1.183061e-06
3 b_cost_class0 -1.172640 0.535508 -2.189773 2.854071e-02
4 asc_sm_s_class0 0.029721 6.383987 0.004656 9.962854e-01
5 asc_car_class0 -4.864336 1.630491 -2.983356 2.851058e-03
6 asc_car_s_class0 6.075086 1.857124 3.271233 1.070798e-03
7 asc_train_class1 -0.320075 0.284611 -1.124607 2.607558e-01
8 asc_train_s_class1 1.756651 0.457972 3.835721 1.251964e-04
9 b_time_class1 -6.956658 0.367302 -18.939904 0.000000e+00
10 b_time_s_class1 3.281167 0.353407 9.284384 0.000000e+00
11 b_cost_class1 -4.756216 0.253114 -18.790787 0.000000e+00
12 asc_sm_s_class1 1.699272 0.333530 5.094815 3.490818e-07
13 asc_car_class1 1.020660 0.215778 4.730134 2.243718e-06
14 asc_car_s_class1 2.874639 0.260598 11.030949 0.000000e+00
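As a quick check on how to read these results, the estimated score of class 0 can be converted into class shares with the binary logit formula given earlier. A minimal sketch, using only the estimated value reported above (this snippet is not part of the original script):

import math

# Estimated class-membership score, copied from the table above.
score_class_0 = -1.759444

# Binary logit: share of the class whose time coefficient is fixed to zero.
p_class_0 = math.exp(score_class_0) / (1 + math.exp(score_class_0))
print(f'Share of class 0: {p_class_0:.3f}')  # about 0.147
print(f'Share of class 1: {1 - p_class_0:.3f}')  # about 0.853

In other words, roughly 15% of the individuals are assigned to the class with no sensitivity to travel time.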
Total running time of the script: (2 minutes 37.191 seconds)