biogeme.biogeme¶
Examples of use of several functions.
This is designed for programmers who need examples of use of the functions of the module. The examples are designed to illustrate the syntax. They do not correspond to any meaningful model.
Michel Bierlaire Sun Jun 29 2025, 01:14:48
import pandas as pd
from IPython.core.display_functions import display
import biogeme.biogeme_logging as blog
from biogeme.biogeme import BIOGEME
from biogeme.calculator import evaluate_formula
from biogeme.database import Database
from biogeme.expressions import Beta, Variable, exp
from biogeme.function_output import FunctionOutput
from biogeme.results_processing import get_pandas_estimated_parameters
from biogeme.second_derivatives import SecondDerivativesMode
from biogeme.tools import CheckDerivativesResults
from biogeme.tools.files import files_of_type
from biogeme.validation import ValidationResult
from biogeme.version import get_text
Version of Biogeme.
print(get_text())
biogeme 3.3.1 [2025-09-03]
Home page: http://biogeme.epfl.ch
Submit questions to https://groups.google.com/d/forum/biogeme
Michel Bierlaire, Transport and Mobility Laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL)
Logger.
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Logger initialized')
Logger initialized
Definition of a database
df = pd.DataFrame(
{
'Person': [1, 1, 1, 2, 2],
'Exclude': [0, 0, 1, 0, 1],
'Variable1': [1, 2, 3, 4, 5],
'Variable2': [10, 20, 30, 40, 50],
'Choice': [1, 2, 3, 1, 2],
'Av1': [0, 1, 1, 1, 1],
'Av2': [1, 1, 1, 1, 1],
'Av3': [0, 1, 1, 1, 1],
}
)
my_data = Database('test', df)
Data
display(my_data.dataframe)
Person Exclude Variable1 Variable2 Choice Av1 Av2 Av3
0 1.0 0.0 1.0 10.0 1.0 0.0 1.0 0.0
1 1.0 0.0 2.0 20.0 2.0 1.0 1.0 1.0
2 1.0 1.0 3.0 30.0 3.0 1.0 1.0 1.0
3 2.0 0.0 4.0 40.0 1.0 1.0 1.0 1.0
4 2.0 1.0 5.0 50.0 2.0 1.0 1.0 1.0
Definition of various expressions.
Variable1 = Variable('Variable1')
Variable2 = Variable('Variable2')
beta1 = Beta('beta1', -1.0, -3, 3, 0)
beta2 = Beta('beta2', 2.0, -3, 10, 0)
likelihood = -(beta1**2) * Variable1 - exp(beta2 * beta1) * Variable2 - beta2**4
simul = beta1 / Variable1 + beta2 / Variable2
dict_of_expressions = {'log_like': likelihood, 'beta1': beta1, 'simul': simul}
Creation of the BIOGEME object.
my_biogeme = BIOGEME(my_data, dict_of_expressions)
my_biogeme.model_name = 'simple_example'
print(my_biogeme)
Biogeme parameters read from biogeme.toml.
simple_example: database [test]{'log_like': (((UnaryMinus(PowerConstant(<Beta name=beta1 value=-1.0 status=0>, 2.0)) * <Variable name=Variable1>) - (exp((<Beta name=beta2 value=2.0 status=0> * <Beta name=beta1 value=-1.0 status=0>)) * <Variable name=Variable2>)) - PowerConstant(<Beta name=beta2 value=2.0 status=0>, 4.0)), 'beta1': <Beta name=beta1 value=-1.0 status=0>, 'simul': ((<Beta name=beta1 value=-1.0 status=0> / <Variable name=Variable1>) + (<Beta name=beta2 value=2.0 status=0> / <Variable name=Variable2>))}
The data is stored in the Biogeme object.
display(my_biogeme.database.dataframe)
Person Exclude Variable1 Variable2 Choice Av1 Av2 Av3
0 1.0 0.0 1.0 10.0 1.0 0.0 1.0 0.0
1 1.0 0.0 2.0 20.0 2.0 1.0 1.0 1.0
2 1.0 1.0 3.0 30.0 3.0 1.0 1.0 1.0
3 2.0 0.0 4.0 40.0 1.0 1.0 1.0 1.0
4 2.0 1.0 5.0 50.0 2.0 1.0 1.0 1.0
Log likelihood with the initial values of the parameters.
my_biogeme.calculate_init_likelihood()
-115.30029248549191
Calculate the log-likelihood with different values of the parameters. We retrieve the current values and add 1 to each of them.
x = my_biogeme.expressions_registry.free_betas_init_values
x_plus = {key: value + 1.0 for key, value in x.items()}
print(x_plus)
{'beta1': 0.0, 'beta2': 3.0}
log_likelihood_x_plus = evaluate_formula(
model_elements=my_biogeme.model_elements,
the_betas=x_plus,
second_derivatives_mode=SecondDerivativesMode.NEVER,
numerically_safe=False,
)
print(log_likelihood_x_plus)
-555.0
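As a sanity check (not part of the original example), this value can be reproduced with plain pandas arithmetic, since the contribution of each row is -beta1**2 * Variable1 - exp(beta1 * beta2) * Variable2 - beta2**4.
import numpy as np
# Hand evaluation of the log-likelihood at x_plus (beta1=0, beta2=3), summed over the rows of df.
b1, b2 = x_plus['beta1'], x_plus['beta2']
log_like_by_hand = (
    -(b1**2) * df['Variable1'] - np.exp(b2 * b1) * df['Variable2'] - b2**4
).sum()
print(log_like_by_hand)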
Calculate the log-likelihood function and its derivatives.
the_function_output: FunctionOutput = my_biogeme.function_evaluator.evaluate(
the_betas=x_plus,
gradient=True,
hessian=True,
bhhh=True,
)
print(f'f = {the_function_output.function}')
f = -555.0
print(f'g = {the_function_output.gradient}')
g = [-450. -540.]
pd.DataFrame(the_function_output.hessian)
pd.DataFrame(the_function_output.bhhh)
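The gradient can be cross-checked against hand-derived partial derivatives of the expression defined above. The following sketch (not part of the original example) uses plain numpy and should reproduce g = [-450. -540.].
import numpy as np
# Partial derivatives of -(beta1**2)*V1 - exp(beta2*beta1)*V2 - beta2**4, summed over the sample.
b1, b2 = x_plus['beta1'], x_plus['beta2']
v1, v2 = df['Variable1'], df['Variable2']
grad_beta1 = (-2 * b1 * v1 - b2 * np.exp(b2 * b1) * v2).sum()
grad_beta2 = (-b1 * np.exp(b2 * b1) * v2 - 4 * b2**3).sum()
print(grad_beta1, grad_beta2)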
Check the derivatives' implementation numerically. The analytical derivatives are compared to the numerical derivatives obtained by finite differences.
check_results: CheckDerivativesResults = my_biogeme.check_derivatives(verbose=True)
Comparing first derivatives
x Gradient FinDiff Difference
beta1 -10.6006 -10.6006 -9.40832e-07
beta2 -139.7 -139.7 3.80828e-06
Comparing second derivatives
Row Col Hessian FinDiff Difference
beta1 beta1 -111.201 -111.201 -9.27991e-07
beta2 beta1 20.3003 20.3003 1.42409e-06
beta1 beta2 20.3003 20.3003 -2.4484e-07
beta2 beta2 -260.3 -260.3 3.34428e-06
print(f'f = {check_results.function}')
f = -115.30029248549191
print(f'g = {check_results.analytical_gradient}')
g = [ -10.60058497 -139.69970751]
display(pd.DataFrame(check_results.analytical_hessian))
0 1
0 -111.201170 20.300292
1 20.300292 -260.300292
display(pd.DataFrame(check_results.finite_differences_gradient))
0
0 -10.600584
1 -139.699711
display(pd.DataFrame(check_results.finite_differences_hessian))
0 1
0 -111.201169 20.300293
1 20.300291 -260.300296
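The same kind of numerical check can be reproduced by hand with the evaluate_formula call used earlier. This is a minimal sketch (the step size and the forward-difference loop are illustrative, not Biogeme's own implementation); evaluated at x_plus, it should approximate the gradient g = [-450. -540.] computed above.
# Forward finite differences of the log-likelihood at x_plus.
step = 1.0e-6
for name, value in x_plus.items():
    shifted_betas = dict(x_plus, **{name: value + step})
    shifted_log_likelihood = evaluate_formula(
        model_elements=my_biogeme.model_elements,
        the_betas=shifted_betas,
        second_derivatives_mode=SecondDerivativesMode.NEVER,
        numerically_safe=False,
    )
    print(name, (shifted_log_likelihood - log_likelihood_x_plus) / step)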
Estimation¶
Estimation of the parameters, with bootstrapping.
results = my_biogeme.estimate(run_bootstrap=True)
*** Initial values of the parameters are obtained from the file __simple_example.iter
Parameter values restored from __simple_example.iter
Starting values for the algorithm: {'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
As the model is not too complex, we activate the calculation of second derivatives. To change this behavior, modify the algorithm to "simple_bounds" in the TOML file.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Optimization algorithm has converged.
Relative gradient: 2.1439291582969576e-09
Cause of termination: Relative gradient = 2.1e-09 <= 6.1e-06
Number of function evaluations: 1
Number of gradient evaluations: 1
Number of hessian evaluations: 0
Algorithm: Newton with trust region for simple bound constraints
Number of iterations: 0
Optimization time: 0:00:00.001324
Calculate second derivatives and BHHH
Re-estimate the model 100 times for bootstrapping
Bootstraps: 100%|██████████| 100/100 [00:06<00:00, 14.43it/s]
File simple_example~01.html has been generated.
File simple_example~01.yaml has been generated.
estimated_parameters = get_pandas_estimated_parameters(estimation_results=results)
display(estimated_parameters)
Name Value Bootstrap std err. Bootstrap t-stat. Bootstrap p-value
0 beta1 -1.273264 0.012234 -104.079084 0.0
1 beta2 1.248769 0.052923 23.595900 0.0
If the model has already been estimated, it is possible to recycle the estimation results. In that case, the other arguments are ignored, and the results are simply read from the file.
recycled_results = my_biogeme.estimate(recycle=True, run_bootstrap=True)
Several files .yaml are available for this model: ['simple_example.yaml', 'simple_example~00.yaml', 'simple_example~01.yaml']. The file simple_example~01.yaml is used to load the results.
Estimation results read from simple_example~01.yaml. There is no guarantee that they correspond to the specified model.
print(recycled_results.short_summary())
Results for model simple_example
Nbr of parameters: 2
Sample size: 5
Excluded data: 0
Final log likelihood: -67.06549
Akaike Information Criterion: 138.131
Bayesian Information Criterion: 137.3499
recycled_parameters = get_pandas_estimated_parameters(
estimation_results=recycled_results
)
display(recycled_parameters)
Name Value Bootstrap std err. Bootstrap t-stat. Bootstrap p-value
0 beta1 -1.273264 0.012234 -104.079084 0.0
1 beta2 1.248769 0.052923 23.595900 0.0
Simulation¶
Simulate with the estimated values for the parameters.
display(results.get_beta_values())
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
simulation_with_estimated_betas = my_biogeme.simulate(results.get_beta_values())
display(simulation_with_estimated_betas)
log_like beta1 simul
0 -6.092234 -1.273264 -1.148387
1 -9.752666 -1.273264 -0.574194
2 -13.413098 -1.273264 -0.382796
3 -17.073530 -1.273264 -0.287097
4 -20.733962 -1.273264 -0.229677
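The same call accepts any dictionary of beta values. For instance, the simulation could also be run at the initial values retrieved earlier (a sketch, not part of the original example).
# Hypothetical illustration: simulate with the initial values of the parameters.
simulation_with_initial_betas = my_biogeme.simulate(x)
display(simulation_with_initial_betas)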
Confidence intervals. First, we extract the values of the betas from the bootstrap draws.
draws_from_betas = results.get_betas_for_sensitivity_analysis()
for draw in draws_from_betas:
print(draw)
{'beta1': -1.2504079848560654, 'beta2': 1.346111255894925}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2471072737805293, 'beta2': 1.3600596419678224}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2649797740194353, 'beta2': 1.2842631510962423}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2732639874527638, 'beta2': 1.2487688498678546}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2732639874991436, 'beta2': 1.2487688117902658}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2873325027193596, 'beta2': 1.1875194143603178}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2690260418393766, 'beta2': 1.2669669182885877}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2732639874991436, 'beta2': 1.2487688117902658}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2777126118096789, 'beta2': 1.229556814569854}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.277712593402649, 'beta2': 1.2295586250129056}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2732640117680483, 'beta2': 1.2487690786447256}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2823937666074936, 'beta2': 1.2091966201417774}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2925579457232135, 'beta2': 1.1643223694950584}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2573978837738118, 'beta2': 1.3165118213080087}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.298103506705179, 'beta2': 1.1393434542527645}
{'beta1': -1.2649797740398232, 'beta2': 1.2842631494771497}
{'beta1': -1.2504079848560654, 'beta2': 1.346111255894925}
{'beta1': -1.2823937314558564, 'beta2': 1.2091955251477786}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.2873324933528516, 'beta2': 1.187519309859021}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2649797777101763, 'beta2': 1.2842632913325391}
{'beta1': -1.304007541673126, 'beta2': 1.1122455742287705}
{'beta1': -1.25383482258777, 'beta2': 1.3316118845742497}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.26497977397441, 'beta2': 1.2842631509636993}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2649797777101763, 'beta2': 1.2842632913325391}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2823937318606777, 'beta2': 1.2091955303777047}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2732639720980743, 'beta2': 1.2487693919282534}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2981035067539988, 'beta2': 1.1393434544494636}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2611085873267327, 'beta2': 1.3007519381922703}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.287332492839996, 'beta2': 1.1875193084801916}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2649798555111198, 'beta2': 1.2842643064389756}
Then, we calculate the confidence intervals. The default interval size is 0.9. Here, we use a different one.
left, right = my_biogeme.confidence_intervals(draws_from_betas, interval_size=0.95)
display(left)
100%|██████████| 100/100 [00:02<00:00, 38.69it/s]
log_like beta1 simul
0 -6.654739 -1.295469 -1.180349
1 -10.092196 -1.295469 -0.590174
2 -13.576352 -1.295469 -0.393450
3 -17.474367 -1.295469 -0.295087
4 -21.403557 -1.295469 -0.236070
display(right)
log_like beta1 simul
0 -5.686797 -1.252036 -1.118113
1 -9.619739 -1.252036 -0.559057
2 -13.413098 -1.252036 -0.372704
3 -16.968836 -1.252036 -0.279528
4 -20.404569 -1.252036 -0.223623
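The draws can also be used to bracket the parameters themselves. This is a minimal sketch with pandas quantiles (not a Biogeme function): the bounds of the 0.95 interval are the 2.5% and 97.5% empirical quantiles of the draws.
# Percentile confidence intervals computed directly on the bootstrap draws.
draws_df = pd.DataFrame(draws_from_betas)
display(draws_df.quantile([0.025, 0.975]))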
Validation¶
The validation consists in partitioning the data into several randomly defined slices of about the same size. Each slice is used in turn as a validation dataset: the model is re-estimated using all the data except that slice, and the estimated model is applied to the validation set (i.e. the slice). The value of the log likelihood for each observation in the validation set is reported in a dataframe. As this is done for each slice, the output is a list of dataframes, each corresponding to one of these exercises.
validation_results: list[ValidationResult] = my_biogeme.validate(results, slices=5)
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.2 46 0.0022 10 1 ++
1 -1.3 1.2 46 6e-07 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.2 50 0.00051 10 1 ++
1 -1.3 1.2 50 3.1e-08 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.3 61 0.0028 10 0.99 ++
1 -1.3 1.3 61 8e-07 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.2 54 0.0015 10 1 ++
1 -1.3 1.2 54 2.6e-07 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.3 57 0.00035 10 1 ++
1 -1.3 1.3 57 1.3e-08 10 1 ++
for one_validation_result in validation_results:
print(
f'Log likelihood for {one_validation_result.validation_modeling_elements.sample_size} '
f'validation data: {one_validation_result.simulated_values.iloc[0].sum()}'
)
Log likelihood for 1 validation data: -22.555245733904922
Log likelihood for 1 validation data: -18.713287155490335
Log likelihood for 1 validation data: -8.737896466562866
Log likelihood for 1 validation data: -15.069157781386547
Log likelihood for 1 validation data: -11.656146972777968
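If a single figure is needed, the per-slice values reported above can be accumulated into a total out-of-sample log likelihood; this is a minimal sketch reusing the attributes from the loop above.
# Sum the simulated log-likelihood contributions over all validation slices.
total_validation_log_likelihood = sum(
    one_result.simulated_values.iloc[0].sum() for one_result in validation_results
)
print(total_validation_log_likelihood)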
The following tool is used to obtain the list of files with a given extension in the local directory.
display(files_of_type(extension='yaml', name=my_biogeme.model_name))
['simple_example.yaml', 'simple_example~00.yaml', 'simple_example~01.yaml']
Total running time of the script: (0 minutes 10.865 seconds)