biogeme.biogeme
Examples of use of several functions.
This example is designed for programmers who need illustrations of how to use the functions of the module. The examples only illustrate the syntax; they do not correspond to any meaningful model.
Michel Bierlaire Sun Jun 29 2025, 01:14:48
import pandas as pd
from IPython.core.display_functions import display
import biogeme.biogeme_logging as blog
from biogeme.biogeme import BIOGEME
from biogeme.database import Database
from biogeme.expressions import Beta, Variable, exp
from biogeme.function_output import FunctionOutput
from biogeme.jax_calculator import evaluate_formula
from biogeme.results_processing import get_pandas_estimated_parameters
from biogeme.second_derivatives import SecondDerivativesMode
from biogeme.tools import CheckDerivativesResults
from biogeme.tools.files import files_of_type
from biogeme.validation import ValidationResult
from biogeme.version import get_text
Version of Biogeme.
print(get_text())
biogeme 3.3.3a0 [2025-12-25]
Home page: http://biogeme.epfl.ch
Submit questions to https://groups.google.com/d/forum/biogeme
Michel Bierlaire, Transport and Mobility Laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL)
Logger.
logger = blog.get_screen_logger(level=blog.INFO)
logger.info('Logger initialized')
Logger initialized
Definition of a database
df = pd.DataFrame(
{
'Person': [1, 1, 1, 2, 2],
'Exclude': [0, 0, 1, 0, 1],
'Variable1': [1, 2, 3, 4, 5],
'Variable2': [10, 20, 30, 40, 50],
'Choice': [1, 2, 3, 1, 2],
'Av1': [0, 1, 1, 1, 1],
'Av2': [1, 1, 1, 1, 1],
'Av3': [0, 1, 1, 1, 1],
}
)
my_data = Database('test', df)
Data
display(my_data.dataframe)
Person Exclude Variable1 Variable2 Choice Av1 Av2 Av3
0 1.0 0.0 1.0 10.0 1.0 0.0 1.0 0.0
1 1.0 0.0 2.0 20.0 2.0 1.0 1.0 1.0
2 1.0 1.0 3.0 30.0 3.0 1.0 1.0 1.0
3 2.0 0.0 4.0 40.0 1.0 1.0 1.0 1.0
4 2.0 1.0 5.0 50.0 2.0 1.0 1.0 1.0
Definition of various expressions.
Variable1 = Variable('Variable1')
Variable2 = Variable('Variable2')
beta1 = Beta('beta1', -1.0, -3, 3, 0)
beta2 = Beta('beta2', 2.0, -3, 10, 0)
likelihood = -(beta1**2) * Variable1 - exp(beta2 * beta1) * Variable2 - beta2**4
simul = beta1 / Variable1 + beta2 / Variable2
dict_of_expressions = {'log_like': likelihood, 'beta1': beta1, 'simul': simul}
Creation of the BIOGEME object.
my_biogeme = BIOGEME(my_data, dict_of_expressions)
my_biogeme.model_name = 'simple_example'
print(my_biogeme)
Default values of the Biogeme parameters are used.
File biogeme.toml has been created
simple_example: database [test]{'log_like': (((UnaryMinus(PowerConstant(<Beta name=beta1 value=-1.0 status=0>, 2.0)) * <Variable name=Variable1>) - (exp((<Beta name=beta2 value=2.0 status=0> * <Beta name=beta1 value=-1.0 status=0>)) * <Variable name=Variable2>)) - PowerConstant(<Beta name=beta2 value=2.0 status=0>, 4.0)), 'beta1': <Beta name=beta1 value=-1.0 status=0>, 'simul': ((<Beta name=beta1 value=-1.0 status=0> / <Variable name=Variable1>) + (<Beta name=beta2 value=2.0 status=0> / <Variable name=Variable2>))}
The data is stored in the Biogeme object.
display(my_biogeme.database.dataframe)
Person Exclude Variable1 Variable2 Choice Av1 Av2 Av3
0 1.0 0.0 1.0 10.0 1.0 0.0 1.0 0.0
1 1.0 0.0 2.0 20.0 2.0 1.0 1.0 1.0
2 1.0 1.0 3.0 30.0 3.0 1.0 1.0 1.0
3 2.0 0.0 4.0 40.0 1.0 1.0 1.0 1.0
4 2.0 1.0 5.0 50.0 2.0 1.0 1.0 1.0
Log likelihood with the initial values of the parameters.
my_biogeme.calculate_init_likelihood()
-115.30029248549191
Calculate the log likelihood with different values of the parameters. We retrieve the current values and add 1 to each of them.
x = my_biogeme.expressions_registry.free_betas_init_values
x_plus = {key: value + 1.0 for key, value in x.items()}
print(x_plus)
{'beta1': 0.0, 'beta2': 3.0}
log_likelihood_x_plus = evaluate_formula(
model_elements=my_biogeme.model_elements,
the_betas=x_plus,
second_derivatives_mode=SecondDerivativesMode.NEVER,
numerically_safe=False,
)
print(log_likelihood_x_plus)
-555.0
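As a sanity check (plain Python, not the Biogeme API), both log likelihood values printed above can be reproduced by hand, since the expression has a closed form:

```python
import math

# Data columns from the example database above.
variable1 = [1, 2, 3, 4, 5]
variable2 = [10, 20, 30, 40, 50]


def log_like(beta1: float, beta2: float) -> float:
    """Closed-form log likelihood of this example, summed over the rows."""
    return sum(
        -(beta1**2) * v1 - math.exp(beta2 * beta1) * v2 - beta2**4
        for v1, v2 in zip(variable1, variable2)
    )


print(log_like(-1.0, 2.0))  # matches the initial log likelihood printed earlier
print(log_like(0.0, 3.0))   # shifted betas: -555.0
```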
Calculate the log-likelihood function and its derivatives.
the_function_output: FunctionOutput = my_biogeme.function_evaluator.evaluate(
the_betas=x_plus,
gradient=True,
hessian=True,
bhhh=True,
)
print(f'f = {the_function_output.function}')
f = -555.0
print(f'g = {the_function_output.gradient}')
g = [-450. -540.]
pd.DataFrame(the_function_output.hessian)
pd.DataFrame(the_function_output.bhhh)
Check the implementation of the derivatives numerically. The analytical derivatives are compared to the numerical derivatives obtained by finite differences.
check_results: CheckDerivativesResults = my_biogeme.check_derivatives(verbose=True)
Comparing first derivatives
x Gradient FinDiff Difference
beta1 -10.6006 -10.6006 -9.40832e-07
beta2 -139.7 -139.7 3.80828e-06
Comparing second derivatives
Row Col Hessian FinDiff Difference
beta1 beta1 -111.201 -111.201 -9.27991e-07
beta2 beta1 20.3003 20.3003 1.42409e-06
beta1 beta2 20.3003 20.3003 -2.4484e-07
beta2 beta2 -260.3 -260.3 3.34428e-06
print(f'f = {check_results.function}')
f = -115.30029248549191
print(f'g = {check_results.analytical_gradient}')
g = [ -10.60058497 -139.69970751]
display(pd.DataFrame(check_results.analytical_hessian))
0 1
0 -111.201170 20.300292
1 20.300292 -260.300292
display(pd.DataFrame(check_results.finite_differences_gradient))
0
0 -10.600584
1 -139.699711
display(pd.DataFrame(check_results.finite_differences_hessian))
0 1
0 -111.201169 20.300293
1 20.300291 -260.300296
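The finite-difference comparison above can be sketched with central differences on the closed-form log likelihood (a minimal illustration of the idea, not the actual check_derivatives implementation):

```python
import math

variable1 = [1, 2, 3, 4, 5]
variable2 = [10, 20, 30, 40, 50]


def log_like(beta1: float, beta2: float) -> float:
    """Closed-form log likelihood of this example, summed over the rows."""
    return sum(
        -(beta1**2) * v1 - math.exp(beta2 * beta1) * v2 - beta2**4
        for v1, v2 in zip(variable1, variable2)
    )


# Central finite differences at the initial point (beta1, beta2) = (-1, 2).
h = 1e-6
g_beta1 = (log_like(-1.0 + h, 2.0) - log_like(-1.0 - h, 2.0)) / (2 * h)
g_beta2 = (log_like(-1.0, 2.0 + h) - log_like(-1.0, 2.0 - h)) / (2 * h)
print(round(g_beta1, 4), round(g_beta2, 4))  # approx -10.6006 and -139.6997
```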
Estimation
Estimation of the parameters, with bootstrapping
results = my_biogeme.estimate(run_bootstrap=True)
*** Initial values of the parameters are obtained from the file __simple_example.iter
Cannot read file __simple_example.iter. Statement is ignored.
Starting values for the algorithm: {}
As the model is not too complex, we activate the calculation of second derivatives. To change this behavior, adjust the corresponding optimization parameters in the biogeme.toml file.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.2 1.4 70 0.056 10 1.1 ++
1 -1.3 1.3 67 0.021 1e+02 1 ++
2 -1.3 1.2 67 8.1e-05 1e+03 1 ++
3 -1.3 1.2 67 1.2e-09 1e+03 1 ++
Optimization algorithm has converged.
Relative gradient: 1.247036390410727e-09
Cause of termination: Relative gradient = 1.2e-09 <= 6.1e-06
Number of function evaluations: 13
Number of gradient evaluations: 9
Number of hessian evaluations: 4
Algorithm: Newton with trust region for simple bound constraints
Number of iterations: 4
Proportion of Hessian calculation: 4/4 = 100.0%
Optimization time: 0:00:00.014195
Calculate second derivatives and BHHH
Re-estimate the model 100 times for bootstrapping
Bootstraps: 100%|██████████| 100/100 [00:07<00:00, 13.23it/s]
File simple_example.html has been generated.
File simple_example.yaml has been generated.
estimated_parameters = get_pandas_estimated_parameters(estimation_results=results)
display(estimated_parameters)
Name Value Bootstrap std err. Bootstrap t-stat. Bootstrap p-value
0 beta1 -1.273264 0.015096 -84.346236 0.0
1 beta2 1.248769 0.065946 18.936107 0.0
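The bootstrap t-statistics in the table follow the standard definition, the estimate divided by its bootstrap standard error (a sketch with the rounded figures from the table, so the last digits differ slightly):

```python
# Rounded figures for beta1, copied from the table above.
value, std_err = -1.273264, 0.015096
t_stat = value / std_err
print(round(t_stat, 1))  # close to the tabulated t-statistic
```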
If the model has already been estimated, the estimation results can be recycled. In that case, the other arguments are ignored, and the results are read from the file as they are.
recycled_results = my_biogeme.estimate(recycle=True, run_bootstrap=True)
Estimation results read from simple_example.yaml. There is no guarantee that they correspond to the specified model.
print(recycled_results.short_summary())
Results for model simple_example
Nbr of parameters: 2
Sample size: 5
Excluded data: 0
Final log likelihood: -67.06549
Akaike Information Criterion: 138.131
Bayesian Information Criterion: 137.3499
recycled_parameters = get_pandas_estimated_parameters(
estimation_results=recycled_results
)
display(recycled_parameters)
Name Value Bootstrap std err. Bootstrap t-stat. Bootstrap p-value
0 beta1 -1.273264 0.015096 -84.346236 0.0
1 beta2 1.248769 0.065946 18.936107 0.0
Simulation
Simulate with the estimated values for the parameters.
display(results.get_beta_values())
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
simulation_with_estimated_betas = my_biogeme.simulate(results.get_beta_values())
display(simulation_with_estimated_betas)
log_like beta1 simul
0 -6.092234 -1.273264 -1.148387
1 -9.752666 -1.273264 -0.574194
2 -13.413098 -1.273264 -0.382796
3 -17.073530 -1.273264 -0.287097
4 -20.733962 -1.273264 -0.229677
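As a consistency check (plain Python, outside the Biogeme API), summing the per-observation simulated log likelihoods above reproduces the final log likelihood reported by estimate():

```python
# Per-observation simulated log likelihoods copied from the output above.
per_row_log_like = [-6.092234, -9.752666, -13.413098, -17.073530, -20.733962]
total = sum(per_row_log_like)
print(round(total, 5))  # -67.06549, the final log likelihood of the estimation
```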
Confidence intervals. First, we extract the values of betas from the bootstrapping draws.
draws_from_betas = results.get_betas_for_sensitivity_analysis()
for draw in draws_from_betas:
print(draw)
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2538348107653448, 'beta2': 1.331612380127896}
{'beta1': -1.2504079848560654, 'beta2': 1.346111255894925}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.3103131424440955, 'beta2': 1.0825801152308525}
{'beta1': -1.2504079848560654, 'beta2': 1.346111255894925}
{'beta1': -1.2538348107653448, 'beta2': 1.331612380127896}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.243923633974187, 'beta2': 1.373502065868922}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2538348107653448, 'beta2': 1.331612380127896}
{'beta1': -1.298103506705179, 'beta2': 1.1393434542527645}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.2538348107653448, 'beta2': 1.331612380127896}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.282393736283639, 'beta2': 1.209195603860136}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2538348107653448, 'beta2': 1.331612380127896}
{'beta1': -1.2538348107653448, 'beta2': 1.331612380127896}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.287332533620085, 'beta2': 1.1875198317198776}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2690260405244644, 'beta2': 1.2669668838392423}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.298103506705179, 'beta2': 1.1393434542527645}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2925579238256841, 'beta2': 1.1643222008322363}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.298103506705179, 'beta2': 1.1393434542527645}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2732639874527638, 'beta2': 1.2487688498678546}
{'beta1': -1.2573978799513064, 'beta2': 1.3165120810083486}
{'beta1': -1.269026034376409, 'beta2': 1.2669670272413898}
{'beta1': -1.273263987213694, 'beta2': 1.2487688099301162}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.250407998614899, 'beta2': 1.34611061137777}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
{'beta1': -1.2925579238256841, 'beta2': 1.1643222008322363}
{'beta1': -1.298103506705179, 'beta2': 1.1393434542527645}
{'beta1': -1.2777127986367816, 'beta2': 1.2295580953677736}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2573978884998094, 'beta2': 1.3165124665267176}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2777126132141698, 'beta2': 1.229557601161872}
{'beta1': -1.2538348107653448, 'beta2': 1.331612380127896}
{'beta1': -1.2649797664093623, 'beta2': 1.2842632599441375}
{'beta1': -1.2777126116342759, 'beta2': 1.2295568140731905}
{'beta1': -1.2981034537533955, 'beta2': 1.1393434624440584}
{'beta1': -1.243923633974187, 'beta2': 1.373502065868922}
{'beta1': -1.2611085120748595, 'beta2': 1.3007521979087608}
{'beta1': -1.298103506705179, 'beta2': 1.1393434542527645}
{'beta1': -1.2538348225872284, 'beta2': 1.3316118845639897}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.282393731448391, 'beta2': 1.2091955250922635}
{'beta1': -1.2925578214686664, 'beta2': 1.164322217510277}
{'beta1': -1.3243083744860236, 'beta2': 1.0128202499116878}
{'beta1': -1.2611085813010015, 'beta2': 1.3007517002098443}
{'beta1': -1.2649797742013058, 'beta2': 1.2842631765105266}
Then, we calculate the confidence intervals. The default interval size is 0.9. Here, we use a different one.
left, right = my_biogeme.confidence_intervals(draws_from_betas, interval_size=0.95)
display(left)
100%|██████████| 100/100 [00:02<00:00, 40.55it/s]
log_like beta1 simul
0 -6.704727 -1.298104 -1.184169
1 -10.126055 -1.298104 -0.592085
2 -13.634897 -1.298104 -0.394723
3 -17.540111 -1.298104 -0.296042
4 -21.503871 -1.298104 -0.236834
display(right)
log_like beta1 simul
0 -5.648832 -1.250408 -1.115797
1 -9.612592 -1.250408 -0.557898
2 -13.413098 -1.250408 -0.371932
3 -16.965345 -1.250408 -0.278949
4 -20.390037 -1.250408 -0.223159
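Conceptually, a 95% bootstrap confidence interval keeps the 2.5% and 97.5% percentiles of the simulated values of each quantity across the draws. A minimal nearest-rank sketch on hypothetical data (an illustration of the idea, not the Biogeme implementation, which may interpolate):

```python
# Hypothetical simulated values of one quantity across bootstrap draws.
draws = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7]


def percentile(values: list[float], q: float) -> float:
    """Nearest-rank percentile for 0 <= q <= 1 (simplified)."""
    ordered = sorted(values)
    index = min(int(q * len(ordered)), len(ordered) - 1)
    return ordered[index]


left_bound = percentile(draws, 0.025)   # lower bound of a 95% interval
right_bound = percentile(draws, 0.975)  # upper bound of a 95% interval
print(left_bound, right_bound)  # 0.8 1.7
```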
Validation
The validation consists of organizing the data into several randomly defined slices of approximately equal size. Each slice is in turn used as a validation dataset: the model is re-estimated on all the data except the slice, and the estimated model is applied to the validation set (that is, the slice). The log likelihood of each observation in the validation set is reported in a dataframe. As this is done for each slice, the output is a list of dataframes, each corresponding to one of these exercises.
validation_results: list[ValidationResult] = my_biogeme.validate(results, slices=5)
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.2 50 0.00048 10 1 ++
1 -1.3 1.2 50 2.6e-08 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.2 54 0.00042 10 1 ++
1 -1.3 1.2 54 2e-08 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.3 57 0.00035 10 1 ++
1 -1.3 1.3 57 1.3e-08 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.3 61 0.00029 10 1 ++
1 -1.3 1.3 61 9.1e-09 10 1 ++
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Iter. beta1 beta2 Function Relgrad Radius Rho
0 -1.3 1.2 46 0.0081 10 1 ++
1 -1.3 1.2 46 7.9e-06 1e+02 1 ++
2 -1.3 1.2 46 7.9e-12 1e+02 1 ++
for one_validation_result in validation_results:
print(
f'Log likelihood for {one_validation_result.validation_modeling_elements.sample_size} '
f'validation data: {one_validation_result.simulated_values.iloc[0].sum()}'
)
Log likelihood for 1 validation data: -18.713287160476334
Log likelihood for 1 validation data: -15.069157785823553
Log likelihood for 1 validation data: -11.656146972778231
Log likelihood for 1 validation data: -8.737894617549633
Log likelihood for 1 validation data: -22.55524724604721
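The slicing logic behind validate() can be sketched with plain Python (an illustration of the idea on toy indices, not the actual implementation):

```python
import random

rows = list(range(10))  # stand-in for the row indices of the estimation data
random.Random(0).shuffle(rows)  # random assignment, fixed seed for reproducibility

slices = 5
folds = [rows[i::slices] for i in range(slices)]  # 5 slices of about equal size

for validation_fold in folds:
    estimation_rows = [r for r in rows if r not in validation_fold]
    # Re-estimate on estimation_rows, then evaluate the estimated model
    # on validation_fold to obtain the per-observation log likelihoods.
    print(len(estimation_rows), len(validation_fold))
```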
The following tool is used to obtain the list of files with a given extension in the local directory.
display(files_of_type(extension='yaml', name=my_biogeme.model_name))
['simple_example.yaml']
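For reference, a plain-pathlib sketch of the same lookup (an assumption about what files_of_type does, not its actual implementation), demonstrated in a temporary directory:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # Create two files; only the one matching the model name should be listed.
    (Path(tmp) / 'simple_example.yaml').touch()
    (Path(tmp) / 'other_model.yaml').touch()
    matches = sorted(p.name for p in Path(tmp).glob('simple_example*.yaml'))
    print(matches)  # ['simple_example.yaml']
```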
Total running time of the script: (0 minutes 11.339 seconds)