23b. Binary probit model
Bayesian estimation of a binary probit model with two alternatives: train and car. All observations where Swissmetro was chosen have been removed from the sample.
Michel Bierlaire, EPFL Sat Jun 28 2025, 12:43:40
from IPython.core.display_functions import display
from biogeme.bayesian_estimation import BayesianResults, get_pandas_estimated_parameters
from biogeme.biogeme import BIOGEME
from biogeme.expressions import Beta, Elem, NormalCdf, log
See the data processing script: Data preparation for Swissmetro (binary choice).
from swissmetro_binary import (
CAR_CO_SCALED,
CAR_TT_SCALED,
CHOICE,
TRAIN_COST_SCALED,
TRAIN_TT_SCALED,
database,
)
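For context, here is a minimal sketch of what such a preparation script might contain, assuming the standard swissmetro.dat file. The variable names match the imports above, but the script linked above is authoritative; its actual filtering and scaling may differ.

import pandas as pd
from biogeme.database import Database
from biogeme.expressions import Variable

# Hypothetical sketch; the actual preparation script may differ.
df = pd.read_csv('swissmetro.dat', sep='\t')
# Keep only observations where train (1) or car (3) was chosen.
df = df[df['CHOICE'].isin([1, 3])]
database = Database('swissmetro_binary', df)

CHOICE = Variable('CHOICE')
# Division by 100 is the usual scaling convention in the Swissmetro examples.
TRAIN_TT_SCALED = Variable('TRAIN_TT') / 100
TRAIN_COST_SCALED = Variable('TRAIN_CO') / 100
CAR_TT_SCALED = Variable('CAR_TT') / 100
CAR_CO_SCALED = Variable('CAR_CO') / 100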
Parameters to be estimated. The arguments of Beta are the name of the parameter, its starting value, a lower bound, an upper bound, and a status (0: the parameter is estimated; 1: it is kept fixed).
asc_car = Beta('asc_car', 0, None, None, 0)
b_time_car = Beta('b_time_car', 0, None, None, 0)
b_time_train = Beta('b_time_train', 0, None, None, 0)
b_cost_car = Beta('b_cost_car', 0, None, None, 0)
b_cost_train = Beta('b_cost_train', 0, None, None, 0)
Definition of the utility functions. We estimate a binary probit model, so there are only two alternatives. Since only differences of utilities are identified, the constant of the train alternative is normalized to zero, and only asc_car is estimated.
v_train = b_time_train * TRAIN_TT_SCALED + b_cost_train * TRAIN_COST_SCALED
v_car = asc_car + b_time_car * CAR_TT_SCALED + b_cost_car * CAR_CO_SCALED
Associate the log of the choice probability with the identifier of each alternative (1: train, 3: car).
log_probability_dict = {
1: log(NormalCdf(v_train - v_car)),
3: log(NormalCdf(v_car - v_train)),
}
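In a binary probit, the difference of the two error terms is (after normalization) standard normal, so P(train) = Φ(v_train - v_car) and P(car) = Φ(v_car - v_train) = 1 - P(train), where Φ is NormalCdf. A quick numerical check of this identity with scipy (an illustration, not part of the script):

from scipy.stats import norm

# For any utility difference d, the two probit probabilities sum to one.
d = 0.7
p_train = norm.cdf(d)
p_car = norm.cdf(-d)
assert abs(p_train + p_car - 1.0) < 1e-12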
Definition of the model: the contribution of each observation to the log-likelihood function. Elem selects, based on the value of CHOICE, the relevant expression from the dictionary.
log_probability = Elem(log_probability_dict, CHOICE)
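As a plain-Python analogy (illustration only; Elem operates on Biogeme expressions evaluated per observation, not on Python dictionaries):

# Elem(d, key) evaluates, for each observation, the expression stored
# under the current value of the key.
log_probability_analogy = {1: 'log P(train)', 3: 'log P(car)'}
chosen_alternative = 3
print(log_probability_analogy[chosen_alternative])  # 'log P(car)'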
Create the Biogeme object.
the_biogeme = BIOGEME(database, log_probability)
the_biogeme.model_name = 'b23b_binary_probit'
Estimate the parameters. If results from a previous run have been saved, they are loaded from file instead of re-estimating.
try:
results = BayesianResults.from_netcdf(
filename=f'saved_results/{the_biogeme.model_name}.nc'
)
except FileNotFoundError:
results = the_biogeme.bayesian_estimation()
load finished in 1694 ms (1.69 s)
print(results.short_summary())
posterior_predictive_loglike finished in 81 ms
/Users/bierlair/python_envs/venv313/lib/python3.13/site-packages/arviz/stats/stats.py:1667: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail.
See http://arxiv.org/abs/1507.04544 for details
warnings.warn(
waic_res finished in 227 ms
waic finished in 227 ms
/Users/bierlair/python_envs/venv313/lib/python3.13/site-packages/arviz/stats/stats.py:797: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.70 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations.
warnings.warn(
loo_res finished in 2741 ms (2.74 s)
loo finished in 2741 ms (2.74 s)
Sample size 2232
Sampler NUTS
Number of chains 4
Number of draws per chain 2000
Total number of draws 8000
Acceptance rate target 0.9
Run time 0:00:23.257579
Posterior predictive log-likelihood (sum of log mean p) -903.85
Expected log-likelihood E[log L(Y|θ)] -909.45
Best-draw log-likelihood (posterior upper bound) -906.99
WAIC (Widely Applicable Information Criterion) -916.19
WAIC Standard Error 35.20
Effective number of parameters (p_WAIC) 12.34
LOO (Leave-One-Out Cross-Validation) -916.44
LOO Standard Error 35.23
Effective number of parameters (p_LOO) 12.59
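On the log scale reported here, both criteria satisfy elpd = lppd - p, where lppd is the posterior predictive log-likelihood (sum of log mean p) and p is the effective number of parameters. The reported values are consistent:

\widehat{\mathrm{elpd}}_{\mathrm{WAIC}} = -903.85 - 12.34 = -916.19,
\qquad
\widehat{\mathrm{elpd}}_{\mathrm{LOO}} = -903.85 - 12.59 = -916.44.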
Get the results in a pandas table.
pandas_results = get_pandas_estimated_parameters(
estimation_results=results,
)
display(pandas_results)
Name Value (mean) ... ESS (bulk) ESS (tail)
0 b_time_train -0.653109 ... 4825.614996 4613.403497
1 b_cost_train -0.982032 ... 6785.368984 5478.985695
2 asc_car -0.354862 ... 6005.651614 5353.254223
3 b_time_car -0.186682 ... 5797.479058 5001.277388
4 b_cost_car -0.531954 ... 5604.784029 5148.758257
[5 rows x 12 columns]
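As a hypothetical post-processing step (column labels taken from the table above), one can index the table by parameter name and take ratios of posterior means, for instance the trade-off between time and cost for the train alternative:

# Hypothetical sketch: column labels follow the display above.
means = pandas_results.set_index('Name')['Value (mean)']
ratio_train = means['b_time_train'] / means['b_cost_train']
print(f'Train time/cost trade-off: {ratio_train:.3f}')  # about 0.665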