Logit

Estimation of a logit model using sampling of alternatives.

Michel Bierlaire, Fri Jul 25 2025, 17:36:23

import pandas as pd
from IPython.core.display_functions import display

import biogeme.biogeme_logging as blog
from alternatives import ID_COLUMN, alternatives, partitions
from biogeme.biogeme import BIOGEME
from biogeme.results_processing import get_pandas_estimated_parameters
from biogeme.sampling_of_alternatives import (
    ChoiceSetsGeneration,
    GenerateModel,
    SamplingContext,
    generate_segment_size,
)
from compare import compare
from specification_sampling import V, combined_variables
Importing the alternatives module displays a preview of the restaurant data and reports the size of the 'asian' segment:

    ID  rating  price  ...   rest_lon    distance  downtown
0    0       1      4  ...  42.220972   71.735518       1.0
1    1       2      2  ...  50.549434  106.267205       0.0
2    2       3      3  ...  97.830520  136.298409       0.0
3    3       4      1  ...  69.152206   85.941147       0.0
4    4       4      3  ...  89.145620   96.773021       0.0
..  ..     ...    ...  ...        ...         ...       ...
95  95       4      3  ...   9.511387   84.166441       0.0
96  96       1      1  ...  92.144641   95.601366       0.0
97  97       4      2  ...  27.657518   30.440555       1.0
98  98       4      4  ...  32.303213   45.027143       1.0
99  99       4      1  ...  13.672495   25.703295       1.0

[100 rows x 16 columns]
Number of asian restaurants: 33
logger = blog.get_screen_logger(level=blog.INFO)

The data file contains several columns associated with synthetic choices. Here we arbitrarily select logit_4.

CHOICE_COLUMN = 'logit_4'
SAMPLE_SIZE = 10
PARTITION = 'asian'
MODEL_NAME = f'logit_{PARTITION}_{SAMPLE_SIZE}_alt'
FILE_NAME = f'{MODEL_NAME}.dat'
OBS_FILE = 'obs_choice.dat'
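# Optional sanity check (a sketch, assuming the synthetic choice columns
# share the 'logit_' prefix): list the candidate choice columns in the file.
print([col for col in pd.read_csv(OBS_FILE).columns if col.startswith('logit_')])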
the_partition = partitions.get(PARTITION)
if the_partition is None:
    raise ValueError(f'Unknown partition: {PARTITION}')
segment_sizes = generate_segment_size(SAMPLE_SIZE, the_partition.number_of_segments())
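# For intuition: generate_segment_size splits SAMPLE_SIZE across the segments
# of the partition; the report below shows an even 5/5 split over the two
# segments. A minimal sketch of such an even split (hypothetical helper, not
# the biogeme implementation):
def even_split(total: int, n_segments: int) -> list[int]:
    base, remainder = divmod(total, n_segments)
    return [base + (1 if i < remainder else 0) for i in range(n_segments)]


assert even_split(10, 2) == [5, 5]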
observations = pd.read_csv(OBS_FILE)
context = SamplingContext(
    the_partition=the_partition,
    sample_sizes=segment_sizes,
    individuals=observations,
    choice_column=CHOICE_COLUMN,
    alternatives=alternatives,
    id_column=ID_COLUMN,
    biogeme_file_name=FILE_NAME,
    utility_function=V,
    combined_variables=combined_variables,
)
logger.info(context.reporting())
Size of the choice set: 100
Main partition: 2 segment(s) of size 33, 67
Main sample: 10: 5/33, 5/67
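
A note on the methodology: with sampled choice sets, consistent estimation of the logit model requires adding the McFadden (1978) correction ln(1 / pi_j) to the utility of each sampled alternative j, where pi_j is its inclusion probability; the model built by GenerateModel is designed to include a correction of this kind. For the stratified scheme above, the corrections amount to (a sketch of the arithmetic, not biogeme's code):

import math

# 5 alternatives drawn out of the 33 "asian" ones, 5 out of the 67 others.
print(-math.log(5 / 33))  # correction for the asian segment, about 1.89
print(-math.log(5 / 67))  # correction for the other segment, about 2.60
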
the_data_generation = ChoiceSetsGeneration(context=context)
the_model_generation = GenerateModel(context=context)
biogeme_database = the_data_generation.sample_and_merge(recycle=False)
Generating 10 + 0 alternatives for 10000 observations

100%|██████████| 10000/10000 [00:07<00:00, 1323.42it/s]
Define new variables

Defining new variables...: 100%|██████████| 10/10 [00:00<00:00, 18.75it/s]
File logit_asian_10_alt.dat has been created.
logprob = the_model_generation.get_logit()
the_biogeme = BIOGEME(biogeme_database, logprob)
the_biogeme.model_name = MODEL_NAME
Biogeme parameters read from biogeme.toml.

Calculate the null log likelihood for reporting. All 10 sampled alternatives are available to each observation, hence the availability dictionary mapping each alternative to 1.

the_biogeme.calculate_null_loglikelihood({i: 1 for i in range(SAMPLE_SIZE)})
-23025.850929940458
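
This value is easy to verify: each of the 10,000 observations faces 10 equally likely alternatives, so the null log likelihood is 10000 · ln(1/10). A quick check:

import math

print(-10_000 * math.log(10))  # -23025.850929940458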

Estimate the parameters. A previous run left a __logit_asian_10_alt.iter file, so the algorithm starts from the values stored there; as they are already optimal, it converges after zero iterations.

results = the_biogeme.estimate(recycle=False)
*** Initial values of the parameters are obtained from the file __logit_asian_10_alt.iter
Parameter values restored from __logit_asian_10_alt.iter
Starting values for the algorithm: {'beta_rating': 0.7474053406335313, 'beta_price': -0.40656630678576633, 'beta_chinese': 0.6120193026885046, 'beta_japanese': 1.1876198392575683, 'beta_korean': 0.7100322440030521, 'beta_indian': 0.9377605509978457, 'beta_french': 0.6115512270994261, 'beta_mexican': 1.2233552157690688, 'beta_lebanese': 0.6578163784697699, 'beta_ethiopian': 0.47263438214296166, 'beta_log_dist': -0.6017924240399324}
As the model is not too complex, we activate the calculation of second derivatives. To change this behavior, modify the algorithm to "simple_bounds" in the TOML file.
Optimization algorithm: hybrid Newton/BFGS with simple bounds [simple_bounds]
** Optimization: Newton with trust region for simple bounds
Optimization algorithm has converged.
Relative gradient: 4.75385039567593e-07
Cause of termination: Relative gradient = 4.8e-07 <= 6.1e-06
Number of function evaluations: 1
Number of gradient evaluations: 1
Number of hessian evaluations: 0
Algorithm: Newton with trust region for simple bound constraints
Number of iterations: 0
Optimization time: 0:00:01.316658
Calculate second derivatives and BHHH
File logit_asian_10_alt~00.html has been generated.
File logit_asian_10_alt~00.yaml has been generated.
print(results.short_summary())
Results for model logit_asian_10_alt
Nbr of parameters:              11
Sample size:                    10000
Excluded data:                  0
Null log likelihood:            -23025.85
Final log likelihood:           -18389.31
Likelihood ratio test (null):           9273.078
Rho square (null):                      0.201
Rho bar square (null):                  0.201
Akaike Information Criterion:   36800.62
Bayesian Information Criterion: 36879.94
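
These figures all follow from the two log likelihood values via the standard definitions; a quick check (small discrepancies are due to the rounded inputs):

import math

ll0, ll, k, n = -23025.85, -18389.31, 11, 10_000
print(-2 * (ll0 - ll))  # likelihood ratio test: about 9273.08
print(1 - ll / ll0)  # rho square: 0.201
print(1 - (ll - k) / ll0)  # rho bar square: 0.201
print(2 * k - 2 * ll)  # AIC: 36800.62
print(k * math.log(n) - 2 * ll)  # BIC: about 36879.9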
estimated_parameters = get_pandas_estimated_parameters(estimation_results=results)
display(estimated_parameters)
              Name     Value  Robust std err.  Robust t-stat.  Robust p-value
0      beta_rating  0.747405         0.015277       48.922826             0.0
1       beta_price -0.406566         0.012767      -31.843948             0.0
2     beta_chinese  0.612019         0.050248       12.180081             0.0
3    beta_japanese  1.187620         0.046358       25.618523             0.0
4      beta_korean  0.710032         0.042348       16.766548             0.0
5      beta_indian  0.937761         0.043111       21.752391             0.0
6      beta_french  0.611551         0.061565        9.933479             0.0
7     beta_mexican  1.223355         0.036539       33.480934             0.0
8    beta_lebanese  0.657816         0.062525       10.520805             0.0
9   beta_ethiopian  0.472634         0.050153        9.423939             0.0
10   beta_log_dist -0.601792         0.015140      -39.748700             0.0
df, msg = compare(estimated_parameters)
print(df)
              Name  True Value  Estimated Value    T-Test
0      beta_rating        0.75         0.747405  0.169838
1       beta_price       -0.40        -0.406566  0.514300
2     beta_chinese        0.75         0.612019  2.746018
3    beta_japanese        1.25         1.187620  1.345622
4      beta_korean        0.75         0.710032  0.943790
5      beta_indian        1.00         0.937761  1.443713
6      beta_french        0.75         0.611551  2.248835
7     beta_mexican        1.25         1.223355  0.729218
8    beta_lebanese        0.75         0.657816  1.474341
9   beta_ethiopian        0.50         0.472634  0.545648
10   beta_log_dist       -0.60        -0.601792  0.118391
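The T-Test column is the t statistic of each estimate against the true value used to generate the synthetic choices, |estimate - true| divided by the robust standard error. For instance, for beta_chinese:

print(abs(0.612019 - 0.75) / 0.050248)  # about 2.746, matching the table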
print(msg)
Parameters not estimated: ['mu_asian', 'mu_downtown']

Total running time of the script: (0 minutes 16.427 seconds)
