Author

Fabian Bastin

Other affiliations: Université de Namur
Bio: Fabian Bastin is an academic researcher from Université de Montréal. The author has contributed to research topics including Mixed logit and Discrete choice. The author has an h-index of 12 and has co-authored 44 publications receiving 636 citations. Previous affiliations of Fabian Bastin include Université de Namur.

Papers
Journal ArticleDOI
TL;DR: This paper develops a model of activity and trip scheduling that combines three elements that have to date mostly been investigated in isolation: the duration of activities, the time-of-day preference for activity participation and the effect of schedule delays on the valuation of activities.
Abstract: This paper develops a model of activity and trip scheduling that combines three elements that have to date mostly been investigated in isolation: the duration of activities, the time-of-day preference for activity participation and the effect of schedule delays on the valuation of activities. The model is an error component discrete choice model, describing individuals' choice between alternative workday activity patterns. The utility function is formulated in a flexible way, applying a bell-shaped component to represent time-of-day preferences for activities. The model was tested using a 2001 data set from the Netherlands. The estimation results suggest that time-of-day preferences and schedule delays associated with the work activity are the most important factors influencing the scheduling of the work tour. Error components included in the model suggest that there is considerable unobserved heterogeneity with respect to mode preferences and schedule delay.
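
As a rough illustration of the bell-shaped time-of-day component described above, the sketch below encodes a peaked marginal utility of activity participation and integrates it over an episode; the Gaussian form and all parameter values are assumptions chosen for illustration, not the paper's estimated specification.

```python
import numpy as np

def time_of_day_utility(t, peak=8.5, spread=1.5, scale=2.0):
    """Illustrative bell-shaped marginal utility of participating in an
    activity at clock time t (hours); peak, spread and scale are assumed
    parameters, not estimates from the paper."""
    return scale * np.exp(-0.5 * ((t - peak) / spread) ** 2)

def activity_utility(start, duration, step=0.1):
    """Total utility of an activity episode: integrate the bell-shaped
    marginal utility over [start, start + duration]."""
    grid = np.arange(start, start + duration, step)
    return np.sum(time_of_day_utility(grid)) * step

# Example: an 8-hour work activity started at 07:00 vs. at 10:00
print(activity_utility(7.0, 8.0), activity_utility(10.0, 8.0))
```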

100 citations

Journal ArticleDOI
TL;DR: This work allows for local SAA minimizers of possibly nonconvex problems and proves, under suitable conditions, almost sure convergence of local second-order solutions of the SAA problem to second-order critical points of the true problem.
Abstract: Monte Carlo methods have extensively been used and studied in the area of stochastic programming. Their convergence properties typically consider global minimizers or first-order critical points of the sample average approximation (SAA) problems and minimizers of the true problem, and show that the former converge to the latter for increasing sample size. However, the assumption of global minimization essentially restricts the scope of these results to convex problems. We review and extend these results in two directions: we allow for local SAA minimizers of possibly nonconvex problems and prove, under suitable conditions, almost sure convergence of local second-order solutions of the SAA problem to second-order critical points of the true problem. We also apply this new theory to the estimation of mixed logit models for discrete choice analysis. New useful convergence properties are derived in this context, both for the constrained and unconstrained cases, and associated estimates of the simulation bias and variance are proposed.
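
To make the SAA connection concrete, the sketch below builds the simulated log-likelihood of a simple mixed logit model by replacing each choice-probability integral with an average over a fixed set of draws and hands it to a generic optimizer; the synthetic data, the normal mixing distribution and the BFGS solver are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, J, K, R = 200, 3, 2, 100             # individuals, alternatives, attributes, draws
X = rng.normal(size=(N, J, K))          # alternative attributes (synthetic)
y = rng.integers(0, J, size=N)          # observed choices (synthetic)
draws = rng.standard_normal((R, K))     # fixed draws defining the SAA objective

def neg_sim_loglik(theta):
    """SAA objective: minus the log of simulated choice probabilities.
    theta = (mean, log-std) of normally distributed taste coefficients."""
    mu, sigma = theta[:K], np.exp(theta[K:])
    beta = mu + draws * sigma                        # (R, K) coefficient draws
    v = np.einsum('njk,rk->rnj', X, beta)            # utilities per draw
    p = np.exp(v - v.max(axis=2, keepdims=True))
    p /= p.sum(axis=2, keepdims=True)                # logit probabilities per draw
    p_chosen = p[:, np.arange(N), y].mean(axis=0)    # average over draws (SAA)
    return -np.log(p_chosen).sum()

res = minimize(neg_sim_loglik, np.zeros(2 * K), method='BFGS')
print(res.x)   # estimated means and log-stds of the random coefficients
```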

86 citations

Journal ArticleDOI
TL;DR: A new algorithm is described that uses Monte Carlo approximations in the context of modern trust-region techniques, but also exploits accuracy and bias estimators to considerably increase its computational efficiency.
Abstract: Researchers and analysts are increasingly using mixed logit models for estimating responses to forecast demand and to determine the factors that affect individual choices. However, the numerical cost associated with their evaluation can be prohibitive, since the choice probabilities involved are represented by multidimensional integrals. This cost remains high even if Monte Carlo or quasi-Monte Carlo techniques are used to estimate those integrals. This paper describes a new algorithm that uses Monte Carlo approximations in the context of modern trust-region techniques, but also exploits accuracy and bias estimators to considerably increase its computational efficiency. Numerical experiments underline the importance of the choice of an appropriate optimisation technique and indicate that the proposed algorithm allows substantial gains in time while delivering more information to the practitioner.
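
The kind of loop such a method revolves around can be sketched as follows: a trust-region step on a sample-average objective in which the sample size is enlarged whenever the estimated Monte Carlo noise dominates the predicted decrease. The steepest-descent model step and the doubling rule below are generic stand-ins, not the paper's accuracy and bias estimators.

```python
import numpy as np

def adaptive_tr_sketch(sample_obj, grad, x0, n0=100, n_max=10000,
                       delta=1.0, max_iter=50):
    """Generic sketch of a trust-region loop with a Monte Carlo objective.

    sample_obj(x, n) -> (estimate, std_error) of the objective using n draws
    grad(x, n)       -> sample-average gradient using n draws
    The sample-size rule (double n when the noise dominates the predicted
    decrease) is an illustrative assumption, not the paper's estimator.
    """
    x, n = np.asarray(x0, float), n0
    for _ in range(max_iter):
        f, se = sample_obj(x, n)
        g = grad(x, n)
        step = -delta * g / (np.linalg.norm(g) + 1e-12)   # model step
        pred = -g @ step                                  # predicted decrease
        if se > 0.5 * pred and n < n_max:                 # noise too large: refine
            n = min(2 * n, n_max)
            continue
        f_new, _ = sample_obj(x + step, n)
        rho = (f - f_new) / max(pred, 1e-12)              # actual vs predicted
        if rho > 0.1:                                     # accept the step
            x = x + step
            delta = min(2 * delta, 10.0) if rho > 0.75 else delta
        else:                                             # reject, shrink region
            delta *= 0.5
    return x

# Toy usage: noisy quadratic objective
rng = np.random.default_rng(1)
def sample_obj(x, n):
    vals = 0.5 * np.sum(x ** 2) + rng.normal(0, 1.0, n)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)
grad = lambda x, n: x + rng.normal(0, 1.0 / np.sqrt(n), x.shape)
print(adaptive_tr_sketch(sample_obj, grad, np.array([3.0, -2.0])))
```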

53 citations

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the applicability of the progressive hedging algorithm (PHA) for managing high-dimensional reservoir systems over long-term (more than a year) horizons in highly uncertain decision environments.
Abstract: Among the numerous methods proposed over the past decades for solving reservoir management problems, only a few are applicable to high-dimensional reservoir systems (HDRSs). The progressive hedging algorithm (PHA) has rarely been used for managing reservoir systems, but this method is a promising alternative to conventionally used methods for managing HDRSs (e.g., stochastic dual dynamic programming). The PHA is especially well suited when a new stochastic optimization model must be built upon an existing deterministic optimization model (DOM). In such cases, scenario subproblems can be solved using an existing DOM with minor modifications. In previous studies, the PHA was rarely used and only tested on problems covering short-range planning horizons (2 months with six time periods) where a small number of nonanticipativity constraints (NACs) must be satisfied. Large reservoirs often need to be managed over a much longer planning horizon (1–5 years) containing many tens of time periods. In such cases, convergence becomes much more difficult to achieve because of the larger number of NACs to be satisfied. Finding a nonanticipative solution becomes particularly difficult when the input scenarios differ drastically. In this study, we demonstrate the applicability of the PHA for managing HDRSs over long-term (more than a year) horizons in highly uncertain decision environments. We apply the PHA to Hydro-Quebec's reservoir system over a 92-week (92-period) horizon. We analyze the performance of the PHA for different penalty parameter values. Deterministic solutions are compared to stochastic solutions.
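
For reference, one PHA pass alternates between scenario subproblems augmented with multiplier and proximal terms and an averaging step that restores nonanticipativity. The toy quadratic subproblem below (with a closed-form solution) is an illustrative assumption; in the reservoir setting each subproblem would instead be solved with the existing deterministic optimization model.

```python
import numpy as np

def progressive_hedging(scenarios, probs, rho=1.0, iters=100):
    """Illustrative PHA on a toy problem: choose a single decision x that
    minimizes E_s[0.5*a_s*x^2 - b_s*x] over scenarios s = (a_s, b_s).

    Each augmented subproblem min_x 0.5*a*x^2 - b*x + w*x + 0.5*rho*(x - xbar)^2
    has the closed-form solution used below.
    """
    a = np.array([s[0] for s in scenarios], float)
    b = np.array([s[1] for s in scenarios], float)
    p = np.asarray(probs, float)
    x = b / a                                 # scenario-wise (anticipative) solutions
    w = np.zeros_like(x)                      # multipliers on nonanticipativity
    for _ in range(iters):
        xbar = p @ x                          # implementable (nonanticipative) point
        w += rho * (x - xbar)                 # multiplier update
        x = (b - w + rho * xbar) / (a + rho)  # solve augmented scenario subproblems
    return xbar

# Three equiprobable scenarios (a_s, b_s): hedged decision
print(progressive_hedging([(1.0, 2.0), (2.0, 1.0), (1.5, 3.0)], [1/3, 1/3, 1/3]))
```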

47 citations

Journal ArticleDOI
TL;DR: This paper proposes to capture the randomness present in the model by using a new nonparametric estimation method, based on the approximation of inverse cumulative distribution functions, which provides a more realistic interpretation of the observed behaviours.
Abstract: The estimation of random parameters by means of mixed logit models is now current practice for the analysis of transportation behaviour. One of the most straightforward applications is the derivation of the willingness-to-pay distribution over a heterogeneous population, an important element for dynamic tolling strategies on congested networks. In numerous practical cases, the underlying discrete choice models involve parametric distributions that are specified a priori and whose parameters are estimated. This approach can, however, lead to problems of realistic interpretation, such as negative values of time. In this paper, we propose to capture the randomness present in the model by using a new nonparametric estimation method, based on the approximation of inverse cumulative distribution functions. This technique is applied to simulated data, and the ability to recover both parametric and nonparametric random vectors is tested. The nonparametric mixed logit model is also used on real data derived from a stated preference survey conducted in the region of Brussels (Belgium). The model presents multiple choices and is estimated on repeated observations. The results obtained provide a more realistic interpretation of the observed behaviours.
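
The core building block, representing a random coefficient through a flexible monotone approximation of its inverse cumulative distribution function evaluated at uniform draws, can be sketched as follows; the piecewise-linear parameterization and the parameter values are assumptions made for illustration, not the paper's estimator.

```python
import numpy as np

def inverse_cdf(u, knots_z):
    """Illustrative nonparametric random coefficient: a monotone piecewise-linear
    inverse CDF evaluated at uniform draws u in (0, 1).

    knots_z are unconstrained parameters; positive increments obtained via exp()
    guarantee monotonicity, so heterogeneity of essentially any shape can be
    represented without assuming a parametric (e.g. normal) mixing distribution.
    """
    levels = knots_z[0] + np.concatenate(([0.0], np.cumsum(np.exp(knots_z[1:]))))
    grid = np.linspace(0.0, 1.0, len(levels))
    return np.interp(u, grid, levels)

# Draws from the implied coefficient (e.g. willingness-to-pay) distribution
rng = np.random.default_rng(0)
u = rng.uniform(size=5)
print(inverse_cdf(u, np.array([-1.0, 0.0, 0.5, 0.5, 0.0])))
```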

45 citations


Cited by
Journal ArticleDOI
TL;DR: A comprehensive review of Uncertainty-Based Multidisciplinary Design Optimization (UMDO) theory and the state of the art in UMDO methods for aerospace vehicles is presented.

426 citations

Journal ArticleDOI
TL;DR: A criterion is proposed for increasing the sample size based on variance estimates obtained during the computation of a batch gradient, and an O(1/ε) complexity bound is established on the total cost of a gradient method.
Abstract: This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient. We establish an O(1/ε) complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. The third part of the paper shifts the focus to L1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
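
The sample-size test in the first part can be illustrated with the usual variance ("norm") test: keep the current batch if the estimated variance of the batch gradient is small relative to its squared norm, and otherwise enlarge it. The constants and the exact inequality below are illustrative assumptions rather than the paper's precise rule.

```python
import numpy as np

def next_batch_size(per_example_grads, theta=0.5):
    """Sketch of a variance-based batch-size test.

    per_example_grads : (n, d) array of gradients of each sampled example
    theta             : accuracy parameter in (0, 1)

    Keep the current batch if the estimated variance of the batch gradient is
    small relative to its squared norm; otherwise return an enlarged batch size
    chosen so that the same inequality would hold approximately.
    """
    g = per_example_grads.mean(axis=0)                  # batch gradient
    n = per_example_grads.shape[0]
    var = per_example_grads.var(axis=0, ddof=1).sum()   # trace of the covariance
    if var / n <= (theta ** 2) * np.dot(g, g):
        return n                                        # batch is accurate enough
    return int(np.ceil(var / ((theta ** 2) * np.dot(g, g))))

# Toy usage with random per-example gradients
rng = np.random.default_rng(0)
print(next_batch_size(rng.normal(0.1, 1.0, size=(64, 10))))
```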

380 citations

Journal ArticleDOI
TL;DR: An overview of the use of Monte Carlo sampling-based methods for stochastic optimization problems is given, with the goal of introducing the topic to students and researchers and providing a practical guide for someone who needs to solve a stochastic optimization problem with sampling.

256 citations

Journal ArticleDOI
TL;DR: Recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization and optimization without derivatives are reviewed.
Abstract: Trust region methods are a class of numerical methods for optimization. Unlike line search type methods where a line search is carried out in each iteration, trust region methods compute a trial step by solving a trust region subproblem where a model function is minimized within a trust region. Due to the trust region constraint, nonconvex models can be used in trust region subproblems, and trust region algorithms can be applied to nonconvex and ill-conditioned problems. Normally it is easier to establish the global convergence of a trust region algorithm than that of its line search counterpart. In the paper, we review recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization and optimization without derivatives. Results on trust region subproblems and regularization methods are also discussed.
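
A bare-bones instance of the iteration the review is organized around, using a quadratic model, a Cauchy-point step and the standard radius-update rules, is sketched below; the constants and the toy problem are chosen for illustration only.

```python
import numpy as np

def trust_region(f, grad, hess, x0, delta=1.0, delta_max=10.0,
                 eta=0.1, tol=1e-8, max_iter=100):
    """Basic trust-region method with a Cauchy-point step on the quadratic model
    m(p) = f + g.p + 0.5 p.B p subject to ||p|| <= delta."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        gBg = g @ B @ g
        # Cauchy point: minimize the model along -g within the trust region
        tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
        p = -tau * delta * g / np.linalg.norm(g)
        pred = -(g @ p + 0.5 * p @ B @ p)                 # predicted model decrease
        rho = (f(x) - f(x + p)) / max(pred, 1e-16)        # actual vs predicted
        if rho < 0.25:
            delta *= 0.25                                 # shrink the region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, delta_max)             # expand the region
        if rho > eta:
            x = x + p                                     # accept the step
    return x

# Toy usage on a convex quadratic: minimizer solves A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hess = lambda x: A
print(trust_region(f, grad, hess, np.array([5.0, 5.0])))
```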

249 citations