Mode choice
Mode choice analysis is the third step in the conventional four-step transportation forecasting model. The steps, in order, are trip generation, trip distribution, mode choice analysis, and route assignment. Trip distribution's zonal interchange analysis yields a set of origin-destination tables that tells where the trips will be made. Mode choice analysis allows the modeler to determine what mode of transport will be used and what modal share results.
The early transportation planning model developed by the Chicago Area Transportation Study (CATS) focused on transit: it sought to determine how much travel would continue by transit. The CATS divided transit trips into two classes: trips to the Central Business District (CBD) and other trips. For the latter, increases in auto ownership and use were traded off against bus use, and trend data were used. CBD travel was analyzed using historic mode choice data together with projections of CBD land uses. Somewhat similar techniques were used in many studies. Two decades after CATS, for example, the London study followed essentially the same procedure, but in this case researchers first divided trips into those made in the inner part of the city and those in the outer part. This procedure was followed because it was thought that income drove mode choice.
Diversion curve techniques
The CATS had diversion curve techniques available and used them for some tasks. At first, the CATS studied the diversion of auto traffic from streets and arterial roads to proposed expressways. Diversion curves were also used for bypasses built around cities to find out what percent of traffic would use the bypass. The mode choice version of diversion curve analysis proceeds this way: one forms a ratio, say:

R = t_transit / t_auto

where t_transit is the travel time by transit and t_auto is the travel time by auto.
Given the R that we have calculated, the graph tells us the percent of users in the market that will choose transit. A variation on the technique is to use costs rather than time in the diversion ratio. The decision to use a time or cost ratio turns on the problem at hand. Transit agencies developed diversion curves for different kinds of situations, so variables like income and population density entered implicitly.
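As a sketch, reading a share off a diversion curve can be implemented as linear interpolation on empirically fitted points. The curve below is a made-up placeholder, not data from any actual diversion study:

```python
# Hypothetical empirical diversion curve, expressed as sample points:
# travel-time ratio R = t_transit / t_auto versus observed percent of
# travelers choosing transit. The numbers are illustrative only.
CURVE = [(0.5, 85.0), (1.0, 50.0), (1.5, 25.0), (2.0, 12.0), (3.0, 4.0)]

def transit_share(t_transit, t_auto):
    """Linearly interpolate the percent choosing transit from the curve."""
    r = t_transit / t_auto
    if r <= CURVE[0][0]:
        return CURVE[0][1]
    if r >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (r0, s0), (r1, s1) in zip(CURVE, CURVE[1:]):
        if r0 <= r <= r1:
            return s0 + (s1 - s0) * (r - r0) / (r1 - r0)

print(transit_share(30, 20))  # R = 1.5 -> 25.0 percent transit
```

A cost-based curve would work the same way, with the ratio formed from costs rather than times.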
Diversion curves are based on empirical observations, and their improvement has resulted from better data. Curves are available for many markets. It is not difficult to obtain data and array results. Expansion of transit has motivated data development by operators and planners. Yacov Zahavi’s UMOT studies, discussed earlier, contain many examples of diversion curves.
In a sense, diversion curve analysis is expert system analysis. Planners could "eyeball" neighborhoods and estimate transit ridership by routes and time of day. Instead, diversion is observed empirically and charts drawn.
Disaggregate travel demand models
Travel demand theory was introduced in the appendix on traffic generation. The core of the field is the set of models developed following work by Stan Warner in 1962. Using data from the CATS, Warner investigated classification techniques using models from biology and psychology. Building from Warner and other early investigators, disaggregate demand models emerged. Analysis is disaggregate in that individuals are the basic units of observation, yet aggregate because models yield a single set of parameters describing the choice behavior of the population. Behavior enters because the theory made use of consumer behavior concepts from economics and parts of choice behavior concepts from psychology. Researchers at the University of California, Berkeley and the Massachusetts Institute of Technology developed what have become known as choice models, direct demand models, random utility models or, in their most used form, the multinomial logit model. Choice models have attracted a lot of attention and work; the Proceedings of the International Association for Travel Behavior Research chronicles the evolution of the models. The models are treated in modern transportation planning and transportation engineering textbooks.
One reason for rapid model development was a felt need: systems were being proposed for which no empirical experience of the type used in diversion curves was available. Choice models permit comparison of more than two alternatives and reveal the importance of the attributes of those alternatives. There was a general desire for an analysis technique that depended less on aggregate analysis and had greater behavioral content. And there was attraction, too, because choice models have logical and behavioral roots extending back to the 1920s, as well as roots in Kelvin Lancaster's consumer behavior theory, in utility theory, and in modern statistical methods.
Psychological roots
Early psychology work involved the typical experiment: here are two objects with weights, w1 and w2; which is heavier? The finding from such an experiment is that the greater the difference in weight, the greater the probability of choosing correctly; plotted, the relationship between weight difference and probability of a correct choice traces an S-shaped curve. Louis Leon Thurstone proposed that perceived weight is

w = v + e

where v is the true weight and e is random with E(e) = 0.
The assumption that e is normally and identically distributed yields the binary probit model.
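A minimal sketch of the resulting binary probit choice probability, assuming the random terms are independent normals with common standard deviation sigma, so that their difference has standard deviation sigma times the square root of 2:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def probit_choice(v1, v2, sigma=1.0):
    """P(object 1 is perceived heavier), with perceived weights
    w_i = v_i + e_i and e_i ~ N(0, sigma^2) independent, so the
    difference e2 - e1 has standard deviation sigma * sqrt(2)."""
    return phi((v1 - v2) / (sigma * sqrt(2.0)))

# Equal true weights: the choice is a coin flip.
print(probit_choice(10.0, 10.0))  # 0.5
# A larger weight difference raises the probability of choosing correctly.
print(probit_choice(12.0, 10.0))
```

The second probability traces out the S-shaped curve described above as the weight difference grows.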
Econometric formulation
Economists deal with utility rather than physical weights, and say that observed utility is

u = v + e

The characteristics of the object, x, must be considered, so we have

u(x) = v(x) + e(x)
If we follow Thurstone's assumption, we again have a probit model.
An alternative is to assume that the error terms are independently and identically distributed with a Weibull, Gumbel Type I, or double exponential distribution. This yields the multinomial logit model. Daniel McFadden argued that this distribution had desirable properties compared to other distributions that might be used. Among other things, the difference of two such error terms follows a logistic distribution, which gives the choice probabilities a convenient closed form. The logit model is simply a log ratio of the probability of choosing a mode to the probability of not choosing a mode:

log(P_i / (1 - P_i)) = v(x_i)
Observe the mathematical similarity between the logit model and the S-curves we estimated earlier, although here share increases with utility rather than time. With a choice model we are explaining the share of travelers using a mode.
The comparison with S-curves suggests that modes are adopted as their utility increases, which happens over time for several reasons. First, utility itself is a function of network effects: the more users, the more valuable the service, and the higher the utility associated with joining the network. Second, utility increases as user costs drop, which happens when fixed costs can be spread over more users. Third, technological advances, which occur over time and as the number of users increases, drive down relative cost.
An illustration of a utility expression is given by:

v(auto) = β0 + β1 x1 + β2 x2

where x1 and x2 are characteristics of the auto trip (for example, its travel time and its cost) and the β values are coefficients to be estimated.
With algebra, the model can be translated to its most widely used form:

P_i = exp(v_i) / Σ_j exp(v_j)
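A short sketch of that form, computing mode shares from hypothetical systematic utilities (the utility values below are made up for illustration):

```python
from math import exp

def mnl_shares(utilities):
    """Multinomial logit: P_i = exp(v_i) / sum_j exp(v_j).
    Subtracting max(v) first improves numerical stability and
    leaves the shares unchanged."""
    m = max(utilities)
    weights = [exp(v - m) for v in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical systematic utilities for auto, bus, and rail.
shares = mnl_shares([-0.5, -1.2, -0.9])
print([round(s, 3) for s in shares])  # shares sum to 1; auto is largest
```

Note that only utility differences matter: adding a constant to every alternative leaves the shares unchanged, which is why one alternative's constant is typically normalized to zero in estimation.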
It is fair to make two conflicting statements about the estimation and use of this model:
- it's a "house of cards", and
- used by a technically competent and thoughtful analyst, it's useful.
Suppose an option has a net utility u_jk (option k for person j). We can imagine it having a systematic part v_jk that is a function of the characteristics of the option and of person j, plus a random part e_jk, which represents tastes, observational errors, and other unmeasured influences:

u_jk = v_jk + e_jk

The introduction of e lets us do some aggregation. As noted above, we think of observable utility as being a function:

v_jk = β0 + β1 x1 + β2 x2 + …

where each variable represents a characteristic of the auto trip. The value β0 is termed an alternative specific constant. Most modelers say it represents characteristics left out of the equation, but it includes whatever is needed to make the error terms NID.
Econometric estimation
Turning now to some technical matters, how do we estimate v? Utility isn't observable. All we can observe are choices, and we want to talk about probabilities of choices that range from 0 to 1. Further, the distribution of the error terms wouldn't have appropriate statistical characteristics. The MNL approach is to make a maximum likelihood estimate of this functional form. The likelihood function is:

L* = ∏_{n=1}^{N} f(y_n | x_n, θ)

We solve for the estimated parameters θ̂ that maximize L*. This happens when:

∂L*/∂θ̂ = 0

The log-likelihood is easier to work with, as the products turn to sums:

ln L* = Σ_{n=1}^{N} ln f(y_n | x_n, θ)
Consider an example adapted from John Bitzan's Transportation Economics Notes. Let X be a binary variable that is equal to 1 with probability γ and equal to 0 with probability 1 - γ. Then f(0) = 1 - γ and f(1) = γ. Suppose that we have 5 observations of X, giving the sample {1, 1, 1, 0, 1}. To find the maximum likelihood estimator of γ, examine various values of γ, and for these values determine the probability of drawing the sample.

If γ takes the value 0, the probability of drawing our sample is 0. If γ is 0.1, then the probability of getting our sample is:

f(1, 1, 1, 0, 1) = f(1) f(1) f(1) f(0) f(1) = 0.1 × 0.1 × 0.1 × 0.9 × 0.1 = 0.00009

We can compute the probability of obtaining our sample over a range of γ; this is our likelihood function. The likelihood function for n independent observations in a logit model is:

L* = ∏_{i=1}^{n} P_i^{Y_i} (1 - P_i)^{1 - Y_i}

where Y_i = 1 or 0 and P_i is the probability of observing Y_i = 1.
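The likelihood scan described above can be carried out directly. This sketch reproduces the 0.00009 value and shows that the likelihood peaks at γ = 0.8, the sample mean:

```python
def sample_likelihood(gamma, sample):
    """Probability of drawing the given Bernoulli sample for a given gamma."""
    p = 1.0
    for x in sample:
        p *= gamma if x == 1 else (1.0 - gamma)
    return p

sample = [1, 1, 1, 0, 1]
print(sample_likelihood(0.1, sample))  # 0.1 * 0.1 * 0.1 * 0.9 * 0.1 = 0.00009

# Scan a grid of gamma values: the likelihood is maximized at
# gamma = 0.8, the proportion of ones in the sample.
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda g: sample_likelihood(g, sample))
print(best)  # 0.8
```

Analytically, the likelihood here is γ⁴(1 - γ), whose derivative vanishes at γ = 4/5, confirming the grid result.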
The log likelihood is thus:

ln L* = Σ_{i=1}^{n} [ Y_i ln P_i + (1 - Y_i) ln(1 - P_i) ]
In the binomial logit model,

P_i = P(Y_i = 1) = exp(βX_i) / (1 + exp(βX_i))
The log-likelihood function is maximized by setting the partial derivatives to zero:

∂ln L*/∂β = Σ_{i=1}^{n} (Y_i - P_i) X_i = 0
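As a sketch of the estimation step, the first-order conditions can be solved numerically by gradient ascent on the log-likelihood. The data below are invented for illustration, with x a single trip characteristic and y the observed choice:

```python
from math import exp

# Hypothetical data, purely illustrative: x_i is a single characteristic
# of observation i's trip and y_i = 1 if the mode was chosen, 0 otherwise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [0, 0, 1, 0, 1, 1, 0, 1]

def prob(b0, b1, x):
    """Binomial logit: P(y = 1) = e^v / (1 + e^v) with v = b0 + b1*x."""
    return 1.0 / (1.0 + exp(-(b0 + b1 * x)))

# Gradient ascent on the log-likelihood. Its partial derivatives are
# sum(y_i - P_i) for b0 and sum((y_i - P_i) * x_i) for b1; both are
# driven to zero at the maximum, since the log-likelihood is concave.
b0 = b1 = 0.0
rate = 0.01
for _ in range(100000):
    g0 = sum(y - prob(b0, b1, x) for x, y in zip(xs, ys))
    g1 = sum((y - prob(b0, b1, x)) * x for x, y in zip(xs, ys))
    b0 += rate * g0
    b1 += rate * g1

print(round(b0, 3), round(b1, 3))  # fitted coefficients
```

Production estimation software uses Newton-type methods rather than plain gradient ascent, but the stationarity condition being solved is the same.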
The above gives the essence of modern MNL choice modeling.