Category Archives: Economics

Forgetting Faster Than We Learn

There are many players in the world of transport policy these days. On net, this influx of new actors into the policy, advocacy and planning realms is likely a benefit, but it does raise some concerns. One thing I see again and again is that new entrants and existing players in urban transport policy too often don't know, or have forgotten, lessons learned in the past. On one level this is just a nuisance, and it is good that old knowledge is rediscovered. On another, more troubling level, it is like health professionals having to rediscover penicillin every other generation.

Here are two recent examples where existing knowledge is ignored or not known or marginalized.

In a recent Vox piece the concept of induced demand was discussed with reference to recent empirical work by the economists Gilles Duranton and Matthew Turner (a few months ago Wired also wrote about this particular work). Induced demand is a well-known concept that goes back at least to Anthony Downs' "Iron Law of Congestion," yet the Vox piece suggests induced demand is new knowledge. Reading beyond recent urban economics research reveals that scholars in transportation economics and urban planning have extensively explored induced demand. Here is a 1995 article by Mark Hansen in ACCESS where he describes the problem. Robert Cervero, Robert Noland, Cervero and Hansen jointly, and the Transportationist himself (note to regular readers: this post is by David King, not David Levinson) are some of the scholars who have published on induced demand in leading journals, presented it at conferences and included it in their teaching. Here is what Robert Cervero wrote in 2000:

“No issue has so paralyzed highway programmes and side-tracked our ability to rationalize new road development as concerns over “induced travel demand”. Time and again, experiences show that building new roads or widening existing ones, especially in fast growing areas, provides only ephemeral relief – in short time, they are once again filled to capacity. A study using 18 years of data from 14 California metropolitan areas found every 10 percent increase in highway lane-miles was associated with a 9 percent increase in vehicle-miles-traveled four years after road expansion, controlling for other factors. Similar findings have been recorded in the United Kingdom. In the United States, regional transportation plans, such as in the San Francisco Bay Area, have been legally contested by environmental interest groups on the very grounds that they failed to account for the induced travel demand effects of road investments and expansions.”

This ACCESS article (2003) by Cervero is worth reading for nuance about what induced demand really means for transport planning and policy. He notes that while induced demand claims have stopped highway expansions in the past, induced demand claims gloss over more important concerns about the use and costs of travel.

It is also worth noting that even though induced demand is usually discussed in the context of expanded road capacity, it actually applies to any transportation technology. Transit expansion along a corridor has the same induced demand effect as road widening. On his blog Kevin Krizek explained how congestion relief is a poor argument for expanded cycling facilities, too, partly because of induced demand. We actually know a lot about how transport capacity affects the price of travel, which affects demand for travel across time and space.
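The 10-percent-lane-miles, 9-percent-VMT finding in the Cervero quote above amounts to a long-run elasticity of about 0.9. A minimal sketch of that arithmetic (the corridor volume is invented, and a constant elasticity is a simplification):

```python
# Illustrative only: applies the ~0.9 elasticity of VMT with respect to
# lane-miles from the California study quoted above. A constant elasticity
# is an assumption, not a law.
def induced_vmt(base_vmt, capacity_increase_pct, elasticity=0.9):
    """Approximate VMT after a capacity expansion, four years out."""
    return base_vmt * (1 + elasticity * capacity_increase_pct / 100)

# A 10% lane-mile expansion on a corridor carrying 1,000,000 VMT/day:
print(round(induced_vmt(1_000_000, 10)))  # 1090000
```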

A second example of forgotten knowledge has to do with taxi policy. In a recent opinion piece about how wonderful Uber is, Mohamed El-Erian describes how Uber will disrupt the inefficient taxi stands near Penn Station in Midtown Manhattan:

“Arriving earlier this week in New York at Penn Station, I joined many others in a rather slow-moving line for taxis. I did so out of habit. But a few minutes into my wait, I realized that the smart thing to do was to pull up the Uber app on my phone. In a few seconds, Uber linked me up with a car, which picked me up four minutes later. The driver was courteous, and the vehicle was clean. And all this for a fare that was similar to what I would have paid for a traditional cab — after a much longer wait, that is.”

This is a terrible argument for Uber-type services and reflects little understanding of how taxi networks actually work on existing streets. The whole reason we have taxi stands is that it is really inefficient for hundreds of people to emerge from Penn Station (or any event, station, airport, etc.) and hail hundreds of cabs. We do not have the street capacity or curb capacity to accommodate this, and some type of queuing is necessary. The Uber model, as described, only works when few people are using the service; it is simply not scalable the way a taxi queue is.

These are just two examples, but lots of people are wading into transport policy based on limited reading and personal anecdotes, and if we follow their lead we will have to relearn all the things that we already know. Forgetting knowledge is not a new phenomenon and not limited to any particular set of experts, but it is problematic and deserves more discussion about how to fix it. In a recent lead editorial in the May 2014 Planning Magazine (gated link) the American Planning Association’s CEO, Paul Farmer, begins as follows:

“During a chat about planning in the U.S. and Canada, several planning colleagues addressed the topic of value capture. “We’ve coined the phrase ‘windfalls,’” one Canadian colleague proudly remarked in describing the unearned benefits that a property owner might realize from investment made by others. The late Don Hagman might have been pleased, amused, or irritated by this appropriation of the concept he popularized, if not invented, in his extensive writings half a century ago.”

(The book referenced is Windfalls for Wipeouts: Land Value Capture and Compensation.)

Concern about keeping knowledge alive isn’t just sour grapes about all the stuff I learned in grad school that people ignore. It’s not clear how we can steadily move policy forward (in a better way, however “better” is defined) if we can’t keep the lessons of the past in mind. This is not a question only for transport policy, either. In a recent book Jo Guldi and David Armitage argue that historical study should play a larger role in economic and policy debates.

As transport policy attracts more specialists from fields outside of transport (economics, computer science, software engineering, data mining, etc.), the challenge of sharing existing knowledge rather than rediscovering it becomes really important. We don't need to enact lots of policies that won't work just to relearn that such policies don't work.

Park and ride lot geometry

I don’t find park and ride lots attractive. I don’t want them in my neighborhood. I wish the land around park and ride lots were valuable. But let’s do some math.

In one acre, there are 43,560 square feet. It takes about 300 square feet to store a parked car (including lanes, etc.). This suggests you can store 145 parked cars per acre. That is similar to this result.

If every one of those parked cars carried 1 person, that is 145 transit boardings from that station in the morning (and 145 boardings elsewhere in the evening, assuming symmetry). That generates 290 daily transit trips.

In contrast, let’s say we had zero park and ride spaces. Let’s further assume that adjacent land uses have a 50% transit mode share for work trips and 0% for non-work trips. We would need 300 resident workers on that acre to have a similar number of transit trips generated. Since only half the population works, we are looking at 600 total persons on that acre of land. That is the equivalent of 384,000 persons per square mile. That is a lot of people.

Even if only workers lived there, and they had 100% transit mode share for work trips and made another 2 non-work trips per day by transit, that is still 145 people per acre, the equivalent of 92,800 people per square mile. That is Manhattan-like density (actually higher than Manhattan's average). Of course not all of Manhattan is high-rise apartments; this is not necessarily as high as Manhattan's highest densities, but it is higher than its lowest.
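The arithmetic above can be sketched in a few lines (assumptions as stated in the post: 300 square feet per parked car, one occupant per car, symmetric morning/evening travel; note the post rounds 290 workers up to 300, which is where its 384,000 figure comes from):

```python
# Back-of-envelope park-and-ride geometry, following the post's assumptions.
SQFT_PER_ACRE = 43_560
SQFT_PER_CAR = 300            # stall plus circulation lanes
ACRES_PER_SQMILE = 640

cars_per_acre = SQFT_PER_ACRE // SQFT_PER_CAR      # 145 parked cars
daily_transit_trips = 2 * cars_per_acre            # 290 boardings per day

# Scenario 1: residents with a 50% transit share for work trips,
# half the population working. (The post rounds 290 workers up to 300,
# giving 600 residents and 384,000 persons per square mile.)
transit_commuters = cars_per_acre                  # need 145 transit commuters
workers_needed = transit_commuters / 0.5           # 290 resident workers
residents_needed = workers_needed / 0.5            # 580 residents on one acre
print(residents_needed * ACRES_PER_SQMILE)         # 371200.0 unrounded

# Scenario 2: every resident is a worker with a 100% transit share.
print(cars_per_acre * ACRES_PER_SQMILE)            # 92800 persons per sq mi
```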

Low, or even medium, density land use around the station will not generate as many transit users as the park and ride lot.

I still don’t like park and ride lots.

Jarrett Walker has a different take

A One-Track Mind

While at this point we are almost assuredly beating a dead horse, until the Green Line Extension (Southwest LRT) is actually under construction there remains the possibility it can be improved. While the best improvement (given the existence of an LRT to fourth-ring southwest suburbs) would be to route it along a path where people actually live, if we cannot maximize benefits, surely we should minimize costs.
I speak of course of the tunnel under the park.

Kenilworth Sections

The stated reason is that the right-of-way is insufficiently wide to accommodate two tracks of LRT, one track of freight rail serving about three trains a day, a bike path, and the buildings that were built where it would have been convenient to run some more track.
There are two obvious solutions to this problem which have not been given serious consideration as far as I can tell.

First, the freight and LRT can share the track at different times. The experience with Northstar certainly demonstrates why having a few passenger trains on a freight railroad can create lots of passenger delay, but this is different: it would be a freight train on a passenger track owned by the public.

Everyone says “But, FRA”. I realize there are institutional barriers which need to be overcome. Perhaps those are more expensive to overcome than $130 million, or whatever the difference between the surface solution and the tunnel turns out to cost.

Second, if one track is good enough for freight, why is it not good enough for LRT for a short section? (This is an idea previously considered by Matt Steele at streets.mn.) This of course is a tight fit, and may require waivers from the appropriate regulatory authorities, but it is physically possible from the drawings I have seen.

Section B-B

For the sake of argument, let's assume we want to single-track 1.5 miles, with trains going up to 45 miles per hour (say an average speed of 30 mph to make the math easy). Traversing the section would take 3 minutes. The trains are on 10-minute headways in each direction, or one train every 5 minutes through the bottleneck. (Note: Matt assumed 2 minutes and higher speeds; I am using conservative assumptions.)

If timing were perfect, there could be zero delay from this scenario. This is a deterministic case. That is the assumption underlying Matt’s post.

However, as we know, timing is rarely perfect, so we need to look at stochastic delay. Stochastic is engineering jargon for random: the outcome is not known in advance, only the probabilities of the possible outcomes (which need not be equal).

Even when things are random, that doesn’t mean we cannot ascertain the average of the distribution.

Let's suppose we have an arrival rate of 1 train every 5 minutes (our arrival rate lambda = 0.2 trains per minute), and a service rate of 1 train every 3 minutes (mu = 0.33 trains per minute). If the system is completely random (and we certainly hope it is better than that), we can use stochastic queueing theory to estimate the delay.

Worst case (aside from someone actively and maliciously controlling the trains so they arrive at the same time, which implies that deterministic solutions with zero delay are possible), we can model this as an M/M/1 queue (meaning, as Wikipedia says, arrivals follow a Poisson process and job service times have an exponential distribution). This assumes Markovian (random) arrival and departure processes and a single channel.

The utilization rate (rho = lambda/mu) is 0.6, meaning the server is busy 60% of the time.

Math gives us a formula for the average queue size:

Average queue size = rho/(1 − rho) = 0.6/0.4 = 1.5 trains

Math gives us a formula for the average wait time:

E(w) = lambda/(mu × (mu − lambda)) = 0.2/(0.333 × (0.333 − 0.2)) = 4.5 minutes

At 1 million passengers per month (12 million per year) for 30 years, that is 360 million people delayed 4.5 minutes each, or about 1.6 billion minutes of delay. At $20/hour, this is $533 million.

Clearly this value is larger than the cost of the tunnel.

On the other hand, perhaps we only need to single track for 0.5 miles.

In that case, the service time is 1 minute, so mu = 1. Capacity utilization is 20% (i.e. rho is 0.2). Average queue size is 0.25 trains. The average wait time is 0.25 minutes.

Our 360 million people are delayed 0.25 minutes at $20/hour is $30 million. This is considerably less than the cost of the tunnel.
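The two scenarios above follow from the same M/M/1 formulas, so here is a short sketch reproducing both (assumptions as in the text; note that computing with the unrounded 1.62 billion minutes gives about $540 million rather than the $533 million that comes from rounding the delay to 1.6 billion minutes):

```python
# Worst-case M/M/1 arithmetic for both single-track lengths, as in the text:
# one train every 5 minutes through the bottleneck (lambda = 0.2/min),
# $20/hour value of time, 12 million passengers a year for 30 years.
def mm1_delay_cost(lam, mu, passengers, value_per_hour=20.0):
    """Return (average wait in minutes, total delay cost in dollars)."""
    wait = lam / (mu * (mu - lam))   # E(w) = lambda / (mu * (mu - lambda))
    return wait, passengers * wait / 60 * value_per_hour

passengers = 12_000_000 * 30         # 360 million riders over 30 years

# 1.5-mile single track at 30 mph average: 3-minute traverse, mu = 1/3.
wait, cost = mm1_delay_cost(0.2, 1 / 3, passengers)
print(round(wait, 2), round(cost / 1e6))   # 4.5 minutes, ~$540 million

# 0.5-mile single track: 1-minute traverse, mu = 1.
wait, cost = mm1_delay_cost(0.2, 1.0, passengers)
print(round(wait, 2), round(cost / 1e6))   # 0.25 minutes, ~$30 million
```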

The train speeds could be adjusted so no one would know they were delayed (i.e. trains would slow down approaching the switch, or be held at the previous station, as needed). And remember this is the worst case: delay should be less than this with any competent schedule adherence. With perfect schedule adherence, delay is indeed zero (our deterministic solution).

Single-tracking is a solution to high capital costs. It is not optimal: it has delay costs that depend on the length of the single-tracked stretch, the headways, how much control Metro Transit has over running times, and so on.

Everything involves trade-offs.

There is of course a concern about running LRT next to (near) freight trains, carrying lots of explosive ethanol. I say, don’t do it. Run them at different times, even if on different tracks. If freight trains are only permitted at night, or in a mid-day window when an LRT is held upstream of the pinch-point for a few minutes, or ideally in a scheduled break, there should be zero chance of collision. There is always a chance of derailment – that doesn’t change, but derailment is less hazardous than collision for what I hope are obvious reasons.

In the long run, maybe freight will go away (e.g. once people stop using ethanol), go somewhere else, or another solution will be found. At that time, the line can be double-tracked if needed.

In the short term, the money saved could be used to temporarily relocate the trail to quiet residential streets nearby, compensate the neighborhood, give money to the Park Board, or pursue any number of other socially worthwhile goals.

Crossposted at streets.mn.

The 60-Year Line

Whenever we build a piece of large-scale infrastructure, we should be thinking about the markets it serves today and the markets it will serve over its lifetime. We are often building lines that aim to promote development. That is, they are serving non-places in the hope they become places. The evidence on this is mixed: sometimes lines successfully promote development, sometimes they don't. If the lines were privately built (as in times of yore), this would be much less of a public policy question, as the public would not be bearing the monetary risk. That is not to say there would be no policy questions: the line-builder wants right-of-way, which often requires eminent domain powers.

However, the lines are now publicly built, so the public bears the risk so that privately owned lands might appreciate in value, with the public getting only a small share of that increment. Usually we don't employ value capture. General tax revenues are not nearly enough to justify the line, since lines are expensive now: all the good lines, the low-hanging fruit, have been built, and most development is a transfer from one place to another.

The risk is the capital outlay will not be recovered from future revenue (from users, or non-users).

In contrast, building lines where people actually are, where demand currently exists, presents much lower risk in revenue projections.

Lines typically last upwards of 60 years with a given technology. We certainly cannot predict 60 years into the future. Sixty years ago was before both the Shinkansen and the Interstate Highway System, and predictions from 60 years ago about today were not terribly accurate. Sixty years is longer than a Kondratieff Cycle.

Will today's places have any activity in 60 years? A good test of that is whether the place had activity 60 years ago. Look at the map of 60 years ago. Where was the activity? Where is it today? The intersection of those two maps shows places with proven longevity. There are no guarantees those places will have activity in 60 years, of course ("past performance is no guarantee of future results"), but they are more likely to because there is an underlying cause for the stability of the place. That is, there was a cause for that place to develop in the first place (e.g. a useful waterfall, a port, or a junction between intercity rail lines), and the positive feedback structure between transportation, accessibility, and land use actively worked to reinforce the strength of that place.

Value Capture Flowchart

Applying that to the Twin Cities, the best prediction you can make is that there will be strong demand between Downtown Minneapolis and Downtown St. Paul. We currently serve that corridor with interstate highway and transit.

Applying that again to the Twin Cities, the newest places (if we can call them that) outside the beltway are making claims for long-term investments of resources fixing them into the urban system without the evidence of long-term stability (See e.g. the SW LRT to a park and ride lot on Mitchell Road, or Highway 212, or the Bottineau Line to a cornfield, or Highway 610). It is certainly possible those destinations will become significant demand generators, but it is far from certain. If a private firm wanted to bear the risk of those prospective developments not working out, more power to them. But the public is asked to do this, while perfectly good markets go unserved or underserved for lack of capital.

Met Council board not big transit riders, survey finds | startribune

Eric Roper at the Strib riffs on my dogfooding article and does a local study: Met Council board not big transit riders, survey finds

“We should ask whether members of the council have sufficient expertise about transit … to be managing a transit system. Do they understand the problems at a deep level?” said University of Minnesota professor David Levinson, who researches transportation systems and has written about the need for transit decisionmakers to commute on their own product.

Levinson, the professor, compared the low transit usage by the Met Council to the board of Apple not using computers. He has frequently criticized the lack of information at most Twin Cities bus stops when compared to other cities, including route numbers, destinations, frequency and maps.

“Having that experience of being lost on the transit system is probably a useful experience for [council members] to have to understand why their system isn’t as attractive as it should be, why it’s not as popular as they hope it would be,” Levinson said.

“It’s a success”

There are no words more commonly heard shortly after the opening of a new rail project in the United States than "It's a success". The forecast of the declaration of success is far more accurate than the forecast of ridership or costs.

For instance, Metrorail (WMATA) claims:

Metro: Silver Line ridership remains strong

Metro today provided updated Silver Line ridership information showing that, less than two months after opening, the new line is already performing at 60 percent of its projected ridership for the end of the first full year of service. As of last week, an average of 15,000 riders are entering the system at the five new Silver Line stations on weekdays for a combined 30,000 trips to or from the new stations.

In the planning process, Silver Line ridership was projected to reach 25,000 boardings at the five new stations after one full year of service.

Metro estimates that the Silver Line is currently adding approximately 6,000 new riders — making roughly 12,000 trips — to the Metrorail system each weekday. The balance, approximately 9,000 riders, are primarily former Orange Line riders who have switched to the Silver Line.

Some outlets have used the word "success" to describe the line, as has Secretary Foxx. Certainly it is still early, and maybe the Silver Line will exceed first-year forecasts, or final-year forecasts, or even have benefits in excess of costs, or somehow reduce inequity in the Washington region, or lead to economic development, or meet any number of other objectives foisted on transit lines. It is arguably successful from a project delivery perspective, in that it was delivered and opened for service, but that seems a narrow way to think about success.

In contrast, another new start, Metro Transit's Green Line, has done a bit better, even with all sorts of traffic signal timing issues. It too is heralded as a success, with ridership exceeding its forecast-year ridership about three months in. While many of its riders transferred from existing bus services, it is clearly serving more new people for less money than the Silver Line.

Which is more successful? Which is a better investment? Time will tell, and I will leave that to the reader’s judgment.

I have two hypotheses as to why these words are so common.

First, it may be that all projects are successful. For this hypothesis to hold, we would need to see enormous transit market share across the country after several decades of more than 20% of all transportation funding going to transit (figure 2, but also this). Sadly the evidence suggests otherwise.

Alternatively, it may be that the appearance of success is important, independent of the actual facts on the ground. Calling “success” aligns you with “Team Rail” and rewards your supporters. The illusion of success is critical to obtain future funds. No one wants to give money to an agency that actively (if honestly) claims “It’s a failure” or “It’s a disappointment”, or “We’re still perfecting it,” or even “It’s a hobby“.

I hold this latter explanation as more likely. This is not to say there are no successes in urban rail transit. There are many. Starting in 1863 with the London Underground, rail transit globally had an extraordinarily good run for 60 years. In the US it sort of petered out for the next 50 years or so after that, though in other countries rail transit has continued at various levels of strength.

Some of the lines in the past 40 years have been more successful than others, all depending on your definition of success. (For instance, a list of LRT systems by ridership per mile is here.) The best systems remain the ones built in the early 20th century, with only LA's Metro Rail breaking the top 5 in riders per mile (and DC's Metrorail coming 6th). Yet as far as I can tell, all new systems have been declared successful by somebody (even relatively low ridership-per-mile lines like Tampa's TECO line or Charlotte's Lynx). Some are even pre-declared, like The Tide in Hampton Roads.

I find it hard to see the billions spent on the Silver Line so far to add 6,000 riders (12,000 trips) as an unqualified success (I would find it hard to see merely meeting these low forecasts as a success either). This is more money per passenger than many commuter rail lines spend, and few outside the agencies themselves call those successes (the advocates, of course, use that exact word).

If spending $2B added zero or negative riders, that would be truly surprising, indicative of active destruction of money. I will just state there were plausible alternative uses of the funds that would have improved society in other ways. Every expenditure has an opportunity cost.

Do not believe or repeat the press releases of agencies and advocates uncritically.

How to account for higher quality of service in Benefit/Cost Analysis

I recently had a Twitter and email conversation with Benjamin Ross about rail vs. bus benefit/cost analysis (BCA).

The problem is that conventional BCA in practice does not consider the quality differences of different modes, focusing primarily on travel time, monetary costs, and monetized externalities. Assuming everything else were analyzed correctly, this leads us to over-invest in low quality modes and under-invest in high quality modes, from a welfare-maximizing perspective.

Let’s start with a few premises

1. The value of time (value of travel time savings) of each user differs because of a variety of factors. Everyone is in a hurry sometimes, and so has a higher value of time (willingness to pay for saving time) when time-strapped than at other times. Some people have more money than others, and so find it easier to pay to save time. The related notion of value of travel time reliability (VTTR) is reviewed here.

2. We don't actually know users' value of time. (An alternative approach evaluates based on travel time alone and assumes everyone is equal, since time passes just as fast for rich and poor people. For instance, Carlos Daganzo and his students (e.g. Gonzales) optimize in terms of time and convert monetary and other costs into time (e.g. section 2.3.2 here), developing a temporal value of money rather than a monetary value of time and treating the value of time as a politically determined variable. This is not standard in transportation economics.)

3. We assume the value of time of all users is the same in a Benefit/Cost Analysis, because the alternative would bias investment toward users with a high value of time. E.g. wealthy people in the western suburbs would get more investment than poor people in the city because they have a higher value of time. That is politically unacceptable to admit, since they did not pay in proportion to their value of time (funding for major roads comes predominantly from gas taxes, while funding for local roads comes predominantly from property taxes, which are of course paid more by the wealthy). For a market good this is not a problem (rich people pay for and get better goods and services all the time; otherwise why be rich?). We do BCA because transportation is a publicly provided good.

4. We have models which purport to know people's value of time, and we do use it in forecasting travel demand. The value of time is implicit in the ratio of the time and cost coefficients in the mode choice model. In practice it is usually estimated from revealed preference data, and estimates have a wide range depending on location and methodology.

5. Travel demand models are highly inaccurate, etc., for a variety of reasons.

6. If these models were correct, the log-sum of the denominator of the mode choice model, multiplied by the value of time (determined by the coefficients on time and cost in the model), gives you, with a little math, an estimate of Consumers Surplus. This estimate is not usually used in practice, as no one outside of economics and travel demand modeling believes in utility theory.
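Point 6 can be made concrete with a small sketch. The logit coefficients, times and fares below are entirely invented for illustration; the point is only the logsum mechanics (here the change in surplus is converted to dollars by dividing by the magnitude of the cost coefficient, one common convention):

```python
import math

# Hypothetical multinomial logit coefficients (not from any real model).
BETA_TIME = -0.05    # utility per minute
BETA_COST = -0.4     # utility per dollar

def logsum(modes):
    """Log of the denominator of the logit model.
    modes: list of (time_minutes, cost_dollars) tuples, one per mode."""
    return math.log(sum(math.exp(BETA_TIME * t + BETA_COST * c)
                        for t, c in modes))

def delta_cs(before, after):
    """Change in consumer surplus per traveler, in dollars:
    (logsum_after - logsum_before) / marginal utility of income."""
    return (logsum(after) - logsum(before)) / (-BETA_COST)

# Adding a rail option to a car/bus market (invented times and fares):
before = [(30, 4.0), (45, 2.0)]           # car, bus
after = before + [(35, 2.5)]              # plus rail
print(round(delta_cs(before, after), 2))  # dollars of CS gained per trip
```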

7. Benefit/Cost Analysis is much simpler (and more simplistic) than travel demand modeling, and uses travel time savings and monetary cost in estimating Consumers Surplus.

8. BCA doesn't actually estimate CS, just the change in CS, since we don't know the shape of the demand curve; we can, however, estimate small changes to the demand curve and assume the curve is linear over that range. Those doing BCA often use the rule of 1/2 to find the area of the benefit trapezoid:

Area = benefit = (Tb − Ta) × (1/2) × (Qb + Qa), where b denotes "before" and a "after".

Multiply the area by the Value of Time to monetize. This is shown in Figure 1.

Figure 1 (BenRoss.001)
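The rule-of-half arithmetic from point 8 is simple enough to sketch (the travel times, demand volumes, and value of time below are invented for illustration):

```python
# Rule-of-half benefit, monetized at an assumed value of time.
def rule_of_half(t_before, t_after, q_before, q_after, vot_per_min):
    """Benefit = (T_before - T_after) * 1/2 * (Q_before + Q_after),
    converted to dollars with a value of time in dollars per minute."""
    return (t_before - t_after) * 0.5 * (q_before + q_after) * vot_per_min

# A project cutting travel time from 40 to 35 minutes, with demand rising
# from 10,000 to 12,000 daily trips, valued at $20/hour:
print(rule_of_half(40, 35, 10_000, 12_000, 20 / 60))  # dollars per day
```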

9. This assumes a minute of travel time is valued the same regardless of how it is experienced. Yet people clearly would pay more for a better experience. That doesn't show up unless you have multiple demand curves (see below), and that is never done except by academics.

10. The travel demand model gives you an alternative specific constant (ASC), which says all else equal, mode X is preferred to mode Y, and will tell you how much additional demand there will be for X than Y under otherwise identical circumstances (namely price and time).

11. Empirical evidence suggests the ASC is positive for transit compared to car (all else equal, people like transit over car. Car mode shares are higher in most US markets because all else is not equal).

Usually the ASC is higher for new rail than new bus, since trains are a nicer experience. This is sometimes called the rail bias factor.

For instance, the table below reproduces values the FTA accepts for rail bias factors, from Table 3 of the linked report. The implication is that people would be willing to spend 15-20 minutes longer on commuter rail than on a local bus serving the same OD pair and otherwise with the same characteristics (except for the quality of the mode).

Much of this is just a question of modeling specification though, so e.g. the rationale includes things that (a) can be modeled and specified (but aren’t typically), and (b) may be improved for bus routes. Recent research says this number can be brought down a lot by better specification.

| Mode | Constant Range (relative to Local Bus) | Rationale |
|---|---|---|
| Commuter Rail | 15-20 minutes | Reliable (fixed guideway), vehicle and passenger amenities, visibility, station amenities, etc. |
| Urban Rail | 10-15 minutes | Reliable due to dedicated fixed guideway, well-identified stations and routes, etc. |
| BRT | 5-10 minutes | Reliable when running on semi-dedicated lanes, often uses low access and especially branded vehicles |
| Express Bus | -10 to 10 minutes | Non-stop, single-seat ride, comfort, reliable when running on semi-dedicated lanes; infrequent off-peak service, unreliable when subject to road congestion |

12. The Consumers Surplus from a mode choice model would reflect this, with higher utility when rail is available than when only bus is available.

13. The Consumers Surplus from BCA, using the rule of 1/2, would be higher for a rail line (Figure 2) than for a bus line (Figure 1) because the demand is higher.

Figure 2 (BenRoss.002)

14. The CS from BCA would not fully reflect the quality difference, which should be shown as moving the demand curve outward. The benefit from the red area (Figure 3) is missing.

Figure 3 (BenRoss.003)

15. The red area is impossible to estimate with any confidence, since the shape of the curves outside the known area (before and after) is unknown. I drew the total consumers surplus as a triangle (and the change in CS as a trapezoid) (Figure 3), but this is misleading. Certainly it is positive.

16. If it were a triangle, and the Demand curves were parallel, some geometry might reveal the shape, but we also don’t know the lines are parallel. In reality they surely aren’t. The high value of time folks (on the left) might be willing to pay a lot more for the improved quality than the low value of time folks on the right.

Ben Ross proposes to improve BCA by developing an adjustment factor to account for the differences in quality between modes. He suggests we look at the number of minutes of travel time it takes to get a given number of riders on each mode.

I have mathematized this: R_q = C_rail,q − C_bus,q, where R_q is the travel time difference at some number of riders q, and C_m,q is the travel time (cost) at which you would get q riders on mode m.

To illustrate:

If 1,000 people ride the bus at 10 minutes and 1,000 people ride the train at 12 minutes, Ben proposes the extra pleasure (or lessened pain) of taking rail is equal in value to a time savings of two minutes.

At a given margin, this is probably approximately correct. That is, the marginal (the 1,000th) train rider is willing to take (pay) 12 minutes, while the 1,000th bus rider insists on 10 minutes.

The problem is that we are trying to construct an area (the benefit), and there is no guarantee that R is constant.

  • The 2,000th rail rider might insist on 11 minutes, while the 2,000th bus rider requires 8 minutes. R_2000 = 11 − 8 = 3 ≠ 12 − 10.
  • The 10,000th rail rider might be willing to pay 3 minutes, while the 10,000th bus rider requires −3 minutes (you have to pay them 3 minutes to ride the bus). R_10000 = 3 − (−3) = 6.

Now we could try to find the "average" value of R, or the value of R for the average rider. So let's say you have forecast 30,000 riders for a line; then you find R for the 15,000th rider and apply it over the whole range.

(What travel time would you need to get only 15,000 bus riders, and only 15,000 rail riders? This will be much different from the actual travel time you are modeling, and it will be a higher travel time, so the model will require some adjustment to obtain this number.)

This again assumes the distance between the curves is fixed. Unlike the rule of 1/2, which is meant to be applied over a small area so the curvature doesn't really matter, the assumption here is applied over the whole demand curve, where differences in curvature might be quite significant.
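To see how a constant-R shortcut can diverge from the true area between the curves, here is a sketch with two invented demand curves (nothing here comes from a real model; the rail curve is deliberately nonlinear, since linear parallel curves would make the shortcut exact):

```python
import math

# Invented demand curves: travel time at which q riders would use each mode.
def bus_time(q):
    return 12 - q / 1000            # linear bus demand curve

def rail_time(q):
    return 16 * math.exp(-q / 20000)  # nonlinear rail demand curve

def integrated_benefit(q_max, steps=100_000):
    """Midpoint-rule integral of R(q) = rail_time(q) - bus_time(q),
    i.e. the area between the curves, in rider-minutes."""
    dq = q_max / steps
    return sum((rail_time((i + 0.5) * dq) - bus_time((i + 0.5) * dq)) * dq
               for i in range(steps))

q_max = 15_000
r_avg = rail_time(q_max / 2) - bus_time(q_max / 2)  # R at the 7,500th rider
approx = r_avg * q_max              # constant-R shortcut, rider-minutes
exact = integrated_benefit(q_max)   # integrated area, rider-minutes
print(approx, exact)  # the two differ once the curves are not parallel
```

With these particular curves the shortcut undercounts the area by a few percent; with other shapes it could overcount.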

If we used the model to trace out the demand curves, we could then integrate (find the red area), but this is data that is not generally obtained or reported to the economist doing the BCA. The modeler could compute it, with a bunch of model runs, but the modeler could just use the logsum, and no one believes the model, or in utility, or understands logsums. So the economist takes the forecast in its reduced form, and treats the method for getting it as a black box (or magic).

So is the approximation R reasonable? Is using this value better than using the implied R of 0 which is currently done?

As Ben notes,

All we really have is our one Alternative Specific Constant. It’s tough enough to draw a single value of that constant out of the available data, we surely can’t measure its dependence on income, walkability, etc.  What we actually know is the size of the rail preference under the conditions where the data was collected that the constant was calibrated against, not under the conditions that the model is simulating.
The hard part is scaling from measurement conditions to project conditions, but there are only a few simple alternatives (per trip, per mile, per minute) so if you don’t know which is right you could show results for all of them (and accept that reality may be in between).

I don’t see how this is different from the money value of time.  Doesn’t it involve the same kind of approximation?  And an assumed method of scaling?  Measured under one set of conditions, used under different conditions.

I don’t think I would trust using the model to trace out the demand curves.  The delta we’re looking at is ultimately derived from that Alternative Specific Constant.
When you only have one measured data point, drawing curves inevitably pulls in assumptions that tend to get insufficient examination and can easily introduce subtle (or not-so-subtle) errors.  The only robust conclusions are the ones that you can connect directly to your measured data point.  In my opinion (derived mostly from other kinds of models, but very strongly held) the best way to proceed is to treat your measured data point as a constant, multiply it by the relevant parameters, and go straight to an answer.  Then adjust it for whatever important factors that you can point to and explain in words why your measurement didn’t account for them and why your correction is appropriate.
You can certainly compare the calculation to a black-box model that solves partial differential equations (or in the transportation case a giant matrix), but you shouldn’t believe any model results whose cause you can’t explain convincingly after you get it.  (yes, the model sometimes detects your erroneous intuition, but most of the time it’s the model that is wrong).