Annals of excess cost: Volume 2

From an anonymous source:

The Ramsey Star bus service had 115 park-and-ride
users in fall 2012, two months before the Ramsey Station opened.
2012 ANNUAL REGIONAL PARK-AND-RIDE SYSTEM REPORT
Meaning, of course, that most commuter rail riders are not new, so the denominator should perhaps be only the 15 new riders, or $880,000 per rider!

Black Holes: A brief treatise on the nature of journal publications and the conditions necessary for formation of black holes in networks of scholarly communication.

Academia should be faster than it is. This especially applies to the transportation and planning journals with which I am familiar. It often takes more than a year to do less than 8 hours of work (reviews and editorial decision-making).
Some peer reviewed journals (which will remain nameless in this post) are best described as Black Holes. Articles are submitted to be reviewed and never escape. There may or may not be an acknowledgment of receipt. The paper may or may not have been sent to reviewers. The reviewers may or may not have acknowledged receiving the paper in a timely fashion. They may or may not conduct the review in a timely fashion. Some reviewers might do their job, but the editor may be waiting on the slow reviewer before making a decision.
There are several causes for this black hole:

  1. Authors – Why would you be foolish enough to place your trust in an editor you don’t know? But of course, for graduate students and tenure track faculty, what choice do you have when your career is determined by success in the publication game? In this game, the author is in general the supplicant. If the author is famous, the situation might be reversed, and the editor should be seeking your paper to make their journal stronger, but given there are well over 1 million peer-reviewed journal articles published each year (and I guess 2x that submitted), and only 20,000 Scopus indexed journals, the average journal has leverage over the average author.
  2. Reviewers – Why would I do free labor for a stranger (the Editor) for a community I don’t know (prospective readers) to help an anonymous person (the Author)? Why would I do it quickly?
    • The noble answer is to stay on top of cutting edge research.
    • A plausible answer is the opportunity to ensure your own work is properly referenced. This might appear sketchy to you as an author when reviewers say cite X, Y, and Z (and thereby reveal themselves), yet as an author you will still cite these works in the revised manuscript, and it looks perfectly natural to the reader. This motivates the reviewer and raises the citation rate of the reviewer’s own works.
    • Another answer is to accumulate social capital.
      Where exactly do I redeem the social capital I am accumulating? Where is the social capital bank?
      Editors write promotion (or immigration!) letters in support of good, quick, helpful reviewers. Editors might more favorably view the papers of helpful reviewers. Editors might more favorably review proposals of helpful reviewers. Editors might be more likely to nominate good reviewers for awards. Editors might nominate good reviewers to an Editorial Advisory Board and bestow upon them some prestige. The reviewer might be an editor elsewhere and be able to “return the favor”. But all of this is probabilistic and a bit vaporous. Journals sometimes publish lists of reviewers. In any case, a list of (self-reported) numbers of reviews by journal is one of the beans that is counted in the promotion and tenure process.
  3. Editors – What leverage do I have over unpaid labor (reviewers), and why should I care personally about ungrateful authors who have submitted an unready paper to my journal, which will almost inevitably not be accepted in the first round? The leverage is future favors I might bestow in advancement of potential reviewers (see above). This suggests Editors should favor graduate students and assistant professors as reviewers over full professors. Unfortunately full professors are more famous and more likely to be selected to do reviews. I am personally running at a rate of about 100 review requests per year now. If I were really famous, I would need to decline far more than I do now. If I were really, really famous, I would not have time to decline requests (or perhaps I would have staff decline requests for me).

So there is a social network at play in this process, and if any link breaks between author and editor, between editor and reviewer, or back from reviewer to editor or editor to author, the circuit is not complete; the paper entered the system and, like light from a black hole, cannot escape.
This is one reason I like journals that have systems to automatically track publication status, nag reviewers, and maintain quick turnaround times. This is one reason I like the idea of “desk reject”. It is much better to be rejected immediately than after 6, 9, 18, or 24 months of review. Fast has value.
There is a second black hole, not quite as large, dealing with accepted papers that have yet to be formatted for publication. This is usually solved by an online “articles in press” or “online first” section of the journal website. The advantage to the journal is that papers can accumulate citations before they are actually “published”, thereby gaming the ISI impact factors, which count citations in the first 2 years from publication.
A major problem with looking at 2 years when journals are slow is apparent. I cite only papers published before I submit my paper. If it takes 2 years to accept and publish, I will not have included any papers from the past two years. Therefore slow fields have lower impact factors than fast fields. This feeds the notion (in a positive feedback way) that these fields are sleepy backwaters of scientific research rather than cutting edge fields where people care about progress.
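The mechanism can be sketched with a toy calculation. The lag profiles below are made up for illustration, not real bibliometric data; the point is only that when a field's citing papers themselves take years to appear in print, few of a paper's eventual citations land inside the 2-year counting window.

```python
# Illustrative only: how a field's publication lag shrinks the share of
# citations counted by a 2-year impact-factor-style window.

def counted_share(citation_lags_years, window=2.0):
    """Fraction of eventual citations arriving within `window` years."""
    within = sum(1 for lag in citation_lags_years if lag <= window)
    return within / len(citation_lags_years)

# Hypothetical lag profiles (years from publication to citing paper):
# a "fast" field cites within months; a "slow" field takes 2-4+ years.
fast_field = [0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.5, 3.0]
slow_field = [2.0, 2.5, 2.5, 3.0, 3.0, 3.5, 4.0, 4.5]

print(counted_share(fast_field))  # 0.75  -> most citations are counted
print(counted_share(slow_field))  # 0.125 -> few are, so the impact factor is lower
```

Same underlying quality of work, very different measured "impact" — purely an artifact of field speed.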
To break the black holes I have a couple of ideas:

  1. A “name and shame” open database (or even a wiki) which tracks article submissions by journal, so that authors have a realistic assessment of review, re-review, and publication times. The amount of time a paper spends in the author’s hands for revision would also be tracked.
  2. Money to pay reviewers and editors to act in a timely fashion and publication charges to finance open scholarly communication. A few journals pay reviewers. When I get one of those, I am far more likely to review quickly than when I get requests from other journals, especially for journals outside my core area, especially when the likelihood of withdrawing social capital is minimal. Other journals charge authors and use the funds to speed the process (but as far as I know these journals don’t pay reviewers). Of course, we need to be clear to avoid “pay to play”. Libraries could help here, redirecting funds from the traditional subscription model to a new open access model, helping their university’s authors publish in truly open access journals. The new federal initiative will hopefully tip the balance.
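As a sketch of idea 1, the record such a tracking database might store could look like the following. The field names and the `days_in_review` helper are hypothetical choices for illustration; a real system would need verified dates and many more fields.

```python
# A minimal sketch of one record in a hypothetical submission-tracking
# database: dates at each stage, plus a derived time-in-review metric.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SubmissionRecord:
    journal: str
    submitted: date
    first_decision: Optional[date] = None    # clock charged to the journal
    resubmitted: Optional[date] = None       # clock charged to the author
    final_decision: Optional[date] = None

    def days_in_review(self) -> Optional[int]:
        """Days the paper sat with the journal before a first decision."""
        if self.first_decision is None:
            return None  # still in the black hole
        return (self.first_decision - self.submitted).days

r = SubmissionRecord("Journal X", submitted=date(2012, 1, 15),
                     first_decision=date(2013, 7, 1))
print(r.days_in_review())  # 533 days to a first decision
```

Aggregating such records by journal would give prospective authors exactly the realistic expectations the post calls for.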

We all know the journal system as we have known it is unlikely to survive as-is for the next 100 years. It is surprising it has lasted as long as it has, but academia is one of the last guilds.
There are lots of cool models out there beyond the traditional one in which the library pays to subscribe to an expensive journal: open access journals with sponsors (JTLU), author fees (PLOS ONE), membership (PeerJ), decentralized archives (RePEc), and centralized electronic archives (arXiv).
Yet we need some way of separating the wheat from the chaff, and peer-review, as imperfect as it is, has advantages over the open internet where any crank can write a blog post.
Eventually time will act as a filter, but peer-review, the review of papers by experts to filter out the poorly written, the wrong, the repetitive and the redundant, can save readers much time.

Annals of excess cost

Drew Kerr at Working on the railroad reports:

“A new $13.2 million station in Ramsey also opened in early November, and is seeing an average of 130 weekday boardings.”

So we have the capital cost of a commuter rail station at roughly $100,000 per passenger. Note this is more than a car, or even a simple house. The park-and-ride ramp (“ramp” is Midwestern for “garage”) has 800 free parking spaces.
Of course if they had the anticipated 800 daily passengers (assuming 1 passenger per vehicle, since this is Amerucah, and one vehicle per space, since why build excess), then the capital cost would be a mere $16,500 per passenger. How much do we spend per bus passenger at most bus stops? Is it even $1?
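For reference, the per-passenger arithmetic in these two posts, spelled out:

```python
# Back-of-envelope cost-per-passenger arithmetic for the Ramsey station.
station_cost = 13_200_000  # reported capital cost ($)

per_actual   = station_cost / 130  # observed weekday boardings
per_new      = station_cost / 15   # only riders new since the bus-era 115
per_forecast = station_cost / 800  # if every parking space produced a rider

print(round(per_actual))    # 101538 -> roughly $100,000 per passenger
print(round(per_new))       # 880000 -> $880,000 per new rider
print(round(per_forecast))  # 16500  -> $16,500 per passenger at full forecast
```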

Automated Vehicles are Probably Legal in the United States

Bryant Walker Smith writes 99 pages saying Automated Vehicles are Probably Legal in the United States:

“This paper provides the most comprehensive discussion to date of whether so-called automated, autonomous, self-driving, or driverless vehicles can be lawfully sold and used on public roads in the United States. The short answer is that the computer direction of a motor vehicle’s steering, braking, and accelerating without real-time human input is probably legal. The long answer, which follows, provides a foundation for tailoring regulations and understanding liability issues related to these vehicles.
The paper’s largely descriptive analysis, which begins with the principle that everything is permitted unless prohibited, covers three key legal regimes: the 1949 Geneva Convention on Road Traffic, regulations enacted by the National Highway Traffic Safety Administration (NHTSA), and the vehicle codes of all fifty US states.
The Geneva Convention, to which the United States is a party, probably does not prohibit automated driving. The treaty promotes road safety by establishing uniform rules, one of which requires every vehicle or combination thereof to have a driver who is “at all times … able to control” it. However, this requirement is likely satisfied if a human is able to intervene in the automated vehicle’s operation.
NHTSA’s regulations, which include the Federal Motor Vehicle Safety Standards to which new vehicles must be certified, do not generally prohibit or uniquely burden automated vehicles, with the possible exception of one rule regarding emergency flashers.
State vehicle codes probably do not prohibit—but may complicate—automated driving. These codes assume the presence of licensed human drivers who are able to exercise human judgment, and particular rules may functionally require that presence. New York somewhat uniquely directs a driver to keep one hand on the wheel at all times. In addition, far more common rules mandating reasonable, prudent, practicable, and safe driving have uncertain application to automated vehicles and their users. Following distance requirements may also restrict the lawful operation of tightly spaced vehicle platoons. Many of these issues arise even in the three states that expressly regulate automated vehicles.
The primary purpose of this paper is to assess the current legal status of automated vehicles. However, the paper includes draft language for US states that wish to clarify this status. It also recommends five near-term measures that may help increase legal certainty without producing premature regulation. First, regulators and standards organizations should develop common vocabularies and definitions that are useful in the legal, technical, and public realms. Second, the United States should closely monitor efforts to amend or interpret the 1969 Vienna Convention, which contains language similar to the Geneva Convention but does not bind the United States. Third, NHTSA should indicate the likely scope and schedule of potential regulatory action. Fourth, US states should analyze how their vehicle codes would or should apply to automated vehicles, including those that have an identifiable human operator and those that do not. Finally, additional research on laws applicable to trucks, buses, taxis, low-speed vehicles, and other specialty vehicles may be useful. This is in addition to ongoing research into the other legal aspects of vehicle automation.”

(Via Marginal Revolution.)

First do no harm: Cities and infrastructure as living systems | streets.mn

Cross-posted from streets.mn: First do no harm: Cities and infrastructure as living systems 


Cities, and the infrastructure networks that bind them, are alive.

Wikipedia says:

Life (cf. biota) is a characteristic that distinguishes objects that have signaling and self-sustaining processes from those that do not, either because such functions have ceased (death), or else because they lack such functions and are classified as inanimate. Biology is the science concerned with the study of life.

Any contiguous living system is called an organism. Organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations. More complex living organisms can communicate through various means.

The city has often been likened to an organism, with downtown representing the control functions of the brain. Scientists have examined the city’s metabolism, and ask what nature teaches about cities.

The better analogy is probably that of the superorganism. Like an ant colony, a city (which obviously contains lots of individual organisms, us) and its infrastructure persists over time, taking in and sending out resources. The city grows (or dies) and occasionally sends off spores to form a new metropole.

Money in the urban economy is then analogous to the energy and food supplies needed by more conventional life forms. The more trade, the larger the city can grow. And like a tree that grows up and out with a rotted-out core, the same often happens to cities. Interaction with the outside world, the source of energy or economic resources, takes place at the boundaries of an organism. The superorganism may eventually decide it doesn’t need the inside, or finds it is best used for storage. Or it may rediscover its abandoned areas. The tension between agglomeration and external trade is resolved in different ways in different places.

We also talk about the lifecycle of technologies, from birth, to growth, to maturity, to decline, analogizing technologies to living organisms. Individual deployments of those technologies may follow similar lifecycles.

The “production function” of living systems combines fixed and variable costs. As homeowners we may plant a tree. But if we don’t take care of the tree, its likelihood of long-term success is low. We maintain it. We prune it. We water it. We protect it from bugs, and so on. We don’t “set it and forget it” with trees, nor should we with infrastructure. We need to think about the lifecycle of buildings and infrastructures. Eventually they fail, or we realize they are going to fail, or we might want to replace them because they are functionally obsolete. To keep them alive we need to monitor, maintain, repair, and eventually rebuild these systems; alternatively, we might just abandon them.

Epidemiology studies the state of human health, as measured by the presence or absence of disease, as well as the causes of those diseases, whether genetic, behavioral, or environmental. Someone should similarly be responsible for studying and treating the state of urban health, focusing upon the city’s circulatory system, and looking at causes including human behavior and the urban environment (which is usually taken as fixed) in which humans interact. As knowledge from epidemiology leads to treatments by doctors prescribing medicine, nutritionists telling the patient to change his habits, or regulators changing environmental standards, knowledge from transportation leads to treatments by traffic engineers prescribing angioplasty for the hardened arterials of our city, planners building bypasses, or gurus telling us to change our behavior or urban environment.

There is at least one useful lesson from medicine: First do no harm. We would not want a doctor to chop off our arm, and leave a gaping hole for a few decades while he figured out what to do next. We should consider why we permit destruction of functional if not optimal parts of cities well before we have any plan or resources to close the gaping wound with something else functional. The equivalent of a city’s doctors need to require replacement by something other than a vacant lot or surface parking before they permit demolition.

The point is that instead of viewing cities as inorganic discrete objects, we should think about the city as a holistic superorganism: where changes to one component have effects on many others, and where decisions now shape the choices available later.

$40bn “fix it first” plan headlines Obama’s infrastructure push

I get quoted in Global Construction Review: $40bn “fix it first” plan headlines Obama’s infrastructure push

In his State of the Union address last month, US President Barack Obama proposed investing $50bn, starting right away, on the country’s transportation infrastructure.
Of that, $40bn would go toward the upgrades most urgently needed on highways, bridges, transit systems, and airports in what the White House has dubbed a “fix-it-first” policy.
“The national transportation system faces an immense backlog of state-of-good-repair projects, a reality underscored by the fact that there are nearly 70,000 structurally deficient bridges in the country today,” the White House said in a statement.
Mr Obama’s plan, which would need congressional approval, also proposes attracting private investment by pairing federal, state, and local governments with private capital, in what’s being called the “Rebuild America Partnership”.
And a third plank in the President’s infrastructure push is cutting red tape. Through a “historic modernisation of agency permitting and review regulations, procedures, and policies”, the President hopes to cut in half the duration of typical infrastructure projects.
The “fix-it-first” element of the plan received a muted welcome from Professor David M Levinson, an expert on the economics of infrastructure at the University of Minnesota.
“The priority should clearly be on repair because most of the system is built out, and we’ve had nationally declining travel over the last 10 years, so there’s not a major need for expansion nationally,” he told GCR.
“The general problem is that the median age of an interstate highway link in the US is almost 50 years old now, and the expected lifespan of such links was in the order of 50 years.
“Generally most of the infrastructure that has got to be there 10 years from now is there now, and if we want it to be there ten years from now we need to fix it.”
The American Society of Civil Engineers (ASCE) has warned of an infrastructure investment gap, between now and 2020, of $846bn in surface transportation. If not addressed, says the ASCE, this shortfall will hurt the US economy.
Is $40bn enough?
“No,” Prof Levinson said. “No one really knows what’s enough. It’s about the equivalent of one year’s federal spending on roads. So it would be like adding an extra year to the decade, or 10% more over 10 years. It’s not trivial. It’s not going to solve the problem, either, but it’s a real amount of money.”
He also questioned the wisdom of infrastructure investment driven by the federal government.
“The states should be addressing this,” he said. “They can prioritise things locally, they know where the issues are, and they’re the beneficiaries.
“They know how much they need to spend locally to satisfy the local risk-reward, benefit-cost ratio. The federal government allocates things by formula and that means there’s a major inefficiency there.”

OpenScheduleTracker


We have an entry in the Knight Foundation’s Knight News Challenge, which asks “How might we improve the way citizens and governments interact?”. Ours is OpenScheduleTracker. Please go there to read the details and “applaud”.

OpenScheduleTracker archives public transit schedules and provides an easy-to-use interface for understanding how schedules change over time, comparing different schedule versions, and identifying what areas are most affected by schedule changes.

What’s The Problem?

OpenScheduleTracker addresses three primary weaknesses in the way that transit system changes are currently reported and discussed:
 
1. Small changes are ignored
Public transit schedules evolve constantly, but we often focus only on big changes — new routes, new stations, line closures — and ignore small changes like schedule adjustments, frequency changes, and transfer synchronization. These small changes are not glamorous, but they can have a big impact on the way that a transit system meets or misses the needs of local communities.
2. Big changes are misunderstood
When a new bus route is added or a new rail station opens, the public discussion tends to focus on effects near the new facility: people want to know what’s happening “in my backyard.” These effects are important, but they are only part of the whole picture. Changes to transit systems have network effects which extend through the entire system: a new station in one neighborhood provides access to local opportunities for all users of the system.
3. Old schedules aren’t available for comparison
Analyzing schedule changes over time is often frustrated by the inconsistent availability of previous transit schedule versions. Transit operators’ policies for archiving historical schedule data vary widely, and even when schedules are archived the public often has access only to the current version. Public transit system schedules are significant investments of time, money, and expertise; when they are lost or inaccessible, the public loses the value of that investment.
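To illustrate the comparison idea, here is a minimal sketch (not the OpenScheduleTracker code) that diffs mean headways between two toy stop-level timetables; real GTFS feeds carry much more structure than these hypothetical dicts of departure times.

```python
# Toy schedule comparison: each schedule version maps stop_id to a list
# of departure times (minutes past some reference hour).

def mean_headway(departures):
    """Average gap, in minutes, between consecutive departures at a stop."""
    times = sorted(departures)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

def headway_changes(old, new):
    """Per-stop change in mean headway between two schedule versions."""
    return {stop: mean_headway(new[stop]) - mean_headway(old[stop])
            for stop in old.keys() & new.keys()}

old = {"stop_A": [0, 30, 60, 90], "stop_B": [0, 60, 120]}
new = {"stop_A": [0, 20, 40, 60, 80], "stop_B": [0, 60, 120]}
print(headway_changes(old, new))  # stop_A's mean headway drops by 10 minutes
```

Even this crude diff surfaces the unglamorous "small changes" — a frequency improvement at one stop — that point 1 above says usually go unreported.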