Rail Advocacy in a Nationalized Environment Needs to Identify Preferred Passenger Corridors

The East Palestine rail disaster is once again igniting calls to nationalize America’s freight railroads. Over the past year, American rail advocacy has coalesced around nationalization as the best policy option for dealing with the major problems endemic to the industry, and the Class I railroads’ horrendous labor policies became front-page news with the specter of a rail workers’ strike last fall. East Palestine cements the notion that the major American railroads’ corporate malpractice extends not just to labor but also to their physical plants; it takes extraordinarily poor maintenance practices for a mainline train running at full speed to derail across a set of wrong-way points. Unfortunately, Norfolk Southern has a history of such negligent practices, which have resulted in several near misses, including a coal-train derailment on well-maintained Amtrak track and a rather spectacular fireball in Columbus, Ohio. Clearly, Norfolk Southern’s delinquent attitude towards safety comes from the top.

However, the purpose of this post is not to discuss the case for nationalization. Rather, what I want to discuss is what comes after nationalization, and how that can help passenger rail advocacy. In particular, I want to draw attention to the presence of redundant corridors in America’s freight network, and to how such corridors can speed up the implementation of medium-speed intercity corridors by focusing freight traffic on one line and freeing up the other for passenger traffic. We can turn to the formation of Conrail, and to the Union Pacific’s consolidation of the Front Range-Bay Area rail routes, for precedents.

Veneers of Competition and Operations Consolidation

American rail regulation is built on a nineteenth-century understanding of how railroads worked. This understanding saw railroads as forming more-or-less coherent “systems” linking disparate communities. Because lines were generally at some remove from one another, railroads might compete in specific corridors while still controlling local monopolies over lineside traffic. (This is also functionally identical to how Japan’s private railroads “compete” with one another.) This yields what we might call a “veneer of competition”: even though different railroads might have redundant paths between major-city pairs, one railroad or another held a local monopoly over the smaller and even medium-sized communities between the more distant city pairs.

Veneers of competition, however, are inherently very inefficient for interurban or intercity traffic–unless sufficient traffic exists on the corridor for all of the redundant alignments to be filled near or to saturation. Nineteenth-century American railroads and Japanese railroads both used intensive land-use development to grow traffic enough to economically justify their corridors, and in both countries, these railroad suburbs have become some of the most desirable places to live in their conurbations. However, if traffic starts dropping, these redundant corridors’ fragility is exposed, and they can enter economic death spirals where the railroads have to cut services to cut costs, which in turn drives traffic away, reducing line viability, driving more cost cutting…

This is exactly what happened to the Midwestern rail network, where, as of 1950, four major railroads (the Pennsylvania, New York Central, Baltimore & Ohio, and Nickel Plate Road) “competed” against each other. As traffic steadily dropped, the two largest and sickest–the Pennsylvania and New York Central–sought to amortize costs by merging with one another and dropping inefficient redundant routings. This effort, however, failed, and the merged line, the Penn Central, filed for bankruptcy in 1970. Ultimately, the Penn Central’s bankruptcy drove the rest of the Northeastern network into bankruptcy as well and forced the 1976 partial nationalization that created Conrail. It was only when Conrail planners were able to redistribute traffic flows across the network and successfully shed unproductive lines that Conrail began to turn a profit. (Unfortunately, Conrail was subsequently eaten by the successor roads to the Baltimore & Ohio and Nickel Plate–the latter being modern Norfolk Southern.)

However, digression aside, the net result of the American rail network’s consolidation is that, while most small cities (like East Palestine, Ohio) are entirely within the regional monopoly of one provider, nearly every city pair of any significance has at least two competing routes serving it. Broadly speaking, east of the Mississippi, these routes are run either by Norfolk Southern or CSX; west of it, by Union Pacific or BNSF. From a freight-planning perspective, this makes for a pretty redundant network, and indeed it is arguable that American railroads prefer running heavy, slow, underpowered freight trains because the network’s redundancies allow for the slotting inefficiencies such operations necessarily entail. Upon nationalization, the new national operator would inherit the big four operators’ physical plants; initial operations would mimic existing patterns.

Fig I. A map of the Union Pacific network from Trains magazine (2013). Notice the duplicate green and orange lines between San Francisco and Salt Lake City, and the red and yellow lines between Salt Lake City and Denver; these are wholly redundant mainlines, and Union Pacific focuses most of its traffic on the red and orange lines, respectively.

However, because the optimal number of routes between a city pair is one, under a national regime primary routes between city pairs should collapse to one. This means that traffic loads on one route–the preferred route–will remain high, while those on the other route will decline over time. We can already see such a pattern in the Union Pacific’s routes between the Front Range and the Bay Area. Historically, the Union Pacific’s mainline ran between Omaha, NE, and Ogden, UT. At Ogden, the Union Pacific diverged into a line towards Oregon via Idaho and a line towards Southern California via Las Vegas. More importantly, however, the Southern Pacific line to the Bay Area met the Union Pacific at Ogden; it was this line that supplied the Union Pacific’s mainline with the bulk of its traffic load. 

In 1983, the Union Pacific bought the Western Pacific, another railroad that paralleled the Southern Pacific route, in order to develop a direct line all the way from Omaha to the Bay Area. This strategy was successful: the massive Southern Pacific system nearly (?) went bankrupt and was bought by a much smaller railroad, the Denver & Rio Grande Western (D&RGW), in 1992. However, the D&RGW had overextended itself and was itself nearly bankrupt some three years later, when the Union Pacific bought it. This meant that the Union Pacific had two wholly redundant mainlines between the Front Range and the Bay Area. Union Pacific traffic patterns today prioritize through traffic along its own mainline to Ogden and the former Southern Pacific route; the alternative routing along the Western Pacific and D&RGW is a secondary route that sees relatively little traffic by comparison.

Conrail’s planners applied the same logic as they redeveloped traffic patterns in the 1980s. From Penn Central, Conrail had inherited two nearly redundant Midwestern networks (i.e. the Pennsylvania’s and New York Central’s). Because the New York Central’s network was generally in better condition than the Pennsylvania’s, Conrail chose to emphasize it, downgrading and eventually spinning off or abandoning most of the Pennsylvania’s former mainlines to Chicago and St. Louis. Likewise, where Norfolk Southern inherited Conrail trackage that paralleled their own (Nickel Plate) routes across the Midwest, they have chosen to de-emphasize the Nickel Plate lines in favor of the Conrail ones. 

Thus, from both rail planning theory and practice, we can expect the operations planners of a nationalized network to focus traffic on certain preferred routes over other options. From this, it follows that the less-preferred route offers significant slotting opportunities to rapidly grow passenger rail traffic. We can therefore think of nationalization as allowing us to identify preferred freight and passenger routes between major city pairs. This leads to the next question passenger-rail advocates should ask themselves: How do we ensure that freight planners don’t turn a preferred passenger route into a freight mainline?

Identifying Preferred Routes: Purpose and Process

The answer to this, of course, is that passenger advocates need to identify preferred passenger routes and coax freight planners to focus mainline freight traffic onto (also clearly identified) alternative alignments. Doing so would allow advocates and planners alike to capitalize on the redundancies inherent in the American rail network to construct broadly parallel passenger and freight networks. It would also identify the most problematic corridors for routing purposes, which in turn should drive capital investment decisions. However, I cannot stress enough that intercity freight routing patterns are more flexible than passenger ones: it is vitally important that the approach to passenger corridors does not force passenger rail onto a suboptimal routing. 

A good example of this risk is the Union Pacific’s prioritization of the Southern Pacific route over the Western Pacific one. Here, most destinations, and hence potential passenger loads, east of Sacramento, CA, lie along the Southern Pacific route–in fact, Reno, NV would be a good place to terminate corridor trains out of California! By contrast, the ex-Western Pacific route completely bypasses Reno. This is fine for a train that isn’t going to get broken down until Utah at the earliest, but it’s definitely not fine for a train whose main revenue stream is expected to be the Reno-California market. Thus, passenger and freight planners and advocates need to work with each other to make sure that both markets’ needs are met.

The next question becomes: how does one identify a preferred route? Here there is a marked separation between freight and passenger routing decisions. Given the kinds of cargo through freight trains are apt to handle (anything from containers to coal to polyvinyl chloride), the optimal routing for through freight will be one that has the lowest grades possible and passes by the fewest population centers possible. By contrast, passenger equipment is more curvature sensitive than grade sensitive; thus, optimal passenger alignments favor low-curvature routes through larger population centers.

Fig II. Current conditions: the Norfolk Southern (black) and CSX (blue) mainlines linking Chicago, Detroit, Cleveland, Buffalo, and Pittsburgh. The CSX mainline generally runs through more rural terrain, while the NS mainline more closely follows the lakeshore.

Consider, as a case study, the Chicago-Cleveland, Cleveland-Buffalo, and Cleveland-Pittsburgh freight mains. Currently, both of the major freight railroads that serve (for a given definition of “serve”) the eastern half of North America, Norfolk Southern and CSX, focus their primary traffic along mainlines that run east from Chicago to the Cleveland area. The CSX route follows the old Baltimore & Ohio mainline from Youngstown, OH west; Norfolk Southern follows the old New York Central mainline west of Cleveland and the old Pennsylvania mainline east of it. Both railroads split traffic in the Cleveland area: most Norfolk Southern traffic to upstate New York diverges in Cleveland, with the onward route towards Buffalo following former Nickel Plate trackage. By contrast, CSX’s split at Greenwich, Ohio, a rural town around a hundred miles southwest of Cleveland, is much more operationally significant, with all CSX traffic heading towards the Northeast’s lower half continuing down the former Baltimore & Ohio with a slight detour in Pittsburgh and all CSX traffic heading towards New York and New England turning north and following former New York Central mainlines through Cleveland, Buffalo, and Albany. If the railroads were nationalized, which of these two alignments would be better for which traffic?

The answer here is that if we want to grow passenger traffic, we need to focus on identifying the passenger corridor first. The freight traffic can reasonably–justifiably–follow either the Norfolk Southern or CSX corridors. However, passenger traffic will strongly favor the Norfolk Southern corridor west of Cleveland, and the CSX corridor east of it. (That would be the former New York Central mainline all the way from Chicago to Buffalo.) West of Cleveland, this corridor accesses the largest city in the region, Toledo, which CSX accesses not along its Chicago-Pittsburgh mainline but rather along a separate Detroit-Columbus route; it also accesses South Bend and Elkhart in Indiana and Sandusky in Ohio. By contrast, the biggest city along the CSX mainline appears to be Defiance, Ohio…and if you don’t even know where Defiance is, well, that’s my point exactly. East of Cleveland, the preference is cloudier. Both routes are very straight and easy to upgrade; both routes run through central Erie, PA and Dunkirk, NY; downtown Ashtabula, OH, sits directly between the two. However, Conneaut, OH’s New York Central train station headhouse is still around, and Amtrak’s Lake Shore Limited follows the CSX alignment, meaning that Erie’s facilities remain CSX-oriented. Ideally, then, freight traffic between Cleveland and Buffalo should be shifted to the Norfolk Southern corridor.

Fig III. Identifying a preferred passenger route (red) following Norfolk Southern corridors west of Cleveland, CSX between Cleveland and Pittsburgh, and a mixed corridor to hit Youngstown, OH, yields a preferred freight route (unmarked).

This case in point illustrates an important underlying fact–America’s rail network can more flexibly address freight needs than passenger needs, and so passenger needs have to be heavily weighted when choosing which corridors to prioritize for freight traffic. Otherwise, in a vacuum, freight planners will choose whichever corridor works best for them, regardless of passenger needs. (And in the case of the Chicago-Buffalo mainline, that is almost certainly “CSX all the way”, both because Willard is a better yard and because the CSX route bypasses all of the major population centers between Chicago and Cleveland.) If transit advocacy views nationalization of the rail network as a desirable end goal, it must aggressively identify good passenger corridors and push for them to be treated as such. This does not just mean crayoning; it also means demonstrating that investing in passenger rail in these corridors is economically sound.

We also need to make these decisions at the regional as well as the state level, for two reasons. First, many optimal corridors clearly cross state lines. For example, the group of cities around Lake Erie and between Toledo and Chicago is at least as geographically expansive as the Northeast Corridor and is home to about 24 million people–roughly the same number of people as the ongoing CAHSR project will serve. It is also a region that sprawls across six states–Illinois, Indiana, Michigan, Ohio, Pennsylvania, and New York. Second, for most of these states, a corridor linking together Chicago, Toledo, Detroit, Cleveland, Erie, and Buffalo will only hit one major city and is thus of secondary importance to intrastate connectivity (e.g. the 3C, Keystone, Empire, and Wolverine corridors are all preferred investment routes for their respective states). Unless we can develop the capacity to intensify service along the Lake Shore Limited’s route, we will remain hobbled in our ability to provide a strong interstate rail network where it needs to go.

Concluding Thoughts

Thus we can see how important it is to identify optimal passenger rail corridors. Such an identification program is critical to our ability to lobby for passenger-rail intensification, especially in the context of a nationalized rail system where there is strong pressure in route planning to emphasize one route between a given city pair at the cost of de-emphasizing another route. By identifying routes of focus for passenger traffic, rail advocates can control the conversation and coax freight traffic planners into moving traffic away from priority passenger routes by showing that passenger traffic along those routes is indeed a priority. However, doing so will require ramping advocacy up to a much greater scale than it currently operates at.

Hankyu Operations: A Case Study

The Hankyu Railway is one of Osaka, Japan’s “Big Four” private railways. It operates three routes out of its Umeda Terminal in central Osaka: the Kobe, Takarazuka, and Kyoto lines. It does so with extremely high throughput, operating roughly twelve trains an hour all day, every day on its Kobe and Takarazuka lines. (It operates trains on the Kyoto Line with even greater frequency.) This is a frequency and reliability that continues to elude American mass transit agencies. This raises the question: how does Hankyu do it?

The answer is that Hankyu does not run complex schedules with very many stopping patterns. Instead, it operates very simple schedules at very short intervals. This creates “pulses” of movement throughout the system, a rhythm so regular you can set your watch by it. This idea of a simple, regular, and highly rhythmic schedule is known as takt in English, a term Anglophone transit planners borrowed from their German cousins. Building schedules to takt maximizes throughput on space-constrained mainlines.

Hankyu and Takt

The most regular schedules in the world are the ones achieved by single-line subways, such as the lines of the Paris Métro. This is not a coincidence. These systems are closed, their equipment’s performance characteristics are well known, and operations planners can schedule throughput with extremely short intervals. Some of the world’s busiest subway lines achieve intervals as low as 90 seconds on two-track infrastructure.

The situation is a bit different for Hankyu. First, none of its lines are closed. The Kobe Line interlines with the Hanshin and Sanyo railways through central Kobe; the Takarazuka Line connects with the Nose Railway, which operates some through service to Osaka via Hankyu; and the Kyoto Line’s branch to Senri through-runs with the Osaka Metro’s Sakaisuji Line. Second, the Hankyu network is highly branched. The Kobe Line has branches from the mainline to Itami, Imazu, Takarazuka, and Koyoen; the Takarazuka Line, besides the bespoke Nose Railway (which itself branches twice), has a branch to Mino’o; and the Kyoto Line, besides the branch to Senri, also has a branch to Arashiyama. Finally, Hankyu’s ridership is more “interurban” than “intraurban”; that is, its riders expect fast service between distinct cities more so than they expect stops at various points of interest within the métropole. While inarguably efficient, running all-stops trains at constant intervals is suboptimal in terms of ridership expectations. Simply put, Hankyu’s riders do not expect to stop between e.g. Osaka and Kyoto.

The solution to this problem is to operate multiple, highly regular service profiles on the line. That is, there is a strong distinction between “local” and “express” trains, where local trains stop at all stops along the line while express trains only stop at a fixed subset of these stops; namely, the busiest. Thus, while the Kobe Line has approximately 20 stops from Hankyu’s Umeda Terminal to Shinkaichi, express Kobe trains only stop at a third of these, mostly concentrated in the Nishinomiya and Kobe areas. Because express trains are faster than locals, they set the interval rhythm. That is, they set the takt. Express trains leave Umeda at :X0 (ten, twenty, thirty past the hour, and so on); local trains leave once the express has cleared the block and the signal clears — approximately a minute later.

During the day, Kobe Line trains run at ten-minute intervals. This means that a local-express pair departs Umeda once every ten minutes. The same is true of Kobe’s main station, Sannomiya. Hankyu goes a step further, too: it uses the takt it established on the Kobe Line to schedule the Takarazuka Line. In fact, Kobe Line and Takarazuka Line trains depart regularly on the same intervals! However, Hankyu does not extend this Kobe-Takarazuka takt to the Kyoto Line. This is because the Kyoto Line has the longest run of the three, and has three primary service patterns (instead of the two the other two lines share). Because of this, the Kyoto Line has a different natural interval than the other two, one which sees three departures in 12 minutes rather than two in 10. (That is, the Kyoto Line’s midday frequency is ~15 trains per hour (tph).)
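
To make the pulse idea concrete, here is a minimal sketch of such a takt timetable in Python. The ten-minute base interval, the roughly one-minute local offset, and the express-then-local pulse follow the description above; the function and its names are purely illustrative, not anything Hankyu publishes.

```python
# A minimal sketch of one hour of a takt schedule: an express departs at the
# start of each pulse, with a local following roughly a minute behind.
# Interval and offset follow the Kobe Line pattern described in the text;
# everything else is illustrative.

def takt_departures(base_interval_min=10, local_offset_min=1, hours=1):
    """Return (minute, service) departure pairs for a repeating takt."""
    departures = []
    for pulse_start in range(0, hours * 60, base_interval_min):
        departures.append((pulse_start, "express"))
        departures.append((pulse_start + local_offset_min, "local"))
    return departures

if __name__ == "__main__":
    for minute, service in takt_departures():
        print(f":{minute % 60:02d}  {service}")
    # Six pulses of two trains each: 12 departures per hour (12 tph).
```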

Takt-Supporting Infrastructure

The catch is that Kobe Line express trains are faster than its local trains; indeed, they are more than nine minutes faster over the length of the line! What this means is that either (a) line capacity is set by the minimum interval in which the line stays clear (about 20 minutes), or (b) infrastructural investment must be made to support the takt. This is the concept of a timed overtake. Simply put: at some point between Osaka and Kobe, the express train runs into the preceding local train. The point where this occurs can, of course, be calculated, given average train speeds. Thus, the station closest to (but not beyond) the collision point needs to be a passing siding as well as a station. This is where faster expresses are able to pass slower locals. On the Kobe Line, this occurs at Nishinomiya-Kitaguchi.
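
As a back-of-the-envelope illustration of that calculation, the sketch below works out where an express catches the local that departed one pulse ahead of it (nine minutes earlier under the ten-minute takt sketched above). The average speeds are my own illustrative assumptions, not published Hankyu figures.

```python
# Where does an express catch the local that left one pulse ahead of it?
# The local departs `lead_min` minutes before the express; the express catches
# it where their distances from the terminal coincide. Speeds are illustrative.

def catch_up_distance_km(v_local_kmh, v_express_kmh, lead_min):
    """Distance from the terminal (km) at which the express catches the local."""
    if v_express_kmh <= v_local_kmh:
        raise ValueError("the express must be faster than the local")
    lead_h = lead_min / 60
    # local position: v_l * (t + lead); express position: v_e * t
    t_catch_h = v_local_kmh * lead_h / (v_express_kmh - v_local_kmh)
    return v_express_kmh * t_catch_h

if __name__ == "__main__":
    # e.g. a 40 km/h average local, a 60 km/h average express, a 9-minute lead
    print(f"catch-up point: {catch_up_distance_km(40, 60, 9):.1f} km out")
```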

Stops at Nishinomiya-Kitaguchi have this rhythm: First, a local train pulls into the station. Roughly a minute later, it is followed by an express train. This meet facilitates a cross-platform transfer, which is useful for passengers wishing to go beyond Nishinomiya but not quite all the way to downtown Kobe (or Osaka, in the other direction). The express train departs. Finally, once the block clears, the local train departs as well. It should be apparent that this process ensures that the dwell time for locals at the meet point is quite long — at least three minutes and perhaps as much as five. In fact, this dwell time is so long that locals may be thought of as acting as different routes altogether on either side of the timed overtake. However, without this timed overtake, it would be impossible to achieve the frequency Hankyu achieves on what is fundamentally a two-track mainline.

The Takarazuka Line is significantly shorter than the Kobe Line. Unlike on the Kobe Line, where expresses pass locals at Nishinomiya-Kitaguchi, on the Takarazuka Line, locals terminate where following express trains meet them. This occurs at Hibarigaoka-Hanayashiki. What this means is that Takarazuka Line trains have two distinct service patterns: (1) express trains which originate at Takarazuka, and (2) local trains which originate at Hibarigaoka-Hanayashiki, and depart that station once expresses clear the block ahead. For both the Kobe and Takarazuka lines, Hankyu’s operations planners identified where the timed overtake was to occur given a specific all-day service profile, and designed the infrastructure to fit that profile.

This targeted investment can also be used to allow for even tighter intervals. On the Kobe Line, Sonoda station — approximately halfway between Umeda and Nishinomiya-Kitaguchi — is a fully four-track station, despite only seeing local service. The same is true for Sone station on the Takarazuka Line (approximately halfway between Umeda and Hibarigaoka-Hanayashiki). Unlike these two, passing tracks run through the middle of Rokko station (approximately halfway between Nishinomiya-Kitaguchi and Kobe-Sannomiya). In all three of these cases, however, the added infrastructure allows for timed overtakes at double the line’s normal frequency. The Kobe and Takarazuka lines are therefore able to handle frequencies of up to 24 tph. This helps Hankyu both manage peak loads and provide peak service patterns outside of its two (or, in the Kyoto Line’s case, three) normal service profiles.

There is one other point to make about Hankyu’s physical plant here. Each of the three lines heading to Umeda has its own separate two-track mainline, each leading to three bay platforms. Three tracks, as it turns out, are more than adequate for Hankyu’s short-turn traffic, and three-track terminals are standard practice among railroad lines throughout Japan: Hankyu and Hanshin’s original termini at Sannomiya likewise had three tracks, as do Hankyu’s Kyoto terminus at Kawaramachi, Kintetsu’s original Nara Line terminus at Namba, Kintetsu’s terminus at Nara, the Kobe Electric Railway’s terminus at Shinkaichi, JR’s Nara Line and San’in Main Line termini at Kyoto, and Kintetsu’s Kyoto Line terminus at Kyoto, among others. Because the Kobe, Kyoto, and Takarazuka lines all depart onto their own segregated mainlines rather than onto a unified trunk, there is no need for Hankyu to build a massive throat linking all nine tracks at Umeda. This also allows Kobe and Takarazuka line trains to have simultaneous departures, and Kyoto Line trains to have near-simultaneous departures (which can become simultaneous when their respective takts line up).

In other words, capital investments on the Hankyu system are guided first and foremost by a primary service profile — what is necessary to support a takt that best balances competing ridership demands with a minimum of schedule variations. This allows Hankyu to achieve 12 tph frequencies on the Kobe and Takarazuka lines and 15 tph frequencies on the Kyoto Line, all day, every day. This is in direct contrast to the United States, where capital investments tend to occur as a hedge against poor operations planning, which in turn results in excessive capacity that enables even poorer operations planning. However, Hankyu’s three mainlines do not exist in a vacuum. They are fed by a network of at least a dozen branches and two feeder railroads, and they themselves interline with four other railroads (including one of the feeder routes).

But What About Branching?

One may have noticed already that the Hankyu network is heavily branched. It has around half a dozen branches; in addition to these, it has a feeder route (the Nose Railway) which acts as a branch in its own right and a second feeder route with a free transfer (the Kobe Electric Railway), and it interlines with three more railways (the Osaka Metro, the Hanshin Electric Railway, and the Sanyo Electric Railway).

In European, Australian, and North American systems, not only are suburban rail lines quite heavily branched, but the rider expectation is that the train they board will take them directly into the city center. This is likely closely related to the low-frequency clockface schedules one finds in outer-suburban regions, if such schedules meaningfully exist at all. Furthermore, as Alon Levy notes, transfers incur a ridership penalty; they are necessarily inconvenient, and the longer the transfer, the less convenient it is, and the greater the penalty.

By contrast, Hankyu follows Jarrett Walker’s “Frequency is Freedom” dictum. That is, Hankyu does not route trains directly from any of its branchlines to Umeda. Instead, its branchlines are serviced by their own, independent shuttle train operations running from the branch terminus to the mainline junction. Like the mainline, these branchline trains also run at high frequencies: the Koyo Line runs every 10 minutes; the Imazu Line to Takarazuka every 7.5 minutes; the Imazu Line to Imazu (these operate independently) every 10 minutes; the Itami Line roughly every 7.75 minutes; the Mino’o Line every 10 minutes; the Nose Electric Railway (ignoring its own branch) every 10 minutes; the Senri Line about every 6 minutes; and the Arashiyama Line about every 8 minutes. These, note, are all-day frequencies; I checked the schedule at half past eight in the evening. By implementing high-frequency service on mainline and branchline alike, Hankyu is able to minimize the transfer penalty at its junction stations.

This is not to say the transfer penalty does not exist, however; Hankyu’s branchlines’ ridership is noticeably lower than its mainlines’, and it is fairly evident that its profitable mainline services cross-subsidize far less profitable branchline services. 

Of note here: while Hankyu’s practices regarding its branchlines are standard among the private operators, the largest operator in the area, JR West, follows a different practice on its Urban Network. The Urban Network is massively interlined, facilitating anywhere-to-anywhere service patterns (within limits). It operates more in line with European norms than do Hankyu, Hanshin, Sanyo, Keihan, Kintetsu, Nankai, or any of the other minor operators within the region. Thus, trains on the JR Kobe and Takarazuka lines, which converge at Amagasaki, may run either towards Kyoto and beyond via Osaka Station or towards Kizu (and Nara and beyond) on the Gakkentoshi Line. Trains on the JR Hanwa Line (Osaka – Wakayama), Kansai Airport Line, and Yamatoji Line (Osaka – Nara) may terminate within Osaka (at Tennoji for the Hanwa and Kansai Airport lines; at Namba for the Yamatoji Line), or they may continue to JR-Osaka via the Osaka Loop Line. JR West supports this extensive interlining with a quad-track mainline from Maibara in the east, through Kyoto, Osaka, and Kobe, to Nishi-Akashi in the west, as well as with double-tracked mainlines everywhere else.

Like the private railways, JR supports extensive clockface scheduling and takt operations across its system. However, because it uses operating practices more akin to Western regional rail than to the metro-network-style practices Japan’s private operators favor, it necessarily has more complex operations, which in turn lead to reduced services across some of its lines, e.g. infrequently spaced (by Japanese standards) locals on the Hanwa Line. Its operational practice of through-running Wakayama and Kansai Airport expresses to JR-Osaka is also a significant driver of congestion on the Loop Line, Osaka’s answer to Tokyo’s famous Yamanote Line; in order to alleviate this, JR intends to build a regional-rail-style central tunnel along Naniwasuji through the western side of central Osaka. This will pair with the Tozai Line, a tunnel linking the Gakkentoshi Line on Osaka’s east side with the Kobe and Takarazuka lines on its west side; ideally, the Naniwasuji tunnel will also allow JR to design a more operationally coherent network.

We have thus far found that Hankyu’s operations practices are built around a very simple schedule, repeating at a regular rhythm. This we call takt. Having implemented this type of schedule, Hankyu then uses it to inform its capital investment strategy. It creates larger stations, either quad-tracked or with passing sidings, where express trains can meet locals (for cross-platform transfers) or overtake dwelling locals. Specifically, Hankyu has built facilities for timed overtakes at Rokko, Nishinomiya-Kitaguchi, and Sonoda on the Kobe Line, and at Hibarigaoka-Hanayashiki and Sone on the Takarazuka Line; these facilities support peak frequencies of 24 tph and all-day frequencies of 12 tph, or two trains, one express and one local, per 10-minute interval. (Hankyu has presumably built similar facilities on the Kyoto Line, but we have not taken the time to analyze this.) In addition, it has also constructed a six-track trunk leading out of Umeda, organized into three separate two-track lines, in order to support simultaneous departures.

However, there is one more significant consideration we need to give to Hankyu’s scheduling practices, and that is how the Kobe Line interlines with the Hanshin and Sanyo electric railways in central Kobe. 

Hankyu and Interlining

Hanshin and Sanyo

In 1968, the Kobe Rapid Railway opened. This underground railroad linked the Sannomiya termini operated by the Hankyu and Hanshin electric railways with the Sanyo Electric Railway on the other side of Kobe. It also extended the Kobe Electric Railway, a minor operator on the city’s mountainous north side, to a new terminal at Shinkaichi, one with a free transfer within the fare zone. With the Kobe Rapid Railway, Hanshin and Sanyo began extensive through-running; even to this day, the Hanshin and Sanyo networks remain operationally unified.

The Hanshin-Sanyo mainline has even more complex operations than the Hankyu Kobe Line. (Like Hankyu, Hanshin operates trains from a terminal at Umeda in Osaka.) There are no fewer than five basic operations Hanshin and Sanyo run along their combined line:

  1. Express trains from Umeda to Himeji (these are the true Hanshin-Sanyo shared expresses)
  2. Express trains from Umeda to Higashi-Suma, just past the Kobe Rapid Railway’s western portal (these are the Hanshin expresses)
  3. Express trains from Himeji to Hanshin-Sannomiya (these are the Sanyo expresses)
  4. Local trains from Hanshin-Umeda to Kosoku-Kobe, a station within the Kobe Rapid Railway tunnel
  5. Local trains from Himeji to somewhere in Kobe (Sanyo allows Shinkaichi and both the Hankyu and Hanshin Sannomiya termini as potential endpoints).

Above and beyond this, Hanshin also fits into its operations pattern trains running from Sannomiya to Amagasaki (where the Namba Line diverges from the Hanshin mainline) and on to Namba on Osaka’s south side and Nara via the Kintetsu Railway. Like Hankyu, both Hanshin and Sanyo build their schedules around 10-minute takts; this is true even though the Hanshin line between Osaka and Kobe has nearly twice as many stops as either JR or Hankyu.

Like Hankyu, Hanshin and Sanyo both make extensive use of timed overtakes to yield both a frequent all-day takt and the ability to schedule the takt at double frequency (or run non-takt variants) during peak hours. Heading east from Sannomiya, timed overtakes are available on Hanshin at Oishi, Mikage, Ogi, Nishinomiya, Koshien, Amagasaki-Center-Pool-Mae, Amagasaki, Chibune, and Noda. Of these, I can only say with certainty that Mikage and Nishinomiya are used for timed overtakes during normal frequencies, although the pattern strongly suggests Pool-Mae and Chibune are as well. 

Heading west from Shinkaichi, timed overtakes are available on Sanyo at Higashi-Suma, Sanyo-Suma, Kasumigaoka, Sanyo-Akashi, Fujie (on the eastbound side only), Higashi-Futami, Takasago, and Oshio. Keeping in mind that the base takt only needs every other available overtake facility (the rest being reserved for peak-hour doubling), and the fact that Fujie clearly cannot be used for westbound timed overtakes (and therefore can only be used for peak-hour overtakes heading eastbound), the timed overtakes Sanyo uses to maintain normal frequency are (1) Sanyo-Suma, (2) Sanyo-Akashi, (3) Higashi-Futami, and (4) Oshio. Strangely, it does not appear there is a timed overtake between Oshio and Himeji (Shikama’s center track is a stub track for Sanyo’s Aboshi Line), which is perhaps mitigated by Himeji itself being a four-track terminal.

This allows both the Hanshin and Sanyo mainlines to employ the same takt schedule that Hankyu employs on its Kobe and Takarazuka lines. This is especially beneficial for Hankyu because it can time its Kobe Line departures from Shinkaichi to fit within Hanshin-Sanyo’s takt. In addition to this, Hanshin can program its Nara trains (departing from Sannomiya) to fit into available slots left over in its takt.  

Finally, it is worth noting that Hanshin times its locals to terminate at Kosoku-Kobe with a cross-platform transfer to Hankyu, and Sanyo times its locals to depart Shinkaichi, the next station up, a minute after Hankyu arrives. This complex double cross-platform transfer is the product of a well-defined takt combined with the strict operational discipline needed to maintain it.

Nose and Kobe Electric Railways

However, any discussion of interline movements in the Hankyu system is incomplete without a discussion of such movements in Hankyu’s feeder railways. There are, recall, two: (1) the Kobe Electric Railway and (2) the Nose Electric Railway. These two feeder systems handle interlined operations in very different ways. 

The Nose Electric Railway connects to the Hankyu Takarazuka Line at Kawanishi-Noseguchi, and its mainline runs from there north to Yamashita. At Yamashita, two branches diverge. One runs west to Nissei-Chuo; the other runs east to Myokenguchi. Despite the appearance of a complex system, Nose is actually operationally simple. It runs three services: one from Kawanishi-Noseguchi to Yamashita, one from Yamashita to Nissei-Chuo, and finally one from Yamashita to Myokenguchi. Because it runs these three services separately, it is able to provide 10-minute frequencies on all three, rather than having either (a) a mainline with double the frequency of the branches or (b) a branch that has a significant frequency penalty relative to the mainline.

By contrast, the Kobe Electric Railway is what the FRA would call a “sealed system”. Unlike the Nose Electric Railway, which is connected to Hankyu’s Takarazuka Line at Kawanishi-Noseguchi, the Kobe Electric Railway shares neither Hankyu’s track gauge nor its structure gauge. It runs on narrow-gauge tracks, like JR and unlike the rest of the Hankyu network (excepting the Osaka Monorail), which uses standard gauge, but its structure gauge is narrower than JR’s. There are no track connections between the Kobe Electric Railway and any other railway, anywhere.

Unlike anything else in the Hankyu system, the Kobe Electric Railway practices extensive interlining of its two mainlines, the Sanda Line linking Shinkaichi with Sanda and the Ao Line linking Shinkaichi with Ao. During the day, the Kobe Electric Railway runs trains along the Sanda Line once every ten minutes, and it also runs a shuttle along the Ao Line from Nishi-Suzurandai to its junction with the Sanda Line at Suzurandai at roughly the same interval. (It should also be noted that this stretch of the Ao Line is single-tracked, so this shuttle runs at the shortest interval the infrastructure allows.) It also runs trains from Shinkaichi to Shijimi, Shinkaichi to Ono, and Shinkaichi to Ao at significantly lower frequencies than the shuttle train between Suzurandai and Nishi-Suzurandai. Communities along the Ao Line west of Nishi-Suzurandai see trains less than once every 20 minutes outside of peak hours; they are noticeably underserved relative to the Kobe Electric Railway mainline to Sanda, or anywhere on the standard-gauge Hankyu network.

Lessons from Japanese Operations

Lesson 1. Schedules need to be simple and rhythmic.

It is quite common for American commuter rail schedules to have extremely complex service profiles. This is particularly bad on LIRR’s Port Jefferson Branch, which appears to have as many distinct service profiles as weekday trains serving the line, and there are a lot. (I wish I was kidding.) However, takt scheduling demands the opposite approach. It demands a high degree of regularity to departures, and stemming from that, a scheduling approach that minimizes variations in schedule profiles. In fact, Japanese railways post all possible stopping patterns a line can have, if it has more than one (and most do), over at least one car door in every train car. The space constraint such posters create further limits the temptation for Japanese operators to create unnecessarily complex schedules. 

Simple schedules are also repeatable schedules. This means that the schedule should be optimized to a given interval of repetition, preferably a divisor of 60 (so that the schedule repeats a whole number of times each hour) or of 30 (so that it repeats a whole number of times each half hour). As we have seen, Hankyu’s systemwide schedules repeat at intervals of 10 or 12 minutes, if not even more frequently. This makes takts extremely easy: a 10-minute schedule repeats 6 times an hour, and a 12-minute one, 5 times. Also notice here that the takt’s basic pulse need not be a singular departure: Hankyu’s basic pulse sees two departures every 10 minutes on the Kobe and Takarazuka lines, and three departures every 12 minutes on the Kyoto Line. This means that the all-day frequency is 12 trains an hour on the Kobe and Takarazuka lines, and 15 trains an hour on the Kyoto Line.
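
As a quick sanity check on that arithmetic, here is a trivial sketch; the intervals and departures-per-pulse figures are the ones quoted in this section, and the function names are mine.

```python
# Hourly frequency of a takt: pulses per hour times departures per pulse.
# The assert encodes the "divisor of 60" rule discussed above.

def pulses_per_hour(interval_min):
    assert 60 % interval_min == 0, "takt interval should divide the hour evenly"
    return 60 // interval_min

def trains_per_hour(interval_min, departures_per_pulse):
    return pulses_per_hour(interval_min) * departures_per_pulse

print(trains_per_hour(10, 2))  # Kobe and Takarazuka lines: 12 tph
print(trains_per_hour(12, 3))  # Kyoto Line: 15 tph
```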

Lesson 2. The schedule should identify chokepoints, where capital improvements can then be concentrated.

Optimally, a railway line should run a single service profile, namely all-stops locals at the greatest frequency the signaling system allows. This is how subways operate. However, passenger demands often create mixed service profiles. This means that an operator might need to run express trains (which skip stops) along the same line as local trains (which do not). Sooner or later, the express train is going to run into the preceding local.

If Hankyu’s Kobe Line were two tracks for its entire length, it would be horrendously inflexible in terms of schedule profiles. It could run either locals at extremely high frequencies, local-express pairs at a 20-minute interval (it takes 20 minutes for an express to catch up to the previous local at Sannomiya), or semi-expresses with ad-hoc stop skippage in order to try to balance service loads from various stations (which is why the Port Jefferson Branch’s schedule looks like drunk stoners on cocaine planned it). Instead, by building the Kobe Line’s primary operations profile around 10-minute frequencies, Hankyu was able to identify the capital investment necessary to achieve such frequencies. This turned out to be quad-tracking its midpoint station, Nishinomiya-Kitaguchi.

Quad-tracking a single station is obviously a lot cheaper than quad-tracking the entire line. But the limited capital expense was optimally placed to maximize operational leverage. By placing the Kobe Line’s quad-track station where expresses catch up with locals when departures come at 10-minute intervals, Hankyu doubled the line’s frequency, given the mixed service profile, from 6 tph to 12. Essentially, the quad-track station at Nishinomiya-Kitaguchi allows Hankyu to have both frequent fast trains and frequent trains that stop everywhere on the same two-track Kobe Line.

Hankyu pushed this concept further: it quad-tracked two other stations on the Kobe Line, Rokko and Sonoda. These stations lie at the midpoints between Kobe-Sannomiya and Nishinomiya-Kitaguchi, and between Nishinomiya-Kitaguchi and Osaka-Umeda, respectively, which allows Hankyu to schedule its basic unit of departure (a local-express pair) every 5 minutes at peak, a 24 tph base schedule. Hankyu can achieve greater throughput still during peak hours by building schedules with somewhat different local and express patterns, such as locals which terminate at Nishinomiya-Kitaguchi and trains which express to Nishinomiya-Kitaguchi and run local past it. It also implies that the Hankyu Kyoto Line has a peak frequency of at least 30 tph, barring fouling issues the Senri Line creates at Awaji.
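
Here is a small sketch of that frequency math. The 20-minute catch-up time and the local-express pair per pulse come from this section; the framing of each added layer of evenly spaced overtake facilities as halving the feasible interval is simply a restatement of the argument above, not Hankyu’s own planning formula.

```python
# Feasible frequency on a two-track line with a mixed local/express pattern:
# without overtakes, the pulse interval is the express's catch-up time; each
# added layer of evenly spaced overtake facilities halves that interval.

def max_tph(catch_up_min=20, departures_per_pulse=2, overtake_layers=0):
    """Trains per hour supported by a given number of overtake 'layers'."""
    interval_min = catch_up_min / (2 ** overtake_layers)
    return int(60 / interval_min) * departures_per_pulse

print(max_tph(overtake_layers=0))  # no overtake facilities: 6 tph
print(max_tph(overtake_layers=1))  # midpoint (Nishinomiya-Kitaguchi): 12 tph
print(max_tph(overtake_layers=2))  # quarter points (Rokko, Sonoda): 24 tph
```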

Lesson 3. Timed overtakes are your friend.

Mixed traffic is both ubiquitous and difficult. Unlike in an environment where every train is expected to make every stop at high frequency, it is impossible to maintain consistent spacing in mixed-traffic schedules. As an express train moves along the line, it gets further away from the local train behind it and closer to the one in front of it. American schedules tend to mitigate this by making most trains not-quite-expresses and not-quite-locals. This results in every stop getting similar (irregular) service. However, this solution is highly suboptimal.

As Hankyu’s Kobe Line shows, timed overtakes unlock capacity along a given route. Without timed overtakes, the Kobe Line would have to be operated either (a) like a subway line, with very high throughput capacity but a stopping pattern that would render the longest Hankyu runs uncompetitive against JR; (b) with express speeds that would allow the longest runs to be competitive against JR, but with frequencies that limit the effectiveness of this competition; or (c) with a schedule that made every train stop at some, but not all, local stops (which would render Hankyu uncompetitive against JR both in terms of frequency and speed).

By building a passing facility at Nishinomiya-Kitaguchi, Hankyu unlocked frequency. Its available throughput went from 2 departures every 20 minutes (6 tph) to 2 departures every 10 minutes (12 tph). With that passing facility, it was able to run trains fast enough to match JR’s time from Osaka to Kobe, at frequent enough intervals to match what JR could achieve on its quad-tracked mainline. It was further able to unlock frequency by adding passing facilities at Rokko and Sonoda, allowing peak frequencies of 2 departures every 5 minutes (24 tph). In other words, through the magic of timed overtakes, Hankyu was able to make its two-track line mimic a four-track line’s frequency.

This is an especially important observation when one considers that a great deal of American light rail infrastructure (hello DART!) runs on two-track lines that extend far out into the suburbs. By building a schedule such that expresses will always overtake locals at predefined locations, and providing infrastructure capable of handling the meet, an operator like DART can achieve throughput, and hence frequency, that mimics what a four-track mainline is capable of even without having a four-track line in its own right. 

Lesson 4. Branching is a double-edged sword.

Modern American network design tends to heavily emphasize interlining, and most European commuter-rail variants heavily branch. The same is true of the Japanese rail network. However, the way Americans, Europeans, and Japanese handle branching is very different. 

In Euro-American contexts, branching is viewed as a way to achieve greater trunkline frequencies. If all trains are scheduled to access the city center, then the trunkline frequency can be stated as the sum of branchline frequencies. However, this approach has consequences. It privileges frequencies on the trunk over those on the branches. For example, a trunk with two equal branches can achieve a frequency of one train every 10 minutes only if the branches are allowed to have frequencies of one train every 20 minutes. High frequency in the core is the consequence of suboptimal frequency on the branches. 
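
To put rough numbers on that tradeoff, the sketch below compares the two operating models discussed in this section; the ten-minute trunk headway mirrors the example above, and the function names are mine.

```python
# Branch headways under the two models: if every branch train runs through to
# the trunk, each branch only gets a fraction of the trunk's frequency; if the
# branch runs as an independent shuttle (the Hankyu model), it can keep a
# short headway of its own at the cost of a transfer at the junction.

def interlined_branch_headway(trunk_headway_min, n_branches):
    """Headway on each branch when all branch trains share the trunk."""
    return trunk_headway_min * n_branches

def shuttle_branch_headway(shuttle_headway_min):
    """Headway on a branch run as an independent shuttle."""
    return shuttle_headway_min  # limited only by the branch's own capacity

print(interlined_branch_headway(10, 2))  # through-running model: 20-minute branches
print(shuttle_branch_headway(10))        # shuttle model: 10-minute branches
```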

Another problem branching incurs is that it builds uncertainty into the system, uncertainty corrected for by building excess capacity. This is because the more lines feed into the trunkline, the more likely it is that a problem on one of them cascades across the system. This is a problem JR has to a much greater degree than any of Japan’s private railways. (Delays of up to 10 minutes on trains in JR West’s Urban Network, which uses Euro-American branching, are uncommon but not unheard of.)

Japan’s private railways’ solution, with exceptions, has been to separate branchline schedules from mainline ones. This allows each line to be independently scheduled, such that each service runs at high frequency. The flip side of this coin, however, is that branches always require transfers. These transfers can be, but are not always, cross-platform transfers. Even when they are not, however, one never leaves the fare area. The net result of this is that most branches on Japanese private railways operate like short shuttles linking the branch terminus with the mainline, like the New York subway’s S trains. These shuttles normally operate at 10-minute frequencies. Extremely high-frequency operations such as these minimize the impact of transfer penalties.

Perhaps the ultimate example of how much high frequencies reduce transfer penalties is the Kobe Electric Railway. The Kobe Electric Railway terminates in Shinkaichi, on Kobe’s west side and some 40 minutes away from Sannomiya on foot. However, because both the Kobe Electric Railway and the Hankyu-Hanshin-Sanyo system operate out of Shinkaichi at very high frequencies, the transfer penalty for passengers from the Kobe Electric Railway heading downtown (or towards Osaka, Kyoto, or Himeji) becomes all but invisible. The overall effect would be similar to passengers transferring at Secaucus Junction being able to expect their connecting train to arrive shortly without even consulting a timetable.

It should be noted, however, that part of why this solution works is the Kansai region’s high overall density. Around 20 million people live within a 40-mile radius of Osaka’s city hall, a never-ending sea of urbanity; no US conurbation is this dense. However, this does not mean Japanese lessons are inapplicable. Indeed, Japanese branching habits occur not just near central Osaka, but also in its suburban and even exurban fringes. A good example of this is the Keihan Railway’s Katano Line. This route runs along the eastern edge of Osaka’s built-up area and is far more suburban than anything closer to the core. However, Keihan operates the Katano Line the same way Hankyu operates the Imazu Line, that is, as a shuttle service from its junction with the mainline to its ultimate terminus.

Thus, while branches are able to serve many more destinations than a single mainline, it is useful to question how we should go about operating them. When it comes to branches, there is a very real tradeoff between high all-day frequency, on the one hand, and one-seat service, on the other. While one-seat service may still be desirable during the peaks, frequencies that make cross-platform transfers at the junction as painless as possible are a far better solution for very frequent all-day service.

Conclusion

The Hankyu Railway is one of Osaka’s largest private railway operators. It has a long history of operating intensive passenger service at a profit. It does so by operating high-capacity, high-frequency, and high-throughput service — on double-track mainlines. To do this, it has focused on optimizing operations first, and then on directing capital to augment and expand operational capacity.

This is the heart and soul of the Swiss transit maxim: organization before electronics before concrete. By bringing throughput on mixed-traffic, two-track mainlines to its absolute maximum, Hankyu is able to provide world-class urban rail service at a fraction of the capital cost embedded in peers and competitors with four-track trunklines.

Convention Center Hotels

Introduction

The Pennsylvania Convention Center has historically been seen as something of a white elephant in the City of Philadelphia. Despite having a million square feet of space (some 680k of which is exhibit space, making it the third largest convention center by exhibition space in the Northeast), the Convention Center has long suffered from the curse of relatively few bookings, an issue that an expansion about a decade ago was supposed to solve (and didn’t). Blame for the sorry state of the Convention Center’s operations was laid squarely at the feet of its laborers, in particular the Carpenters Union, who were summarily evicted from the Convention Center back in 2015.

Over the last couple of years, the Convention Center’s operations have undergone a night-and-day change. From being a large yet sleepy facility that only rarely booked shows large enough to fill the entire space, the Convention Center has become an attractive convention destination with what seems like a revolving door of conventions constantly coming in and going out. Conventions large enough to fill the entire space are not such rare occasions anymore, with two major ones — Lightfair International and BIO — having held back-to-back shows just last spring, and conventions that fill half the available space are now regularly and comfortably hosted (though, still, not two at the same time; this is, however, clearly an issue internal to the building administration).

In other words, the Pennsylvania Convention Center is now a rival to DC’s Walter E. Washington Convention Center and the Boston Convention & Exhibition Center (BCEC) for major conventions in the Northeast. It is also making good use of its newfound muscle, having been able to attract the region’s premier comic-con from Valley Forge some years ago and, more recently, the Natural Products Expo East from Baltimore (where it had outgrown a decidedly undersized convention center), as well as hosting shows like Lightfair which more typically exhibit in ultralarge convention centers like Chicago’s McCormick Place or the Las Vegas Convention Center, two of the country’s largest.

Statement of Need / Purpose

Philly, all of a sudden, is a hot commodity in the events industry, and it’s starting to throw into stark relief the fact that the Pennsylvania Convention Center only has two dedicated major hotels — the Loews and the Marriott, the latter being the only hotel space with a physical connection to the Convention Center proper. This is in contrast with most of its peer convention centers, which are all but ringed with a belt of sizable hotels: the Boston Convention Center, for example, has a Hyatt Regency, Sheraton, and Hilton all directly connected to it, as well as a plethora of other significant hotels within a short walk. Most of the hotels servicing the Pennsylvania Convention Center, by contrast, are relatively small facilities reflective of its history as a fairly sleepy building that could demand Center City hotel space on the rare occasions it filled up for a major convention.

But with the ramping-up of Philadelphia’s events industry over the past 2-3 years or so, Convention Center demand is taking an ever-larger bite of Center City’s hotel space. The local hotel industry is waiting to see how Brook Lenfest’s W/Element project at 15th and Chestnut will affect Center City’s hotel-space demand, but if Convention Center business continues apace, the likely answer is: by barely a blip. Center City’s hotel space, as it currently stands, is essentially large enough to handle either Convention Center business or the business-traveler and tourism business typical of any major downtown, but not both.

Clearly, then, if the expansion of Convention Center business represents a new normal, more hotel space is needed. And, seeing as Center City’s existing hotel space is oriented around Center City needs — not Convention Center needs — the majority of this space will need to be oriented around the Convention Center.

Methodology

With all that in mind, then, let us consider what kind of space is available around the Convention Center. For this project, I have exclusively considered sites that are (a) relatively un(der)developed and (b) offer easy apparent connections to the Convention Center concourse. A lot that faces Race Street wouldn’t be useful, then, because there is no direct connection to the concourse anywhere along the Race Street side, save at Broad — the concourse can be understood as following Broad and Arch streets.

I have tried to favor larger sites in order to yield spaces akin to the Loews or the Marriott, but I recognize this is not always possible. The Marriott, in particular, is quite an extensive facility that sprawls over four buildings and involves three distinct mastheads; most new Convention Center hotels are more likely to be akin in size to the Loews or the main Marriott building.

I have identified some seven distinct development sites, each with its own challenges and opportunities. Let us examine each in turn, beginning at the northwest corner and moving clockwise.


 

I. The Race-Vine Site

 

The Race-Vine site, relative to the Convention Center (bottom right)

The only hotel space directly adjacent to the Convention Center’s 2011 expansion is a relatively small Aloft in the former public utilities building at Broad and Arch, an Aloft that is itself a recent entrant to the city’s hotel market. It is perhaps because of this that the Convention Center’s Broad Street side tends to be underutilized, mainly used by conventions whose footprints lie mostly or exclusively in the 2011 annex. This side is also where the bulk of the significant available space adjacent to the Convention Center lies.

Conveniently, the Race-Vine site lies directly across Race Street from the Convention Center concourse. Direct access is thus easily achieved by means of e.g. a skybridge, or an underground passageway should this not be favored. About half of the site is a surface parking lot, with a very forgettable low building facing Race between Juniper and Watts. It also lies adjacent to Broad Street, one of the city’s main thoroughfares, and to the Race/Vine Broad Street Line station.

The main challenge the Race-Vine site poses is its Broad Street face. Historically, most of the buildings along this stretch of Broad housed medical offices and the like in support of Hahnemann Hospital across the street, but with the hospital’s closure earlier this year, such services are obsolete in this location and will likely be moved elsewhere or liquidated altogether.

Much more significant for the architect are the extant buildings. I am unsure whether any are on any historic register, but I am sure that any proposal that entails the demolition of any of them will likely put them on one in no time flat. Most of Broad is fronted by handsome period commercial midrises; a talented and enterprising architect can reuse these to impressive effect while putting the majority of a convention center hotel’s more space-intensive functions in new-build sections along the large parking lot between Juniper and Watts streets. It would be fairly easy to build a hotel with a thousand keys or more on the Race-Vine site.

II. The Hahnemann Site

[Image: The Hahnemann site, relative to the Convention Center (bottom right)]

Just across Broad from the Race-Vine site, and cater-corner to the Convention Center, lies the Hahnemann Hospital site. This is, in point of fact, the large urban hospital that closed earlier this year; the rumor is that it will make way for luxury condos and/or a hotel, the latter being what we are currently interested in.

 

Hahnemann Hospital is a large, rambling complex of structures arranged in an L-shaped pattern along 15th and Vine streets; a large parking lot occupies its southeastern corner. These buildings, having recently been in use, can be assumed to be structurally sound, and offer extensive opportunities for a variety of uses. In particular, for our interests, a hotel would make sense stretching along the property’s Race Street frontage, which is mostly a parking lot but inclusive of a low-lying building at the far corner of 15th and Race. Such a structure would easily be similar in size to the main Marriott building at 12th and Market.

A fairly significant issue arises when considering connections to the Convention Center across Broad, however. Broad Street is more than just a thoroughfare; the North Broad viewshed is meant to be one of the city’s most iconic (which is a major reason why the Convention Center’s 2011 annex fronts it so monumentally). A skybridge across Broad would be frowned upon, to say the least; instead, a direct connection would have to be built underground, if at all.

This may end up being easy or difficult. The Broad Street Line runs, naturally enough, under Broad Street, but it lies far enough down for its Race-Vine station to have north and south concourses that cross over the tracks. Because the southeast corner of the Hahnemann site abuts the Race-Vine station’s southern concourse, the construction of an all-weather connection between the former and the Convention Center would necessarily entail an enlargement of that concourse; such an enlargement, however, would need to sprawl across Race in order to reach the Convention Center’s Broad Street Atrium, with the attendant worries about public utilities this involves. It would also, in all likelihood, be a prerequisite for any new Convention Center hotel at the northwest corner of Broad and Race to have a weatherproof connection.

That said, outside of this hiccup, the Hahnemann site is perhaps the best available Convention Center-adjacent potential-hotel site, with a potential size sprawling across the entire block and a natural footprint that is mostly parking lot and, by all appearances, relatively easy to build on. Indeed, that’s probably what Paladin “Healthcare” had in mind this whole time.

III. The Reyburn Plaza Site

[Image: The Reyburn site, relative to the Convention Center (top right)]

A block south of the Hahnemann Hospital site is the Municipal Services Building, which sits on a barren, windswept plaza mainly inhabited by giant board-game pieces. This is known as Reyburn Plaza, and at one time, it was the largest open space adjacent to City Hall. Now, however, it is the third wheel of City Hall open spaces, a forgotten relic, by all appearances, of the 1960s.

The opportunity Reyburn offers is evident: it is a large space, adjacent to both the Convention Center and to City Hall, that is completely unbuilt-on. Unfortunately, that’s where the challenges begin.

Of all the sites presented in this list, Reyburn is, without a doubt, the most technically challenging to build on. This is because it sits on top of significant underground infrastructure: the Subway-Surface trolley tunnel and the Center City Commuter Connection railroad tunnel, which abut each other on the south side of the site, and the Broad Street Line, which makes a detour off Broad and around City Hall’s clock tower, one of the largest piles of stones in the world. Securing a solid foundation for a major building here is a significant structural engineering problem.

Add to that that Reyburn Plaza is still technically zoned as open space in the City plan, despite the fact that it is, by all appearances, functionally useless for the task (the original Reyburn Plaza configuration had a bandshell where the Municipal Services Building now stands), and that the Municipal Services Building itself extends well into the potential building site here under the plaza, and you have the kind of infrastructural hot mess that only the most intrepid, well-connected, and deep-pocketed hoteliers would even dare face.

And of course the Reyburn site shares all the problems of getting across Broad Street to the Convention Center’s atrium that the Hahnemann site has!

All that said, for the not-inconsiderable challenge of developing Reyburn, there is considerable reward. This is easily the most iconic unused potential building site in the entire city. A canny developer might be able to get the City to move out of an aging and poorly designed Municipal Services Building in favor of newer, better-designed space elsewhere (Site IV, for example?), consolidating the entire block and producing a confection that can swing more easily than the northern two sites between Center City and Convention Center needs.

In short, the Reyburn site has the highest risk for any enterprising hotel developer — and, potentially, the highest reward.

IV. Site IV

[Image: Site IV, relative to the Convention Center (top)]

Yes. This is literally the most boring site on this list. It’s a City-owned parking lot behind the Criminal Justice Center, across the street from the Convention Center’s Arch Street concourse. It’s not particularly large, but a tall building might be able to squeeze an adequate room count in. The site is so anonymous that even most locals forget there’s a parking lot here.

That said, while Site IV is undersized relative to the other sites presented here, it is, without a doubt, the easiest to get done, especially if one’s idea of “getting it done” amounts to a clone of the Home2 Suites down the block — and it’s also readily zoned for whatever ambition the developer may have, including, perhaps, Brook Lenfest doing a reprise of his W/Element project down at 15th and Chestnut.

V. The Gallery II Site

[Image: The Gallery II site, relative to the Convention Center (left)]

The Gallery (now styled the “Fashion District of Philadelphia”) is a regional shopping mall built in Center City in two phases, the original around 1977 and its annex around 1983. Having been undermaintained for a decade after it was divested to the Pennsylvania Real Estate Investment Trust (PREIT) during the merger between what were then the two largest shopping mall companies, Simon and the Rouse Company, it closed in 2015 for extensive — near-gut — renovations before reopening earlier this year.

It is also notorious in the Philadelphia development community for having been designed to support three high-rises on top of it — one at 9th and Market, one at 10th and Market, and one at 10th and Filbert — none of which have ever been built. High-rise pads that, one may note, are also quite convenient to the Convention Center.

The Gallery II site proposes using both of the Gallery II pads — the ones, that is, along 10th Street — for a new Convention Center hotel.

The upside to this proposal is that the really annoying structural work is already done. The foundations for a pair of 20-story towers already lie embedded in the shopping mall, which makes it easy to achieve room counts. Unfortunately, the downside is that this is perhaps the most spatially constrained of all the proposals: short of capping the atrium, something I would imagine would be anathema to PREIT, there is little (if any) space for providing amenities like a large ballroom. Add to that the fact that attempting a physical connection between the Gallery II caps and the Convention Center would involve snaking a structure over Jefferson Station and probably leasing out the relevant floor in Jefferson Tower, and we have a fairly tricky — but not impossible — potential hotel site to work with.

VI. The Greyhound Station Site

[Image: The Greyhound station site, relative to the Convention Center (left)]

A Greyhound bus station lies immediately north of the Gallery, adjoining a parking garage with a Hilton Garden Inn on top. The hotel is actually a temporary facility: the structure was flung up in anticipation of the 2000 Republican National Convention nearly twenty years ago.

Is this site viable? With luck, Greyhound will move to 30th Street when a new intercity bus hub gets built there as part of Philly District 30, and once Greyhound moves, this site becomes not just viable but quite attractive.

The main issue the Greyhound Station site would need to contend with is the large Gallery parking garage. Ideally, the redevelopment of this site would shift the garage east, onto the former bus station parcel; a hotel in this space could thus have a direct (skybridge) connection to the Convention Center Concourse by 11th and Arch, a hotel tower adjacent to the Convention Center between Filbert and Arch, and events space (as well as more hotel towers) atop the parking garage, accessible by a skybridge over 11th, most likely from the hotel’s main tower’s main amenity deck.

This proposal could well also involve an expansion of the Reading Terminal Market, one of the country’s biggest foodie destinations, on its ground floor, in an area now occupied more-or-less exclusively by parking ramps.

As with the Gallery II proposal, though, the major weakness of the Greyhound station site is the fragmentary nature of the parcels. A redevelopment involving the Gallery garage will likely involve its replacement and the daylighting of an endarkened stretch of 11th Street and hence the construction of multiple interlinked buildings on multiple parcels.

VII. The Combined Site

[Image: The combined site, relative to the Convention Center (left)]

Exactly what it says on the tin — a site that combines the Gallery II and Greyhound Station sites above! Such a proposal ameliorates the major weakness of the Gallery II site (the lack of suitable space for e.g. a ballroom) by essentially envisioning the Gallery II tower pads as an annex of the Greyhound Station site. This in turn yields, across a complex system of parcels, enough space for a hotel with a room count to rival the Marriott’s campus on the other side of the Convention Center, with at least three major towers available across the 11th Street parcel and the two Gallery pads alone, and two more on top of the replacement parking garage inherent to the Greyhound station site proposal.

There’s an impishness to this site not found in any of the other proposed convention center hotel sites: a sense of a unified campus leaping from one underutilized space to another in one of the country’s densest downtowns, a building at play several stories in the air with only a tangential relationship with the ground level. At a certain level, the combined site is the most aesthetically pleasing due to this relationship to the cityscape below. It is ungrounded, light, part of the city yet simultaneously apart from the city.

Thus, the Combined site is perhaps optimal for a hotelier wishing to install a hotel campus with multiple mastheads the equal of Marriott’s at 13th and Filbert: one can easily imagine two, and perhaps even three, mastheads in a single interconnected facility here, one that is in the main relatively easy to construct due to the extant structural cores embedded in the Gallery and the likelihood Greyhound will move in the near future.


Concluding Remarks

I hope these seven hotel sites have given some hopeful developer ideas about how to approach the problem of constructing new convention center hotel-type space adjacent to the Pennsylvania Convention Center, a facility whose business has radically expanded in the past few years and which is now starting to push the limits of what Center City’s current hotel inventory can do.

Employees of event-services firms (e.g. Freeman, GES) working within the Convention Center generally estimate that the facility needs a whopping 10,000 (!!) more hotel rooms just to meet existing demand, much less any further growth in the facility — that would be equivalent to some ten more Marriotts around the building. Here I have identified space for perhaps up to six. Let’s get building!

Google Maps file found here.

The Fallout

In an utterly shocking* turn of events that no one could have possibly predicted,* the least competent man to ever hold the office of President has managed to fail completely at legislating, attempted to transform an executive order into a king’s writ, lashed out at the judiciary when that failed, and sunk into a quagmire where there is now not insubstantial (albeit still circumstantial) evidence that the President of the United States is, in fact, a traitor. All in less than a month.

This is not normal. Not only is this not normal, this is an entirely new degree of paralysis, as every day brings some new nightmare to deal with from DC. Whether or not you agree with the Republicans’ legislative agenda, legislation is not happening because everyone — Democrat and Republican alike — is scrambling from crisis to crisis, from scandal to scandal.

It is difficult to see how Trump can last four years. His relationship with the press is adversarial, proving that Trump never learned not to piss off whosoever wields the pen. The intelligence community is already in revolt against him; Democrats are poised to make huge House gains in 2018, likely on a platform of impeachment; Congressional Republicans are, at this point, barely holding the floodwaters bearing the foul swamp miasma of corruption back from engulfing them all. All in less than a month.

The question now is what will happen.

The Republican Dilemma

Right now, House Republicans are in a bit of a pickle. Trump has lost so much credibility so fast that there are already voices suggesting impeachment. But he still has a small cadre of loyal supporters, and this cadre usually controls who wins House district primaries.

This is untenable. What this means is that, by not acting, House Republicans — especially in the more suburban districts — risk losing the general to a groundswell of anti-Trump support. But to act would mean alienating the Trumpet base, who would swiftly and mercilessly primary them. Their calculation, perhaps, is that the best window for removal is the six-month stretch between the primaries and the general election in the middle of 2018; the hope would then be that, having secured their primary nominations, they can defang their opponents. It would be shrewd.

It would also involve waiting through roughly 500 more days (and may I remind you, we’ve gone from “inauguration” to “probably a traitor” in 20-some-odd days) of utter insanity before it happened. And in any event, leaders such as Ryan and Chaffetz seem to have decided the best path forward is party before country, letting the White House’s ethics quagmire fester.

It’s hard to see a path forward for House Republicans. Their gerrymander is strong — they may be trusting in it — but public lividness at Trump’s unpresidential shenanigans is also strong, stronger possibly than in Katrina’s aftermath, when Democrats took control of Congress and lame-ducked W.

The Senate is different. There aren’t many Republican seats up in the Senate in 2018 (but plenty of Democratic ones), meaning that by the time most Republican Senators have to campaign again, the Trump scar will be a distant memory, already receding into the domain of history books and language, where “Donald Trump” will likely replace “Benedict Arnold” as a byword for cold treachery.

The dichotomy can be seen in real time. The Senate has already moved to begin investigating Trump’s Russian connections (although they have not yet appointed a special prosecutor), while Chaffetz is moving investigations on anything associated with the White House forward at the slowest pace he reckons he can get away with. Chaffetz, I may add, is a Utah Republican with a very high risk of getting primaried by someone who’s more willing to impeach.

Sooner or later, something’s got to give. Trump will have a short Presidency and leave, at minimum, in disgrace. The questions are: how short? what deal will ensure his removal? and what will happen after?

The Trumpian Constitutional Crisis

While the man is a walking constitutional crisis, pretty much constantly in violation of the emoluments clause, his Russian entanglements notwithstanding, perhaps the biggest Constitutional crisis of all will happen once he leaves. And that is the crisis of: how do we prevent this from ever happening again?

Whether it’s Pence or Ryan or Pelosi (heaven help us if he lasts that long) who replaces him, this will be the very first item on the 46th President’s legislative agenda. The new Secretary of State will, of course, be tasked with fixing the international damage the Trump administration caused, but on the domestic agenda, everything else comes second to locking madmen out of the White House forevermore.

Make no mistake, this will be a Constitutional crisis. Among other things, we can already see that:

  • Trump should never have gotten to a position where he could be nominated as President;
  • Impeachment may not be a strong enough tool for dealing with executive treason; and
  • Secondary methods of Presidential removal may also need to exist.

A New Amendment

Dealing with the first is pretty clear, and should be bipartisan. It’s the territory of a Constitutional amendment, and one that can be worded with one or two unambiguous sentences. Something like

Amendment 28. The President of the United States must have previously held at least one elected office, at the state or federal level, prior to running for President.

An Amendment so simply worded, in the immediate wake of such an unambiguous disaster as Trump, should pass the 2/3rds majorities and reach the 3/4 ratification mark within a single legislative session. This would be, after all, little more than actual politicians ensuring that an actual politician gets the highest political office in the land.

Dealing with an Incompetent President, or a Vegetable One

The second and third issues are of opposite import. The second is meant to ensure that a made double agent in elected office (including the Presidency) can be removed in a nonpartisan way, with a minimum of fuss; the third, a form of “no confidence” removal if the President becomes demonstrably unfit for office, also through nonpartisan processes.

I recently saw a proposal for dealing with the third that I quite like: impaneling living former Presidents [who have, in light of Trump, served full terms]** to determine if the current President is fit to serve, should a crisis of personal faculties arise. This should be an inherently bipartisan body, meaning that any decision they agree to should be above partisan politics. Because the body will always be small, and the decision being made is of grave import to the nation, I would also add that the decision to remove must be unanimous.

Of course, this also leaves open the question of who can call such a panel to convene. I would personally give this tool to the states, where a simple majority of state governors voting to convene the panel acts as a vote of no confidence, signaling that the possibility of removal needs to be investigated. (And of course, the unanimity requirement functions as a check, such that state governors can’t abuse the tool to simply get rid of a President of the opposite party.)

Handling Elected Traitors

This leaves the last issue to resolve: clearly, leaving Presidential treason, like other high crimes and misdemeanors, to an inherently political process such as impeachment is not enough. Treason is not like perjury or even conspiracy. Both Richard Nixon and Bill Clinton always had the nation’s best interests at heart, even if their methods were suspect.

By contrast, treason is a betrayal of public trust, using the trust so granted to advance some other sovereign state’s best interest. It isn’t just putting your own best interests over your nation’s, as Trump does every time he flagrantly violates the emoluments clause; it’s using entrusted information to advance somebody else’s agenda (as Flynn did with Russia).

An elected official committing treason, then, is not just betraying his party; he’s betraying his position and his country. Impeachment is inherently a political tool, and the latter two betrayals transcend politics altogether and need to be handled as such. The judiciary, then, must be the one handed the tool of removal over treason.

Being accused of treason would be no different than needing to stand trial over murder or fraud. But there are two added wrinkles: (1) an elected official standing trial for treason would do so before the highest court matching his jurisdiction (a state governor before his state’s Supreme Court; a Congressman or the President before the Supreme Court of the United States), and (2) a treason conviction entails not only criminal punishment for the elected official but also the removal of his entire staff, with the rules for choosing a successor adjusted accordingly. That is, the Speaker of the House, not the Vice President, automatically becomes President if the previous President is removed via treason conviction.

(The idea here is that if treason occurred at the very top, then the whole staff is implicated in aiding and abetting it. Trying to figure out who knew what would take far too long and would almost certainly delay the VP’s confirmation, leading to a vacant Presidency until the whole legal nightmare gets sorted out.)

Other Procedural Issues

These three issues look to be the biggies coming out of the Trump debacle. The President needs to be qualified to hold the office in some way, and there need to be more ways to remove a sitting President should the worst come to pass. My take on the latter two is that the states can be entrusted with a secondary, broad-ranging removal process, and that the judiciary needs to be entrusted with a secondary, narrowly focused removal process, one triggered by one crime and one crime only, because that crime is too severe to let people play political football with.

These are, of course, not the only procedural issues people have pointed out. The Electoral College has clearly been subverted in terms of purpose. Gerrymandering has institutionalized minority rule in the House. A certain successful former state governor can’t run for President because he wasn’t born in the US. The two-party system as a whole is failing to provide meaningful political discourse and coalition-building between the whole panoply of ideologies, left to right.

While the Trump administration’s fallout will most certainly precipitate at least one Constitutional amendment and a broader Constitutional crisis, I’m not holding my breath on how much of it will be addressed. Part of the robustness of our system is that there are many avenues to effecting lasting change, such that if one is gummed up or refusing to do its job, there is another. Both gerrymandering and the Electoral College can be resolved through processes other than Constitutional amendment.

Unfortunately, it does not seem Mr. Schwarzenegger will ever get his (richly deserved) chance to run for President. Nor does it appear we will see a third party rise in our lifetimes, short of one party or the other collapsing. Maybe some of the dreams we dare to dream lie too deep.


* Sarcasm.

** I am adding this section.

Triple-Deckers’ Murky Origins

The Boston triple-decker is perhaps the most New England housing type of them all. A simple wooden flat construction, the triple-decker provides comfortable and reasonably private housing accommodation for three families on two lots. While others, such as Old Urbanist’s Charlie Gardner, have pointed out some of the triple-deckers’ limitations, they are inarguably the solution Victorian Boston either wanted or needed.

Yet they are also incredibly murky. They spring up, as if out of nowhere, in a region whose previous architectural vernacular was vastly different, with no clear origin. They are different enough from the only other wooden buildings in New England — farmhouses and Maritime rowhomes — that they clearly spring from an entirely different tradition. In terms of time and place, triple-deckers are, for all intents and purposes, naturalized immigrants.

Where did they come from?

In a previous post, I explored wooden residential vernaculars in the United States, itself a strangely murky topic, and came to the conclusion they developed in the Mohawk Valley, from there migrated into the Lower Lakes region, and then were disseminated nationwide through the development of ideas such as mail-order and tract housing. I also suggested that the New England triple-decker was a branch of this tradition. I want now to explore why I came to this conclusion.

A City of Brick and Wood

Boston is a bit schizo, in terms of residential architecture. Where Mid-Atlantic cities have a tight, well-defined brick rowhome vernacular, and New York has its blocky vernacular that can be purposed to rowhomes or apartments, Boston has two competing — almost clashing — vernaculars: a brick rowhome, clearly developed from the British style, and the wooden triple-deckers, as different from them as green-skin space babes.

Look closer and we can see some patterns that tell us how and why this may have come to be.

Boston’s oldest intact neighborhoods, Beacon Hill and the North End, feature charming British rowhomes that would not look out of place in the oldest parts of Mid-Atlantic cities — or British burgs like Bath or Bristol. However, like other such core neighborhoods, these would have begun to fall in esteem in the mid 19th century.

Part of what egged that on would have doubtless been the construction of Back Bay and the South End, fens surrounding the Boston peninsula’s neck that were drained, filled in, and turned into stately brick rowhomes, real estate projects that, for all intents and purposes, tripled the city’s size. These parts of town quickly became the wealthy’s preferred neighborhoods, a distinction they wear to this day.

The triple-decker, by contrast, does not encroach closer to the city center than the South Side, on the other side of a rail approach. They have no clear relationship with the stately brick vernacular Boston’s elite favored; they are interspersed with some cramped Maritime wooden rowhomes of the sort Boston’s period suburbs (e.g. Cambridge) favored, which only serves to highlight how utterly unlike them the three-flats are; they even give way to masonry where people with means wanted their own Brahmin-esque rowhomes. All of this is to say: the triple-decker is a housing solution that was adopted quickly and widely, as if out of nowhere, and even at the time of its adoption it was clearly meant to cater to the working class.

It makes a lot of sense that workingmen might favor triple-deckers, particularly in a society where homeownership wasn’t as important as it would become in the 20th century. Maritime rowhomes are not unlike Philadelphia trinities or Manayunk rowhomes — small and cramped on the inside. By contrast, a triple-decker’s flat, even though it would have had roughly the same net amount of space, would have felt open and airy, more spacious and gracious. Boston’s builders could cram one more family into the same space that two Maritime rowhomes would have taken up, while at the same time upcharging workers for the privilege. It would have felt like an all-around win-win.

But this only tells us why the triple-decker would be rapidly adopted in the Victorian era. It tells us nothing about where it came from. Indeed, we can see from this analysis that the reason three-flats were so popular was that they were such a radical departure from the region’s pre-existing rowhome vernaculars … something that only further highlights the style’s immigrant nature.

So Who Else Did Flats?

Flats weren’t popular in colonial British cities. We can see this by looking at the three great groups of colonial architectural vernaculars — New England, the Mid-Atlantic, and (what remains of) Tidewater. In each of these places, different as they are (the organic New England street systems, the tidy Baroque Tidewater parade-ways, the endlessly utilitarian Mid-Atlantic grids), the same type of subdivision plan dominates throughout: narrow, deep lots, and houses optimized to fit them. Workingmen usually lived in houses that were single rooms stacked on top of each other, undoubtedly cramped and uncomfortable in an era of large families. As fire concerns crisscrossed the continent, major cities increasingly required brick, resulting in the antebellum living arrangements so well preserved in Philadelphia and Boston.

There were two major colonies that did do flats, however. One was New France’s core, up by the St. Lawrence, which would later become part of Canada; we can see widespread use of brick flats throughout Montréal in a form that, at street level, looks and feels like rowhomes. The other was the Low Countries’ successful colony around the Hudson River fjord and the terminal-moraine outcrop sprawling into the sea, one of the main conduits for furs from the expanding Iroquoian empire to Europe. England would later acquire this colony and rename it, but its Dutch heritage remained strong.

One way it did so was with the use of potash in cooking, from which modern quick breads and cookies developed. Another was its flat-tolerant vernacular.

The walk-up flat is simultaneously a new and ancient building type. Large apartment buildings were known in Rome, for example, but largely fell out of favor during the medieval era. Indeed, rowhomes are built on a medieval model of housing: a tiny plot of land, where the family shop would be located on the ground floor and the family’s living quarters above. In larger and denser European cities, owners would build extra working space and rent it out; eventually, owners catering to wealthier renters would dispense with the workspace and simply provide a structure subdivided into distinct living spaces.

Modern flats as such probably originated sometime during the 17th or 18th centuries on the continent: the very different way that Europe would approach flat architecture compared to North America suggests that the technology was still in its infancy when Britain came into ownership of North America’s other Continental colonies. But they were also latecomers to Britain, and (this is important) had also spread to New France and New Netherlands before the British took them over. This explains why the native British colonies did not have flats, nor did New Sweden, but Québec and New York do.

Midwestern Interlude

“Residential Vernaculars” mainly explored different modes of urbanization associated with different (Northern) crossings of the Alleghenies. Pittsburgh, Cincinnati, and St. Louis are clearly rowhome cities; Buffalo, Cleveland, and Chicago just as clearly … aren’t.

In fact, Chicago is also interesting for our discussion here, as it is home to three-flats. We have a fair grasp of their derivation in the area — the Great Chicago Fire would have resulted in masonry requirements for larger residential structures, and the three-flat appears to have already been a common multifamily variant of the balloon style common in the Northern Lakes.

New England triple-deckers and Chicago three-flats have a lot in common, actually. Both are fully-detached walk-up triplexes — a solution not found in European flats … or Montréalais plexes … or New York apartments … or, for that matter, anywhere else outside the US. And while the only thing we can say for sure about the triple-decker’s origin is that it was clearly not in New England, the three-flat is closely tied to the Lower Lakes vernacular as a masonry take on that region’s balloon-frame multifamily variant (and not the only one, at that). If we can figure out where the Lower Lakes vernacular developed, then, it may well be that triple-deckers share the same place of origin.

Canal Cities

My thesis is that we can — and that we can see where.

It is the early 19th century, and New York is falling behind at opening up its frontier. Philadelphia is linked with much of Appalachia and the Ohio Valley by road by this time; New York has, until recently, been blocked from doing the same by Iroquoian strength in the Mohawk Valley. (It’s worth pointing out here that the Pittsburgh region was part of the British frontier even prior to the Seven Years’ War; the same was not true of the Buffalo or Cleveland areas.) With the diminution of Iroquoian power, the Mohawk Valley was opened to development, and a water connection between the Hudson and Lake Erie was completely feasible in a way that one between the Potomac or Susquehanna and Ohio was not.

This led to the construction of the Erie Canal, linking New York with the freshwater sea, a geographical advantage that Philadelphia and Baltimore would be hard-pressed to counter. When the Erie Canal began construction, most of the populations of Ohio, Indiana, and Illinois lay around the Ohio Valley. (We can see evidence of this by noting that Michigan was admitted to the Union some twenty years after even Illinois, suggesting it remained sparsely populated for some time relative to its neighbors.) But with the easy link between New York and the Great Lakes, it could make up for lost time through superior transportation — and even potentially edge out Pennsylvanian influence in what was, at the time, the western frontier.

So we can see that the economic forces at work in the Mohawk Valley were clearly New York’s. Montréal has a better path into the Great Lakes and would have had its own Canadian issues to deal with. We also have a traceable path for the Lower Lakes vernacular back towards the Mohawk Valley area, just as there is a traceable path for early Ohio Valley architecture all the way back to Philadelphia. We can, however, note with some consternation that this path only goes back to the Mohawk Valley, with known social — but no physical — connections to New York.

Or aren’t there? One of the major features of the New Yorker brownstone is that it has single-family and multi-family configurations, whereas the multi-family configuration was only later introduced to the Mid-Atlantic rowhome and was probably alien to Boston rowhomes until the 20th century, long after the triple-deckers’ rise. What do we see with the Lower Lakes’ balloon frames? Single- and multi-family configurations. In fact, these two configurations exist side-by-side in upstate New York’s canalside cities: Rome, Utica, Syracuse, Rochester. It would appear, then, that builders in the Erie Canal area had a general sense of house-ness that came from the city.

This gives rise to the next question. Why detached structures? After all, no major urban vernacular in the ca. 1820 United States used detached structures. And using detached structures in what was then, as now, the snowiest part of the country doesn’t really make sense when one generally seeks to share warmth in wintertime.

Because the Lower Lakes vernacular is unrelated to any colonial vernacular at first glance, and reveals its deeper relationship to the New York vernacular only on closer examination, the answer is surely something that must have been in the air upstate in the 1820s. One possibility is that they were patterned after Iroquois longhouses; another, local farmhouses. However, the snowiness (upstate New York is among the world’s snowiest places) — and the fact that the balloon-frame vernacular’s earliest known realization was the gablefront house — points to another possibility: the gables kept snow from piling up on rooftops, and builders were forced to add side yards to give that snow somewhere to collect. In this way, the mechanics of snow solve the mystery surrounding the detached balloon-frame’s rise.

When we explore upstate’s older canalside cities, we can now read them like a book. Wood was a preferred building material because the more skilled craftsmen, the ones with masonry experience, were working on the canal. Detaching the houses and adding gables was needed to deal with the copious snow Lake Erie sends into the region every winter. And single-family houses and small flats were intermixed in the way the builders knew from back home.

Later, even as the skilled masons were freed up to work on other projects, the habituation to wooden dwellings — much cheaper and faster to build than masonry ones — led to their explosive growth across the regions newly accessible from the Erie Canal: Buffalo, Erie, Cleveland, Toledo, Detroit, Chicago, Milwaukee, and eventually across the Plains and into the West, where they easily outcompeted the older, more conservative Ohio Valley vernacular. Needing a value-added proposition, masons turned to ever-more-opulent commercial and public architecture, and masonry residential construction was only reinstated in Midwestern cities after fires ravaged their first phases (fires which had, by then, become rarer thanks to better public services). And — for the purposes of this discussion — one also notes that the Mohawk Valley, where this vernacular first arose, is, conveniently, just west of New England.

The House Also Migrates

Rapid industrialization was problematic for British and British-derived vernaculars. British vernaculars show an incredible aversion to multifamily housing, resulting in patently awful working-class solutions like the back-to-back; trinities and Maritime rowhomes of the era were not much better. Because New York was much less averse to the flat, it was able to provide a roomier alternative (at least, until New York tenements, too, became overcrowded).

However, for western New England, the combination of improved living conditions and cheap construction that the balloon-frame Mohawk Valley flats offered made for a much better working-class housing solution than anything else in the area. These triplexes, increasingly disassociated from their single-family gablefront cousins, saw their roofs flattened (New England winters are much less severe in the snow department) and came to be constructed in neighborhoods consisting almost entirely of them.

Springfield, Massachusetts, is western New England’s largest city; its vernacular (or what remains of it) is also largely a gable-for-gable duplicate of that found across the Berkshires; the same is also true of Pittsfield, Holyoke, and even Worcester. Indeed, one could be forgiven for wondering whether Boston ever developed secondary cities the way Philadelphia did!

Conclusion

So by the mid-19th century, the style of housing first devised in the Mohawk Valley had expanded in just about every direction, including into New England and practically right up to Boston’s doorstep. The last piece of the puzzle now falls into place: Boston’s builders of the generation immediately after Back Bay’s developers simply took the multifamily style they saw in nearby Worcester and built it back in Boston. From a hearth several hundred miles away, from New Yorker ideas executed in wood and optimized for snow, Boston builders picked a low-hanging fruit that they integrated — in the same schizoid way that Lower Lakers integrated every classical style under the sun into their commercial architecture — into their own rowhome vernacular, a vernacular that their own city region’s inland cities had been loath to develop.

The triple-decker is indeed an immigrant in New England. It is especially so in Boston. Its origins lie in an altogether different vernacular tradition, and its adoption by Bostonians, to the point that they have made it their own, reminds us all that, while the United States has many architectural vernaculars, willingness to solve a practical problem with solutions from a different idea set trumps local loyalty in the vast majority of the country.

But it also cautions us against running with new solutions at the expense of our own traditions. Boston builders didn’t just wholeheartedly adopt the triple-decker; by the turn of the century, it — and the rest of the Lower Lakes residential package — had utterly displaced almost all know-how for developing the antebellum Boston vernacular, that same vernacular whose last hurrah was in Back Bay and the South End.

Football and the NFL

A Beautiful Game

The game of (American) football may be one of the most inscrutable popular pastimes ever devised. Unlike other games, such as baseball or cricket, which test athletes’ finesse and timing, or basketball or (association) football, which are mainly contests of stamina, American football is subject to chaos like few other sports. In some ways, it’s the purest real-world realization of the concept behind J.K. Rowling’s wizard’s chess.

For football teams playing at a high level, each play is a match of wits between the offensive and defensive coordinators. Both rely on schemes designed to create and take advantage of mismatches, and for both — this is important — the scheme has to be developed around the available talent. (This is of course true in any sport, but even more so in a sport as complex as football, where, say, a single blown blocking assignment results in a sack.)

This is not to say the sport is easy on its players. In fact, part of its draw is its strange combination of finesse and brutality, of beautifully executed plays like deep throws contrasted with setbacks like sacks. It is, in essence, life in 60 minutes on a field.

And a huge part of that is the need to cooperate in football. In most goal sports — like basketball or association football or hockey — giving the ball (or puck) to the most athletically gifted talent on your team is usually a good way to win games. The Lakers were the most dominant team of the early 2000s because they had Kobe Bryant. (Traitor.) The Bulls were the mid-90s’ most dominant team because they had Michael Jordan. Wherever Wilt the Stilt went, his team was dominant. And so on.

Quarterbacks — football teams’ offensive leaders — are, by contrast, not necessarily the most athletically dominant person on the field. In fact, player roles are so varied that it’s hard to say who, exactly, the most athletically dominant person on the field is. Players like Brian Dawkins or Warren Sapp, who set themselves apart by their athletic dominance even for their positions, are at least as rare to come by as their counterparts in other professional sports. Instead, quarterbacks exert leadership by being intellectually dominant — the most skilled person on the field.

The best quarterbacks have to absorb, analyze, evaluate, and act on a tremendous amount of information, all in a jaw-droppingly short time. They have to communicate the play they’re supposed to be running from their coaches to their teammates. They have to read the opposing defense and adjust as they see fit. Sometimes, they’ll even change the play at the line of scrimmage — Peyton Manning excelled at this kind of cerebral quarterbacking. And they have to do all of this in the half-minute or so allotted between plays.

Stereotypes aside, it’s not at all surprising that football is becoming an increasingly international sport. For all that soccer styles itself the beautiful game, there is something truly beautiful in the way a football game is played — something truly beautiful in the way, on any given game day, an athletically inferior team can dominate an athletically superior one through smart coaching and smart play.

Outside the US (and Canada)

The NFL is largely saturated in its core markets. Theoretically, any 2-million-man metropolis can support an NFL team, and most of them have one. The only place the NFL can go, therefore, to expand its product and its brand is out of the US.

Canada has the CFL. There was a time, a while ago, when the CFL ran an American division that largely concentrated on those media markets the NFL ignores — cities like Memphis, Salt Lake, and Las Vegas. So, if not Canada, where else?

The answer has, increasingly, been London. It seems like two minor NFL teams play in London any given week. Wembley is regularly sold out for these affairs. (They’ve also been looking at Mexico City.)

The problem with this, however, is that — it’s London. There’s a five-hour time difference between there and anything on the East Coast, a significant logistical hurdle. Mexico City represents a natural place for the NFL to begin franchising because games between “Aztecs” and NFC/AFC West teams on a regular schedule are feasible. The solution to this quandary is almost certainly a British league of some kind.

A “BFL”

British athletics have long operated under promotion and relegation — good teams rise to the top while bad ones sink down. It’s an effective system for managing parity (for the most part) while allowing managers to dream title dreams.

This is, in all likelihood, unworkable for a British American football league, however. There are few cities that can profitably support such a team to begin with; a deeper problem is that the support infrastructure (layers and layers of progressively more minor leagues) isn’t remotely as extensive for American football.

In fact, there are just three conurbations with more than two million people in the British Isles: London, Manchester, and Birmingham. If we’re generous and ask about urban areas with more than one million, there are just three more: Dublin (actually just shy of 2m), Leeds — yes, Leeds — and Glasgow. That’s six cities.

Let’s take a look at the other end. An eight-team league would have optimal scheduling: the league is split into two 4-team divisions, with each team playing its division rivals twice (six games), the other division once (four games), and two games against a rotating NFL division, for a 12-game schedule. The division winners would then play each other for the championship.

So we can put two teams in London — of opposite divisions, of course — and then one each in Manchester, Birmingham, Dublin, Leeds, Glasgow, and … somewhere else. (Liverpool? Belfast?)

There is a subtle beauty in this system. First of all, eight teams is probably the smallest league you can field while maintaining competitiveness (at least, in American football). Second, you guarantee that each team plays an NFL team twice at home each season. This serves two roles: it gives every BFL team two guaranteed sellouts every season (c’mon, a mediocre AFC South divisional game sold Wembley out this year), and it lends a degree of legitimacy to the expansion teams (because they are given the opportunity to win against NFL teams). It’s an excellent setup for converting known intermittent popularity into permanent new fanbases.
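Since the slate is easy to mis-count, here is a minimal sketch of the 12-game math under the assumptions above: six division games, four cross-division games, and the two NFL home dates just described. All of the team names, the sixth-city choice, and the NFL opponents are placeholders for illustration, not a proposal.

```python
from itertools import combinations

# Hypothetical team names; the sixth city and the NFL division are placeholders.
NORTH = ["London A", "Manchester", "Leeds", "Glasgow"]
SOUTH = ["London B", "Birmingham", "Dublin", "Liverpool"]
NFL_DIVISION = ["NFL Team 1", "NFL Team 2", "NFL Team 3", "NFL Team 4"]


def division_games(division):
    """Every pair of division rivals meets twice, once at each site (6 games per team)."""
    games = []
    for a, b in combinations(division, 2):
        games += [(a, b), (b, a)]  # (home, away)
    return games


def cross_division_games(div_a, div_b):
    """Each team plays every team in the other division once (4 games per team)."""
    games = []
    for i, a in enumerate(div_a):
        for j, b in enumerate(div_b):
            games.append((a, b) if (i + j) % 2 == 0 else (b, a))  # split home sites
    return games


def nfl_home_games(teams, nfl_division):
    """Two guaranteed home dates per BFL team against a rotating NFL division (2 games per team)."""
    games = []
    for i, team in enumerate(teams):
        games.append((team, nfl_division[i % len(nfl_division)]))
        games.append((team, nfl_division[(i + 1) % len(nfl_division)]))
    return games


schedule = (
    division_games(NORTH)
    + division_games(SOUTH)
    + cross_division_games(NORTH, SOUTH)
    + nfl_home_games(NORTH + SOUTH, NFL_DIVISION)
)

# Sanity check: 6 + 4 + 2 = 12 games per BFL team.
per_team = {t: 0 for t in NORTH + SOUTH}
for home, away in schedule:
    for t in (home, away):
        if t in per_team:
            per_team[t] += 1
print(per_team)  # every team should show 12
```

Nothing about this sketch is load-bearing; it just confirms that the slate adds up to 12 games per team and that every NFL date lands in Britain.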

It’s also an expandable system. Is the BFL entrenched and profitable? Perfect, let’s launch the same program in France — Italy — the Iberian peninsula — greater Germany — and so on. Something similar can be applied in Latin America and the Far East. Over time, the Super Bowl simply becomes the oldest of a set of regional championships and a dedicated world championship is needed.

But the thing is — whatever your opinions about the game — as a business, the NFL needs to expand its markets, sustainably. And that means figuring out how to develop secondary leagues abroad. It’s already a continental-scale league as things stand.

Switch Thoughts

Last week, Nintendo announced their next-generation console: the Switch.

Nintendo is in an intriguing position in the console wars — technically, the Wii U was the first console of the current generation, which makes the Switch the last console of its generation. By having two consoles out in a single generation, Nintendo now has a clear innovation edge on its competitors. The Switch will have to compete with the PS4 and Xbox One for, most likely, its entire run.

Like the Wii, though, the Switch is something different. Sony and Microsoft consoles are little-changed from the strategy that won them success in the late 1990s and early 2000s: being little more than stripped-down gaming towers. But the Switch is a bipartite system with a console component and a mobile component. This alone makes its competitors look dated, if not outright obsolete.

The core of the system is a thin tablet. Augmenting that are four key peripherals: (1) the dock, which functions as a hybrid charging port/TV data transmitter (probably with 720p-1080p upscaling), (2) left and (3) right “Joy-Con” controllers, and (4) a Joy-Con grip. (A fifth peripheral is a Pro Controller that looks visually identical to the ergonomic Xbox controller layout.)

After the primary tablet unit, the Joy-Cons are the Switch’s second most arresting feature. They can be slotted into the dummy grip for console play, or into either side of the Switch itself to play like a classic mobile gaming system. They can also be used independently, like the Wii’s motion-based control layout, or even be split into two controllers for local multiplayer. This gives the basic system unparalleled versatility, natively supporting every gameplay style any Nintendo game has ever used.

Except for one. The Switch doesn’t seem to currently support DS-like gameplay.

The Switch’s Potential

My goal here, however, is to suggest a potential design philosophy behind the Switch. Obviously, the semi-mobile platform makes traditional console gaming obsolete. It implies that the next video game generation will see mergers of the Xbox and Surface, and of the Playstation and Xperia, as the most effective way to compete with the Switch and its derivatives. That is: the Switch is leading the way in a tablet-console merger.

Here we must ask what the Switch will run on. Initiating the merger is one thing; following through, quite another. Nintendo must be well aware of the kind of mergers the Switch will precipitate — PC and Xbox games will merge, and Sony’s Xperia tablet line will by necessity run Playstation games. A video game system that looks like a tablet is different from a tablet system that plays video games, and Nintendo’s competitors will be able to offer the latter. What about Nintendo?

A huge part of this will hinge on the OS. While Android is the dominant smartphone OS, the tablet game is a 3-way race between it, iOS, and Windows. And Nintendo has little brand recognition as a generalized tech company the way Apple does. That is: a custom OS essentially locks the Switch (and its successors) into a video game system that looks like a tablet, but an Android-based OS makes it a tablet that plays video games — a critical competitive edge once the innovation’s worn off.

The reason is: running Android unlocks a lot of doors with relatively limited downside. With it, the Switch automatically comes with full access to Google Play and its wealth of apps. Without it, Nintendo must either develop substitutes in-house or admit that, at the end of the day, the Switch is fundamentally a toy. With it, your Switch becomes the only tablet you ever need carry with you. Without it, it’s sharing space with your favorite Windows/iPad/Droid tablet.

Yes, running Droid raises the specter of easily-ported games. But this can be overcome with a custom peripheral that the games themselves are loaded on to — is this the reason behind the cartridge’s return? But consider this: Porting games is essentially a rewriting job. For the last three generations or so, Nintendo has lagged in the porting game because of its often-inferior specs, a deal-breaker in a market where porting a game is expensive.

Running the Switch on Android makes porting games cheap. Not in this generation, but the next, when the Playstation and Xperia are likely to merge. A third-party title written for the Switch can have its core built around a generalized Android release, with extra features for the Switch’s unique capabilities. Switch games become, in this environment, Android games with extra features. And, if Playstation games soon follow, this leaves the Xbox at a tremendous disadvantage: while it may be cheap to port releases for Nintendo and Sony (remember, they’re the same core for the same OS in the same languages, just with slightly different specs, storage media, and peripherals in mind), it’ll be tremendously expensive to do so for Xbox (same core on different OSes in different languages for similar specs, storage media, and peripherals).
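To make that claim concrete, here is a minimal sketch (in Python purely for brevity) of the "shared core plus thin platform layer" structure the argument assumes. Every class and method name here is hypothetical; nothing corresponds to a real Nintendo, Google, or Sony API.

```python
# Minimal sketch, not a real API: platform-agnostic game logic plus a thin,
# swappable platform layer. All names below are hypothetical.

class GameCore:
    """The part of a game that would ship unchanged on every Android target."""

    def __init__(self):
        self.score = 0

    def update(self, dt, inputs):
        # Physics, AI, scoring, etc. would live here; this stub just tallies input.
        self.score += inputs.get("points", 0)


class PlatformLayer:
    """Baseline layer for a generic Android tablet: touch input, no extras."""

    def read_inputs(self):
        return {"points": 1}

    def apply_extras(self, core):
        pass  # nothing special on a plain tablet


class SwitchLayer(PlatformLayer):
    """Hypothetical Switch-specific layer: split Joy-Con input, docked-mode assets."""

    def read_inputs(self):
        # Pretend the split Joy-Cons contribute richer input than a bare touchscreen.
        return {"points": 2}

    def apply_extras(self, core):
        # Hook for rumble, docked-resolution assets, and other Switch-only features.
        pass


def run_frame(core, platform, dt=1 / 60):
    """One frame of the shared loop; only the platform layer varies per device."""
    core.update(dt, platform.read_inputs())
    platform.apply_extras(core)


if __name__ == "__main__":
    core = GameCore()
    run_frame(core, PlatformLayer())  # generic Android build
    run_frame(core, SwitchLayer())    # same core, Switch-flavored build
    print(core.score)                 # -> 3
```

Under this assumption, "porting" shrinks to rewriting the small platform layer rather than the game itself, which is the cost asymmetry described above.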

Needing to spend less on tedious porting overhead, Japanese developers — those most inclined to eschew the Xbox — have a competitive advantage in this environment, while American ones — who usually have to co-develop for Sony and Microsoft to begin with — have a competitive disadvantage. There is a very real risk embedded in the Switch that Microsoft becomes the 2000s Nintendo of the 2020s — dependent on its first- and second-party IP, as few new third-party houses are willing to expend the resources on developing for both it and its Japanese competitors.

A Path Forward for Nintendo

If the Switch is a true tablet, what does that imply for the DS? Nintendo has some twenty-five years of portable device experience embedded in its Game Boy/DS product line, long the most dominant in the market. And recall that the Switch does not seem designed to support DS-style gameplay (whereas the Wii U was an experiment to bring it to the console).

There are a lot of companies that make both phones and tablets. Apple may be the most famous, with its iPhones and iPads, but nearly every major Android smartphone maker also makes tablets. Windows tablets don’t have nearly the market reach Microsoft wanted precisely because most tablet makers develop their tablets from their phones’ core architecture — not from their towers’. (And how many makers even make towers anymore, anyway?)

Recall here that, while the Switch may be a mobile platform, it isn’t as mobile as the pocket-sized Game Boy/DS line. And if tablets are often matched with smartphones … hmm …

Phones and tablets usually have similar architecture bases. So an Android Switch isn’t just a well-positioned gaming tablet — it’s also the same basic architecture that you would need for a smaller platform. The 3DS is an aging system. Could we see a “Nintendo Phone” in the cards?

It really makes sense, if you think about it. A Nintendo Phone gives them presence in the smartphone/tablet market that computer-derivative devices are converging on. It forces Sony to essentially integrate similar functionality into its smart-devices. And it deals Microsoft another setback — the Windows Phone’s failure still stings — as it’s unable to fully migrate to the new video-game-enabled devices that Nintendo is producing.

Moreover, the Nintendo Phone provides full capability for single-screen touchscreen games. And it works as a second-screen peripheral for the Switch. With its own miniaturized Joy-Cons, the Nintendo Phone and Switch can work in concert to produce DS-like gameplay.

Two devices able to produce three (console/portable, touchscreen, DS) game types — as well as being go-to devices for your daily life. No doubt, Nintendo sees how Apple has achieved near-total vendor lock-in. How better to market your devices to similar effect when your killer apps are essentially built into your brand?

Negative Charisma

Perhaps one of the downsides of republican governments is that their politics depend on charismatic politicians. Rule in republics is by the consent of the ruled (rather than by, e.g., force, as in a dictatorship, or heredity, as in a monarchy), and every republican system — both historic and modern — has a periodic reaffirmation of that consent. This is an excessively technical and theoretical way of talking about elections.

Politicians depend on charisma to get elected and re-elected. An uncharismatic politician will never be able to convert oratory into votes. And charisma is not a learned skill: there is a distinct difference between naturally charismatic people and people who have learned to mimic naturally charismatic people. At the same time, all charismatic people — by simple dint of standing out in the crowd — will win both adorers and adversaries. In republics, having enough adorers to cancel out your adversaries, and then some, is what gets you elected.

In 2007, the Huffington Post published an opinion piece suggesting that Hillary Clinton has “negative charisma”, in the sense that she has the opposite of charisma. Its author is right: Hillary is not exactly charismatic. She runs tough elections but is consistently highly rated once in office. For her, elections are — for all intents and purposes — a tedious chore to get through before returning to the real business of government, i.e. governing. She has largely succeeded so far by mimicking naturally charismatic people more skillfully than nearly anybody else in existence. But she is not naturally charismatic.

This is not, however, the sense I have in mind when I suggest “negative charisma”. If the positive effect of charisma is an innate ability to win friends and influence people, then the negative effect of charisma is an innate ability to win enemies and influence people. That is, a negatively charismatic person is someone whose natural charisma acts to their detriment rather than to their benefit. A negatively charismatic person is inherently, deeply self-sabotaging.

Donald Trump Is Negatively Charismatic

While the Constitution outlines the bare minimum needed to qualify for the Presidency — according to Article II, a President must be a natural-born U.S. citizen, at least thirty-five years old, and a resident of the United States for at least fourteen years — in practice we also expect our Presidents to have significant political experience, the ability to fund a campaign, and the charisma needed to win. Governors and Senators most frequently win major-party nominations for this reason: they fulfill both the implicit and explicit requirements for winning the Presidency, having successfully run for — and held — statewide office.

Obviously the septuagenarian, New York-born Trump, who has held primary residency in Trump Tower’s penthouse suite for about as long as I’ve been alive, fulfills the Constitution’s explicit requirements. He does not fulfill the usual implicit ones. He has never held public office — nor had he seriously sought one before announcing his candidacy. CEO of what is ostensibly a real-estate company and a media personality, he has never demonstrated the ability to hold public office of any sort, much less the most public public office in the US. Most “candidates” like him go away quickly, and if he was — indeed — running as a publicity stunt for his brand (as most in the media seem to think), he had no reason to expect the course of his candidacy to run any differently.

Something different happened. By tapping a regressive-populist core and running against a monumentally divided field, Trump was already galloping towards the nomination by the time Ted Cruz was able to mount a counterattack. It wasn’t enough. And so the Republican establishment, the whole infrastructure built around the declining Reagan coalition, had to grit its teeth and nominate someone who had — remember, with zero experience — developed an Appalachian coalition with extensions into the Old South’s unreconstructed whites and the North’s undereducated ex-workforce. Of these, only one voting bloc was even Republican when Reagan was President.

This is evidence of powerful natural charisma. But for the negatively charismatic, the self-sabotage kicks in long before the ultimate goal is reached. And it’s inextricably linked to their personality. See, charisma requires treating other people as people to work. Outside of other white males, Trump can’t do that. He has repeatedly demonstrated a failure to connect with people emotionally — a recent New York Times opinion piece suggests he has “narcissistic alexithymia” (not an easy-to-spell word!), an “inability to understand or describe the emotions in the self”. And so Trump treats people who do not look like him like, well, objects.

Consider the way he keeps referring to African-Americans as “the blacks”. Not just “blacks”. The blacks. Consider what he is saying, at a deep level. The English definite article is a subtle demonstrative — it points out. It selects an object, or class of objects. Not “some blacks”. “The blacks.” In doing so, Trump is quite literally distancing himself from black people. He is saying, implicitly, that he does not, at a fundamental level, consider black people, well, people — English actually has (at least) two noun classes, and the class that refers to other people behaves quite differently from the one that refers to (inanimate?) objects like, say, rocks. Trump refers to African-Americans more like rocks than people, and in so doing casts a noun-class distinction we never realized was there into stark relief.

At least he refers to women as people! It’s too bad his interest in them begins and ends with their appearance and genitalia. In Trump’s own little world, we can see a clear class progression: white males at the top of the hierarchy, white females naturally inferior but useful for *cough* certain tasks *cough*, and nonwhites — who might as well not even be human. This is fertile ground for rapidly building a populist coalition, one that may well only hold together as long as he’s leading it, but it flies in the face of the reality that is American demographics.

This is how charisma turns toxic. Real estate development was — and, in many ways, still is — a bit of an old boys’ club. Even a personality-driven show like The Apprentice can — and quite obviously did — mask elements of media personalities that would harm ratings. There is a reason why Trump is the world’s oldest adolescent. His dad was rich enough and he was just good enough a businessman to indulge in puerile power fantasies long past their natural sell-by date. His ephebophilia actually means his women, such as they are, are the ones with “sell-by dates”. Trump has never, in his life, ever needed to learn how to interact with other people as people and not mere tools.

Hillary is uncharismatic because she doesn’t intuitively know how to interact with other people as people. She knows this is important and works hard to overcome this weakness. But Trump has negative charisma because he does intuitively know how to interact with other people as people — what he does not see, or understand, is why it’s important. And it’s biting him in the ass.

EDIT 10/25: I wrote this post just before Trump’s sex-assault allegations went public.

Lessons from Philadelphia Media

Philadelphia is shockingly barren of hard-hitting investigative journalism. The dominant newspaper, the Inquirer (locally, the “Inky”), prefers to sit back, generally focusing its limited investigative resources on police issues. This is useful in its own way — because local media have a long history of holding the Philadelphia Police Department’s feet to the fire, police brutality issues here seem not to be as severe as those in e.g. Baltimore or St. Louis — but at the same time it has cast deep shadows in which political corruption can hide. Meanwhile, attempts at creating an alternative to the Inky (often with an investigative focus on political corruption) have not met with sustained success.

Perhaps the longest-lasting, the alternative weekly City Paper, was sold to the much less interesting, but more profitable, alt-weekly rag Philly Weekly a few years back and excised from existence. City Paper had been — by far — the best source for local political news, and its writing pool easily boasted the best journalists in the city. After it went under, attempts at online platforms intensified. Patrick Kerkstra led the charge at Philadelphia magazine, developing a suite of daily blogs that mimicked newspaper sections — the front page, sports, real estate — and poaching the city’s best reporting talent (mostly from the recently-defunct City Paper) to run them. Meanwhile, PlanPhilly’s erstwhile editor, Matt Golas, got local PBS affiliate WHYY to pick it up, and began reorganizing both it and WHYY’s Northwest Philly-focused outlet, Newsworks, into a journalism platform to rival the Inky’s.

Despite City Paper’s untimely departure, the future of Philly investigative journalism — at least online — looked fairly bright in mid-2015.

Then — just as his efforts at WHYY were bearing fruit — Golas was forced out in late 2015. Kerkstra would follow a year later, as Philly mag’s showrunners decided to go in a different direction, favoring advertiser-pleasing copy over high-readership stories. That fallout has only just begun. And Philadelphia is left bereft of a high-quality investigative-journalism outlet — again.

Despite generations of reporters trying to change it, Philadelphia’s status quo has never favored investigative journalism. The “corrupt and content” city’s dominant paper, for more than a century, was the Philadelphia Evening Bulletin (often shortened to just the “Bulletin”). As its name implies, it never seems to have had much interest in investigative journalism, favoring instead a role as the dominant party machine’s mouthpiece. The Inky was merely a distant #2.

This all changed in the 1970s, when Knight Newspapers bought the Inky and invested heavily in it, modernizing its facilities and bringing in some of the country’s best investigative journalists. This new, more muckraking Inky quickly began to win Pulitzers — and readers. By the early 1980s, it had forced the staid Bulletin out of business entirely and become the Philadelphia region’s paper of record. Knight had believed in investigative news, and since the Inky’s editorial board was one of the last it had overhauled before merging with Ridder, it was one of the last the new combined company would start tinkering with. Thus the Inky carried on the Knight legacy through the 1980s — a period when it was arguably one of the country’s best papers.

By the early 1990s, however, the replacement of Knight editors with Knight Ridder ones had begun in earnest, and the paper’s quality had begun to suffer. Much like the Bulletin before it, the Inky stopped prioritizing muckraking. Investigative reporters moved on, into the alt-weekly scene or to friendlier paper-of-record locales. Readership and profitability began to suffer — unlike the Bulletin, the Inky did not have an enduring paper-of-record legacy, having been the city’s dominant paper for only a decade. Spearheaded by the powers-that-be at the very top, the Inky turned away from the brand it had successfully built over the previous twenty years, and contented corruption returned to the very top of the local media.

So, by the early 2000s, the paper was treading water when the bottom fell out of its revenue stream. Most people attribute the fall of American newspapers to the rise of the Internet. This is only half-true: it was the rise of Craigslist, in particular, that led to the collapse of the newspaper revenue model — which depended on classified advertising. Easily half, if not more, of that revenue was lost — irrevocably — in every market Craigslist established a beachhead in — and it established a beachhead in every market. Quickly. The Inky’s parent, Knight Ridder, began losing money, shedding staff, and was forced to pivot its revenue model towards retail advertising (the circulars and other junk in the middle, as well as on-page ads) even as competition diversified.

Knight Ridder was bought by McClatchy in 2006, and the new owners spun off the papers in their portfolio that were either (a) weaker or (b) a poor fit for the direction the corporate parent wished to take. The Inky was one of those. Under the ownership of Philadelphia Media Holdings, its quality continued to worsen, sapping subscribers and readership revenue in a penny-wise, pound-foolish attempt to trim its way to profitability. Finally, Gerry Lenfest, who had made his fortune selling his cable company to Comcast, stepped in and assumed control of the bankrupt paper, worried, perhaps, that it would go the way of the Times-Picayune and cease to be a daily affair.

It would be nice if the Inky became a bastion of investigative reporting again, but in all probability it won’t. Nor are newspapers the only dominant media voices in the Philadelphia region that tend to avoid investigation. Action News, the dominant local news program, also follows Bulletin-esque editorial guidelines. Ironically enough, the best source for investigative local news is Fox 29, a stance so flagrantly opposed to its national showrunners’ that almost every Fox 29-Fox News interaction rapidly becomes painfully awkward to watch.

But there is a strange lesson to be had here. Doubtless, Gilded Age politicians and robber barons disliked muckrakers nosing around. The idea of a corrupt and content city with enabling media must have been intoxicating to these people. As TV replaced papers as the source of most people’s news, the trend of showrunners replicating the ideas implicit in the Bulletin’s editorial guidelines — “the newspaper is the guest in the reader’s house; tell the news, nothing more, nothing less” — began to intensify in the more legitimate circuits. (It gave way to propaganda on Fox News; even liberally-focused MSNBC has yet to go so far down that route.) Corruption rages in the shade, and without muckraking, shadows grow deep.

So how do we monetize muckraking?

Decline and Fall

This past election season has felt truly surreal. Political commentators both left and right understood as early as late 2012 that the Democrats would be vulnerable in 2016, and that a good Republican candidate, one who could maintain the party’s core demographics while siphoning off some black and Latino votes (someone bland and vaguely Hispanic like Marco Rubio), had a nonzero shot at tipping the scales — especially if the Democrats nominated Hillary Clinton.

The Democrats nominated Hillary Clinton.

So what did the Republicans do? They nominated someone who, most observers agree, is the single worst candidate ever fielded by a major party in the United States of America. For a moment in July, Donald Trump seemed terrifyingly electable, but that lasted about three days into the DNC. And then he went after Gold Star father Khizr Khan.

Ever since then, his campaign has been in a state of utter collapse. Trump, quite literally a textbook narcissist, has seen to it that he utterly dominates the news cycle. This is quite unfortunate for Republicans, because this dominance is rooted in petty attacks, like the one against Mr. Khan, with a heaping spoonful of scandal, like murky Russian ties, and controversy, like attempts to assay Trump’s true net worth in the increasingly conspicuous absence of his tax returns — all of it leading pundits to call him a fascist while the Republicans’ moderate class runs from him. In droves.

Against all odds, Hillary Clinton, a candidate who against a normal opponent should receive 50% ±1% of the popular vote, has opened up a commanding 8-point lead on Trump. Purely by staying away from the media. Against a campaigner as self-evidently incompetent as Trump, Clinton has an excellent chance — currently 26.8%* according to FiveThirtyEight — of winning by a landslide, a kind of victory Americans haven’t seen since the 1980s and one many pundits did not even think possible in the modern, hyperpartisan political climate.

But if you think this is the Republicans’ bottom — hah! They haven’t even found their bottom yet!

The Green Screen

American politics have been cyclic, coinciding remarkably well with the Kondratieff cycle. The main political parties — the Democrats and Republicans — tend to assemble into coalitions during the primary and midterm phases, while the general election decides which coalition governs and which one opposes. These, in turn, tend to be focused around driving narratives — ideologies that animate coalitions for generations at a time.

The largest governing majorities — supermajorities in any sense of the word — were the Republican governing coalition of the Progressive Era and the Democratic New Deal governing coalition that followed. The post-Teddy Roosevelt Republicans were themselves a policy iteration on a Republican coalition that had largely stayed in power since 1865, mainly due to the era’s North-South politics, while the New Deal coalition continued to follow Progressive politics until the Civil Rights Act and the Southern Strategy fractured it.

It is also noteworthy that major governing coalitions become focused around uniquely charismatic Presidents. One could therefore say that American politics are divided into the Jefferson period, which defined Jefferson’s Democratic-Republicans and their opponents (initially Federalists and then Whigs); the post-Lincoln period, defined by the loose ends Lincoln had left; the first and second Roosevelt periods, when the progressives led the governing coalitions; and the Reagan period, which actually started when Nixon won the Presidency and may or may not have ended in the mid-2000s.

But charisma is a two-edged sword, and Trump is certainly charismatic. Like Teddy Roosevelt, Trump is giving voice to a marginal faction; unlike Roosevelt, who was essentially kicked upstairs into the vice-presidency, thereby allowing him to be in the right place at the right time to implement his agenda, Trump is trying to win the Presidency rather than inherit it.

Trump is far better at inheriting things than winning them.

Because the core of his support is the populist right (aka the alt-right, aka Neo-Nazis, aka proto-fascists), and because — unlike any of his interchangeable dozen-or-so opponents — he actually got his base fired up, Trump is hugely popular among a group of approximately the same relative size as UKIP’s (ex-?)base in Britain. But because he espouses this particular ideology to the exclusion of all others, he was electable in the primaries (in the sense Nixon was electable in ’68) but is wholesale unelectable in the general: his ideology is simply too profoundly foreign to everyone to his left.

Trump needed a good handler to become remotely electable in the general, but his narcissism demands sycophants. Manafort couldn’t handle him, and at this point his primary advisors are mediamen Steve Bannon (formerly of the execrable Breitbart News) and … Roger Ailes. The rest of his inner circle reads like a who’s-who of Republican washouts, and the party’s big-name operatives aren’t interested in his campaign.

Whither Now?

When Bannon replaced Manafort, the Washington Post asked whether it was because (1) Trump was a fool, or (2) he was making a post-election play. Greg Sargent, the writer, thinks the answer is (1) — and perhaps to Trump and Bannon, it is — but Roger Ailes (now formerly of Fox News due to a harassment scandal, remember) is much savvier and much more opportunistic.

I would not be remotely surprised if Ailes was just the first one (or at least the first one in a position to act on it) to read the tea leaves: If Trump is only successful in attracting the regressive-populist alt-right, and literally toxic to anybody else, then simply by sticking to his message he can attract a following of (monetizable) zealous converts. The seed direct-mailing list is there, and Trump generates a not-insignificant amount of publicity — indeed, his own self-promotion is what is killing him this election — putting many of the ingredients in place. Lure in some known Trumpian TV and radio personalities, like Pat Buchanan and Sean Hannity, and — voilà!

But at this point the pattern starts to become clear. This “Trump News Network”, run by Bannon and Ailes, legitimizes the alt-right, in the process continuing to drive away social conservatives, libertarians, and the tattered last remnants of the Northeastern Republicans. The alt-right are American nationalists, but that in itself poses a problem: nationalism is tied to ethnicity, while American nationhood … isn’t. It is precisely because most Americans** agree, at some level, that openness to diversity is a fundamental defining feature of being American — an idea no nationalist anywhere would ever be caught dead espousing — that Trump’s politics and agenda are so fundamentally foreign to Democrats and non-Trumpian Republicans alike.

A permanent Trump coalition effectively precludes the Republicans from retaking the White House in 2020, possibly ever. And Trump himself would continue to help the internal strife along. One side or the other*** will decide they’ve had enough and form their own third party, and that will be the end of the postwar Republican Party, the party of the Reagan governing coalition.

A New Start

It can’t happen soon enough! The Reagan coalition is dying. Literally. It has failed miserably at attracting young voters, or at winning black or Latino votes in an increasingly diverse American society, and its core voter is essentially an Angry White Pensioner. The 2012 Republican autopsy said as much. And Trump’s rise — and that of the alt-right in general — goes backwards rather than forwards, firing up the core at the expense of alienating literally everyone else. Clearly, the Republicans — or their successors — will need a new base and a new charismatic politician to build a platform around.

It will take a while. Ike was a charismatic politician, but he didn’t do anything to rebuild the base; after losing in 1932, the Republicans did not find the charismatic leader of their modern governing majority for another 48 years — 12 elections!

But the Republicans, once they’re severed from the toxic Trumpist wing, might be able to actually start attracting new voters. As a friend of mine puts it, the Reagan coalition is failing because it was made up of voters “on the wrong side of the animating question of postwar American history”, and the sooner it realizes this, the better. Because as long as they’re in denial (and the Trumpists are clearly in denial, to the tune of a functionally nonexistent minority vote) …

… they will never be able to admit that yes, their last generation of governance was based around a coalition on the losing side of what is now a half-century-old issue, and that the so-called Party of Ideas needs some damned new ideas damned fast if it hopes to remain relevant at all.

But they can’t do that until they finally succeed in cutting out the cancer at their core, which itself won’t happen until their activist base stops misidentifying what their party’s cancer actually is (hint: look in the mirror). Fortunately for all involved, Donald Trump has made it both obvious and damned easy. Republican leadership needs to take this chance, and recognize that it’s okay to lose the next three cycles or so if the party (or what remains of it) comes out stronger in the end.


* This number is derived by taking the average of the three forecast models’ chances of a Clinton landslide. Amazingly, the polls-only model, not the nowcast, shows Clinton furthest in the lead.

** I.e. Americans who aren’t Trump supporters.

*** Most likely, either (a) because the Republicans get their shit together and find a candidate who can ensure the Trumpian nominee (probably Donald J. Trump) doesn’t get nominated in 2020, leading to The Donald making his own run and fragmenting the remnants of the Republican base, or (b) because the other Republicans finally have had enough and defect en masse … possibly to the Libertarians?