The Fallout

In an utterly shocking* turn of events that no one could have possibly predicted,* the least competent man to ever hold the office of President has managed to fail completely at legislating, attempted to transform an executive order into a king’s writ, lashed out at the judiciary when that failed, and sunk into a quagmire where there is now not insubstantial (albeit still circumstantial) evidence that the President of the United States is, in fact, a traitor. All in less than a month.

This is not normal. Not only is this not normal, this is an entirely new degree of paralysis, as every day brings some new nightmare to deal with from DC. Whether or not you agree with the Republicans’ legislative agenda, legislation is not happening because everyone — Democrat and Republican alike — is scrambling from crisis to crisis, from scandal to scandal.

It is difficult to see how Trump can last four years. His relationship with the press is adversarial, proof that Trump never learned not to piss off whoever wields the pen. The intelligence community is already in revolt against him; Democrats are poised to make huge House gains in 2018, likely on a platform of impeachment; Congressional Republicans are, at this point, barely holding back the floodwaters, with their foul swamp miasma of corruption, from engulfing them all. All in less than a month.

The question now is what will happen.

The Republican Dilemma

Right now, House Republicans are in a bit of a pickle. Trump has lost so much credibility so fast that there are already voices suggesting impeachment. But he still has a small cadre of loyal supporters, and this cadre usually controls who wins House district primaries.

This is untenable. By not acting, House Republicans — especially in the more suburban districts — risk losing the general to a groundswell of anti-Trump support. But to act would mean alienating the Trumpet base, who would swiftly and mercilessly primary them. Their calculation, perhaps, is that the best window for removal is the roughly six months between the 2018 primaries and the general election; the hope would be that, having secured their primary nominations, they can then defang their opponents. It would be shrewd.

It would also involve waiting through roughly 500 more days (and may I remind you, we’ve gone from “inauguration” to “probably a traitor” in 20-some-odd days) of utter insanity before it happened. And in any event, leaders such as Ryan and Chaffetz seem to have decided the best path forward is party before country, letting the White House’s ethics quagmire fester.

It’s hard to see a path forward for House Republicans. Their gerrymander is strong — they may be trusting in it — but public lividness at Trump’s unpresidential shenanigans is also strong, stronger possibly than in Katrina’s aftermath, when Democrats took control of Congress and lame-ducked W.

The Senate is different. There aren’t many Republican seats up in the Senate in 2018 (but plenty of Democratic ones), meaning that by the time most Republican Senators have to campaign again, the Trump scar will be a distant memory, already receding into the domain of history books and language, where “Donald Trump” will likely replace “Benedict Arnold” as a byword for cold treachery.

The dichotomy can be seen in real time. The Senate has already moved to begin investigating Trump’s Russian connections (although they have not yet appointed a special prosecutor), while Chaffetz is moving investigations into anything associated with the White House forward at the slowest pace he reckons he can get away with. Chaffetz, I may add, is a Utah Republican with a very high risk of getting primaried by someone who’s more willing to impeach.

Sooner or later, something’s got to give. Trump will have a short Presidency and leave, at minimum, in disgrace. The questions are: how short? What deal will ensure his removal? And what will happen after?

The Trumpian Constitutional Crisis

While the man is a walking constitutional crisis, pretty much constantly in violation of the emoluments clause, his Russian entanglements notwithstanding, perhaps the biggest Constitutional crisis of all will happen once he leaves. That crisis is: how do we prevent this from ever happening again?

Whether it’s Pence or Ryan or Pelosi (heaven help us if he lasts that long) who replaces him, this will be the very first item on the 46th President’s legislative agenda. The new Secretary of State will, of course, be tasked with fixing the international damage the Trump administration caused, but on the domestic side, nothing comes before locking madmen out of the White House forevermore.

Make no mistake, this will be a Constitutional crisis. Among other things, we can already see that:

  • Trump should never have gotten to a position where he could be nominated as President;
  • Impeachment may not be a strong enough tool for dealing with executive treason; and
  • Secondary methods of Presidential removal may also need to exist.

A New Amendment

Dealing with the first is pretty clear, and should be bipartisan. It’s the territory of a Constitutional amendment, and one that can be worded with one or two unambiguous sentences. Something like

Amendment 28. The President of the United States must have held at least one elected office, at the state or federal level, prior to running for President.

An Amendment so simply worded, in the immediate wake of such an unambiguous disaster as Trump, should pass the 2/3rds majorities and reach the 3/4 ratification mark within a single legislative session. This would be, after all, little more than actual politicians ensuring that an actual politician gets the highest political office in the land.

Dealing with an Incompetent President, or a Vegetable One

The second and third issues address opposite problems: malice and incapacity. The second is meant to ensure that a made double agent in elected office (including the Presidency) can be removed in a nonpartisan way, with a minimum of fuss; the third, a form of “no confidence” removal if the President becomes demonstrably unfit for office, also through nonpartisan processes.

I recently saw a proposal for dealing with the third that I quite like: impaneling living former Presidents [who have, in light of Trump, served full terms]** to determine if the current President is fit to serve, should a crisis of personal faculties arise. This should be an inherently bipartisan body, meaning that any decision they agree to should be above partisan politics. Because the body will always be small, and the decision being made is of grave import to the nation, I would also add that the decision to remove must be unanimous.

Of course, this also leaves open the question of who can call such a panel to convene. I would personally give this tool to the states: a simple majority of state governors voting to convene the panel acts as a vote of no confidence, triggering an investigation into whether removal is warranted. (And of course, the unanimity requirement functions as a check, such that state governors can’t abuse the tool to simply get rid of a President of the opposite party.)

Handling Elected Traitors

This leaves the last issue to resolve: treason. Clearly, leaving Presidential treason, like other high crimes and misdemeanors, to an inherently political process such as impeachment is not enough. Treason is not like perjury or even conspiracy. Both Richard Nixon and Bill Clinton always had the nation’s best interests at heart, even if their methods were suspect.

By contrast, treason is a betrayal of public trust, using the trust so granted to advance some other sovereign state’s best interest. It isn’t just putting your own best interests over your nation’s, as Trump does every time he flagrantly violates the emoluments clause; it’s using entrusted information to advance somebody else’s agenda (as Flynn did with Russia).

An elected official committing treason, then, is not just betraying his party; he’s betraying his position and his country. Impeachment is inherently a political tool, and the latter two betrayals transcend politics altogether and need to be handled as such. The judiciary, then, must be the one handed the tool of removal over treason.

Being accused of treason would be no different than needing to stand trial for murder or fraud. But there are two added wrinkles. (1) An elected official standing trial for treason would do so before the highest court matching his jurisdiction: a state governor stands trial before his state’s Supreme Court; a Congressman or the President stands trial before the Supreme Court of the United States. (2) A treason conviction entails not only criminal punishment for the elected official, but also the removal of his entire staff, with the rules of succession adjusted accordingly. That is, the Speaker of the House, not the Vice President, automatically becomes President if the previous President is removed via treason conviction.

(The idea here is that if treason occurred at the very top, then the whole staff is implicated in aiding and abetting it. Trying to figure out who knew what would take far too long and would almost certainly delay the VP’s confirmation, leading to a vacant Presidency until the whole legal nightmare gets sorted out.)

Other Procedural Issues

These three issues look to be the biggies coming out of the Trump debacle. The President needs to be qualified to hold the office in some way, and there need to be more ways to remove a sitting President should the worst come to pass. My take on the latter two is that the states can be entrusted with a secondary, broad-ranging removal process, and that the judiciary needs to be entrusted with a secondary, narrowly-focused removal process, one that is triggered by one crime and one crime only because that crime is too severe to let people play political football with.

These are, of course, not the only procedural issues people have pointed out. The Electoral College has clearly been subverted in terms of purpose. Gerrymandering has institutionalized minority rule in the House. A certain successful former state governor can’t run for President because he wasn’t born in the US. The two-party system as a whole is failing to provide meaningful political discourse and coalition-building between the whole panoply of ideologies, left to right.

While the Trump administration’s fallout will most certainly precipitate at least one Constitutional amendment and a broader Constitutional crisis, I’m not holding my breath on how much of the rest will be addressed. Part of the robustness of our system is that there are many avenues to effecting lasting change, such that if one is gummed up or refusing to do its job, there is another. Both gerrymandering and the Electoral College can be resolved through processes other than Constitutional amendment.

Unfortunately, it does not seem Mr. Schwarzenegger will ever get his (richly deserved) chance to run for President. Nor does it appear we will see a third party rise in our lifetimes, short of one party or the other collapsing. Maybe some of the dreams we dare to dream lie too deep to reach.


* Sarcasm.

** The bracketed qualification is my addition.

Triple-Deckers’ Murky Origins

The Boston triple-decker is perhaps the most New England housing type of them all. A simple wooden flat construction, the triple-decker provides comfortable and reasonably private housing accommodation for three families on two lots. While others, such as Old Urbanist’s Charlie Gardner, have pointed out some of the triple-deckers’ limitations, they are inarguably the solution Victorian Boston either wanted or needed.

Yet they are also incredibly murky. They appear as if from whole cloth, with no clear origin, in a region whose previous architectural vernacular was vastly different. They are different enough from the only other wooden buildings in New England — farmhouses and Maritime rowhomes — that they clearly spring from an entirely different tradition. In terms of time and place, triple-deckers are, for all intents and purposes, naturalized immigrants.

Where did they come from?

In a previous post, I explored wooden residential vernaculars in the United States, itself a strangely murky topic, and came to the conclusion they developed in the Mohawk Valley, from there migrated into the Lower Lakes region, and then were disseminated nationwide through the development of ideas such as mail-order and tract housing. I also suggested that the New England triple-decker was a branch of this tradition. I want now to explore why I came to this conclusion.

A City of Brick and Wood

Boston is a bit schizo, in terms of residential architecture. Where Mid-Atlantic cities have a tight, well-defined brick rowhome vernacular, and New York has its blocky vernacular that can be purposed to rowhomes or apartments, Boston has two competing — almost clashing — vernaculars: a brick rowhome, clearly developed from the British style, and the wooden triple-deckers, as different from them as green-skin space babes.

Look closer and we can see some patterns that tell us how and why this may have come to be.

Boston’s oldest intact neighborhoods, Beacon Hill and the North End, feature charming British rowhomes that would not look out of place in the oldest parts of Mid-Atlantic cities — or British burgs like Bath or Bristol. However, like other such core neighborhoods, these would have begun to fall in esteem in the mid 19th century.

Part of what egged that on would have doubtless been the construction of Back Bay and the South End, fens surrounding the Boston peninsula’s neck that were drained, filled in, and turned into stately brick rowhomes, real estate projects that, for all intents and purposes, tripled the city’s size. These parts of town quickly became the wealthy’s preferred neighborhoods, a distinction they wear to this day.

The triple-decker, by contrast, does not encroach closer to the city center than the South Side, on the other side of a rail approach. Triple-deckers have no clear relationship with the stately brick vernacular Boston’s elite favored; they are interspersed with some of the cramped Maritime wooden rowhomes that Boston’s period suburbs (e.g. Cambridge) favored, which only serves to highlight how utterly unlike them the three-flats are; they even give way to masonry where people with means wanted their own Brahmin-esque rowhomes. All of this is to say: the triple-decker is a housing solution that was quickly and widely adopted, seemingly out of nowhere, and even at the time of its adoption it was clearly meant to cater to the working class.

It makes a lot of sense that workingmen might favor triple-deckers, particularly in a society where homeownership wasn’t as important as it would become in the 20th century. Maritime rowhomes are not unlike Philadelphia trinities or Manayunk rowhomes — small and cramped on the inside. By contrast, a triple-decker’s flat, even though it would have had roughly the same net amount of space, would have felt open and airy, more spacious and gracious. Boston’s builders could cram one more family into the same space that two Maritime rowhomes would have taken up, while at the same time upcharging workers for the privilege. It would have felt like an all-around win-win.

But this only tells us why the triple-decker would be rapidly adopted in the Victorian era. It tells us nothing about where it came from. Indeed, we can see from this analysis that the reason three-flats were so popular was that they were such a radical departure from the region’s pre-existing rowhome vernaculars … something that only further highlights the style’s immigrant nature.

So Who Else Did Flats?

Flats weren’t popular in colonial British cities. We can see this by looking at the three great groups of colonial architectural vernaculars — New England, the Mid-Atlantic, and (what remains of) Tidewater. In each of these places, for all the differences among the organic New England street systems, the tidy Baroque Tidewater parade-ways, and the endlessly utilitarian Mid-Atlantic grids, the same type of subdivision plan dominates throughout: narrow, deep lots, and houses optimized to fit them. Workingmen usually lived in houses that were single rooms stacked on top of each other, undoubtedly cramped and uncomfortable in an era of large families. As fire concerns swept the continent, major cities increasingly required brick, resulting in the antebellum living arrangements so well preserved in Philadelphia and Boston.

There were two major colonies that did do flats, however. One was New France’s core, up by the St. Lawrence, which would later become part of Canada; we can see widespread use of brick flats throughout Montréal in a form that, at street level, looks and feels like rowhomes. The other was the Low Countries’ successful colony around the Hudson River fjord and the terminal-moraine outcrop sprawling into the sea, one of the main conduits for furs from the expanding Iroquoian empire to Europe. England would later acquire this colony and rename it, but its Dutch heritage remained strong.

One way that heritage showed was in the use of potash in cooking, from which modern quick breads and cookies developed. Another was its flat-tolerant vernacular.

The walk-up flat is simultaneously a new and ancient building type. Large apartment buildings were known in Rome, for example, but largely fell out of favor during the medieval era. Indeed, rowhomes are built on a medieval model of housing: a tiny plot of land, where the family shop would be located on the ground floor and their living quarters above. In larger and denser European cities, owners would build extra working space and rent it out; eventually, owners catering to wealthier renters would dispense with the workspace and simply provide a structure subdivided into distinct living spaces.

Modern flats as such probably originated sometime during the 17th or 18th centuries on the continent: the very different way that Europe would approach flat architecture compared to North America suggests that the technology was still in its infancy when Britain came into ownership of North America’s other Continental colonies. But flats were also latecomers to Britain, and (this is important) had already spread to New France and New Netherland before the British took them over. This explains why the native British colonies, and New Sweden, did not have flats, but Québec and New York do.

Midwestern Interlude

“Residential Vernaculars” mainly explored different modes of urbanization associated with different (Northern) crossings of the Alleghenies. Pittsburgh, Cincinnati, and St. Louis are clearly rowhome cities; Buffalo, Cleveland, and Chicago just as clearly … aren’t.

In fact, Chicago is also interesting for our discussion here, as it is home to three-flats. We have a fair grasp of their derivation in the area — the Great Chicago Fire would have resulted in masonry requirements for larger residential structures, and the three-flat appears to have already been a common multifamily variant of the balloon style common in the Northern Lakes.

New England triple-deckers and Chicago three-flats have a lot in common, actually. Both are fully-detached walk-up triplexes — a solution not found in European flats … or Montréalais plexes … or New York apartments … or, for that matter, anywhere else outside the US. The only thing we can say for sure about the triple-decker’s origin is that it was clearly not in New England. But the three-flat is closely tied to the Lower Lakes vernacular as a masonry variant of the region’s balloon-frame multifamily type — and not the only one, at that — so if we can figure out where the Lower Lakes vernacular developed, it may well be that triple-deckers have the same place of origin.

Canal Cities

My thesis is that we can — and that we can see where.

It is the early 19th century, and New York is falling behind at opening up its frontier. Philadelphia is linked with much of Appalachia and the Ohio Valley by road by this time; New York has, until recently, been blocked from doing the same by Iroquoian strength in the Mohawk Valley. (It’s worth pointing out here that the Pittsburgh region was part of the British frontier even prior to the Seven Years’ War; the same was not true of the Buffalo or Cleveland areas.) With the diminution of Iroquoian power, the Mohawk Valley was opened to development, and a water connection between the Hudson and Lake Erie was completely feasible in a way that one between the Potomac or Susquehanna and Ohio was not.

This led to the construction of the Erie Canal, linking New York with the freshwater sea, a geographical advantage that Philadelphia and Baltimore would be hard-pressed to counter. When the Erie Canal began construction, most of the populations of Ohio, Indiana, and Illinois lay around the Ohio Valley. (We can see evidence of this by noting that Michigan was admitted to the Union some twenty years after even Illinois, suggesting it remained sparsely populated for some time relative to its neighbors.) But with the easy link between New York and the Great Lakes, it could make up for lost time through superior transportation — and even potentially edge out Pennsylvanian influence in what was, at the time, the western frontier.

So the economic forces at work in the Mohawk Valley were clearly New York’s. Montréal has a better path into the Great Lakes and would have had its own Canadian issues to deal with. We also have a traceable path for the Lower Lakes vernacular back towards the Mohawk Valley area, just as there is a traceable path for early Ohio Valley architecture all the way back to Philadelphia. We can, however, note with some consternation that this path only goes back to the Mohawk Valley, with known social — but no physical — connections to New York.

Or are there? One of the major features of the New Yorker brownstone is that it has single-family and multi-family configurations, whereas the multi-family configuration was only later introduced to the Mid-Atlantic rowhome and was probably alien to Boston rowhomes until the 20th century, long after triple-deckers’ rise. What do we see with the Lower Lakes’ balloon frames? Single- and multi-family configurations. In fact, these two configurations exist side-by-side in upstate New York’s canalside cities: Rome, Utica, Syracuse, Rochester. It would appear, then, that builders in the Erie Canal area had a general sense of house-ness that came from New York City.

This gives rise to the next question. Why detached structures? After all, no major urban vernacular in the ca. 1820 United States used detached structures. And using detached structures in what was then, as now, the snowiest part of the country doesn’t really make sense when one generally seeks to share warmth in wintertime.

Because the Lower Lakes vernacular is unrelated to any colonial vernacular at first glance, and reveals its deeper relationship to the New York vernacular only on closer examination, the answer is surely something that must have been in the air upstate in the 1820s. One possibility is that the houses were patterned after Iroquois longhouses; another, local farmhouses. However, the snowiness (upstate New York is among the world’s snowiest places) — and the fact that the balloon-frame vernacular’s earliest known realization was the gablefront house — points to another possibility: the gables kept snow from piling up on rooftops, and builders were forced to add side yards to give that snow somewhere to collect. In this way, the mechanics of snow solve the mystery surrounding the detached balloon-frame’s rise.

When we explore upstate’s older canalside cities, we can now read them like a book. Wood was a preferred building material because the more skilled craftsmen, those with masonry experience, were working on the canal. Detaching the houses and adding gables were needed to deal with the copious snow Lake Erie sends into the region every winter. And single-family houses and small flats were freely intermixed in the way the builders knew back home.

Later, even as the skilled masons were freed up to work on other projects, the habituation to wooden dwellings — much cheaper and faster to build than masonry ones — led to their explosive growth across the regions newly accessible from the Erie Canal: Buffalo, Erie, Cleveland, Toledo, Detroit, Chicago, Milwaukee, and eventually across the Plains and into the West, where they easily outcompeted the older, more conservative Ohio Valley vernacular. Masons, needing a value-added proposition, turned to ever-more-opulent commercial and public architecture, and masonry residential construction was reinstated in Midwestern cities only after fire ravaged their first phases (fires which had, by then, become rarer thanks to better public services). And — for the purposes of this discussion — one also notes that the Mohawk Valley, where this vernacular first arose, is, conveniently, just west of New England.

The House Also Migrates

Rapid industrialization was problematic for British and British-derived vernaculars. British vernaculars show incredible aversion to multifamily housing, resulting in patently awful working-class solutions like the back-to-back; trinities and Maritime rowhomes of the era were not much better. Because New York was much less averse to the flat, it was able to provide a roomier alternative (at least, until New York tenements, too, became overcrowded).

For western New England, however, the combination of improved living conditions and cheap construction that the balloon-frame Mohawk Valley flats offered was a much better solution to working-class housing than anything else in the area. These triplexes, increasingly dissociated from their single-family gablefront cousins, saw their roofs flattened (New England winters are much less severe in the snow department) and came to be built in neighborhoods consisting almost entirely of them.

Springfield, Massachusetts, is western New England’s largest city; its vernacular (or what remains of it) is also largely a gable-for-gable duplicate of that found across the Berkshires; the same is also true of Pittsfield, Holyoke, and even Worcester. Indeed, one could be forgiven for wondering whether Boston ever developed secondary cities the way Philadelphia did!

Conclusion

So by the mid-19th century, the style of housing first devised in the Mohawk Valley had expanded in just about every direction, including into New England and practically right up to Boston’s doorstep. The last piece of the puzzle now falls into place: Boston’s builders of the generation immediately after Back Bay’s developers simply took the multifamily style they saw in nearby Worcester and built it back in Boston. From a hearth several hundred miles away, from New Yorker ideas executed in wood and optimized for snow, Boston builders picked a low-hanging fruit that they integrated — in the same schizoid way that Lower Lakers integrated every classical style under the sun into their commercial architecture — into their own rowhome vernacular, a vernacular that their own city region’s inland cities had been loath to develop.

The triple-decker is indeed an immigrant in New England. It is especially so in Boston. Its origins lie in an altogether different vernacular tradition, and its adoption by Bostonians to the point that they have made it their own reminds us all that, while the United States has many architectural vernaculars, willingness to solve a practical problem with solutions from a different idea set trumps local loyalty in the vast majority of the country.

But it also cautions us against running with new solutions at the expense of our own traditions. Boston builders didn’t just wholeheartedly adopt the triple-decker; by the turn of the century, it — and the rest of the Lower Lakes residential package — had utterly displaced almost all know-how for developing the antebellum Boston vernacular, that same vernacular whose last hurrah was in Back Bay and the South End.

Football and the NFL

A Beautiful Game

The game of (American) football may be one of the most inscrutable popular pastimes ever devised. Unlike games such as baseball or cricket, which test athletes’ finesse and timing, or basketball and (association) football, which are mainly contests of stamina, American football is subject to chaos like few other sports. In some ways, it’s the purest real-world realization of the concept behind J.K. Rowling’s wizard’s chess.

For football teams playing at a high level, each play is a match of wits between the offensive and defensive coordinators. Both rely on schemes designed to create and take advantage of mismatches, and for both — this is important — the scheme has to be developed around the available talent. (This is of course true in any sport, but even more so in a sport as complex as football, where, say, a single blown blocking assignment results in a sack.)

This is not to say the sport is easy on its players. In fact, part of its draw is its strange combination of finesse and brutality, of beautifully executed plays like deep throws contrasted with setbacks like sacks. It is, in essence, life in 60 minutes on a field.

And a huge part of that is the need to cooperate in football. In most goal sports — like basketball or association football or hockey — giving the ball (or puck) to the most athletically gifted talent on your team is usually a good way to win games. The Lakers were the most dominant team of the early 2000s because they had Kobe Bryant. (Traitor.) The Bulls were the mid-90s’ most dominant team because they had Michael Jordan. Wherever Wilt the Stilt went, his team was dominant. And so on.

Quarterbacks — football teams’ offensive leaders — are, by contrast, not necessarily the most athletically dominant person on the field. In fact, player roles are so varied that it’s hard to say who, exactly, the most athletically dominant person on the field is. Players like Brian Dawkins or Warren Sapp, who set themselves apart by their athletic dominance even for their positions, are at least as rare to come by as their counterparts in other professional sports. Instead, quarterbacks exert leadership by being intellectually dominant — the most skilled person on the field.

The best quarterbacks have to absorb, analyze, evaluate, and act on a tremendous amount of information, all in a jaw-droppingly short time. They have to communicate the play they’re supposed to be running from their coaches to their teammates. They have to read the opposing defense and adjust as they see fit. Sometimes, they’ll even change the play at the line of scrimmage — Peyton Manning excelled at this kind of cerebral quarterbacking. And they have to do all of this in the half-minute or so allotted between plays.

Stereotypes aside, it’s not at all surprising that football is becoming an increasingly international sport. For all that soccer styles itself the beautiful game, there is something truly beautiful in the way a football game is played — something truly beautiful in the way, any given game day, an athletically inferior team can dominate an athletically superior one, through smart coaching and smart play.

Outside the US (and Canada)

The NFL is largely saturated in its core markets. Theoretically, any 2-million-man metropolis can support an NFL team, and most of them have one. The only place the NFL can go, therefore, to expand its product and its brand is out of the US.

Canada has the CFL. There was a time, a while ago, when the CFL ran an American division that largely concentrated on those media markets the NFL ignores — cities like Memphis, Salt Lake, and Las Vegas. So, if not Canada, where else?

The answer has, increasingly, been London. It seems like two minor NFL teams play in London any given week. Wembley is regularly sold out for these affairs. (They’ve also been looking at Mexico City.)

The problem with this, however, is that — it’s London. There’s a five-hour time difference between there and anything on the East Coast, a significant logistical hurdle. Mexico City represents a natural place for the NFL to begin franchising because games between “Aztecs” and NFC/AFC West teams on a regular schedule are feasible. The solution to the London quandary is almost certainly a British league of some kind.

A “BFL”

British athletics have long operated under promotion and relegation — good teams rise to the top while bad ones sink down. It’s an effective system for managing parity (for the most part) while allowing managers to dream title dreams.

This is, in all likelihood, unworkable for a British American football league, however. There are few cities that can profitably support such a team to begin with; a deeper problem is that the support infrastructure (layers and layers of progressively more minor leagues) isn’t remotely as extensive for American football.

In fact, there are just three conurbations with more than two million people in the British Isles: London, Manchester, and Birmingham. If we’re generous and ask about urban areas with more than one million, there are just three more: Dublin (actually just shy of 2m), Leeds — yes, Leeds — and Glasgow. That’s six cities.

Let’s take a look at the other end. An eight-team league would have clean scheduling: the league is split into two 4-team divisions, with each team playing its division rivals twice, the other division once, and an alternating slate against one of the NFL’s four-team divisions, for a 14-game schedule. The division winners would then play each other for the championship.
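To make the arithmetic explicit, here is a minimal sketch (purely illustrative; the division sizes and the visiting NFL slate are just the assumptions described above) of the per-team game counts this structure implies.

```kotlin
// Minimal sketch of the per-team schedule arithmetic for the hypothetical
// BFL structure described above: two 4-team divisions, plus a visiting
// 4-team NFL division each season. Purely illustrative.
fun main() {
    val divisionSize = 4
    val intraDivisionGames = (divisionSize - 1) * 2  // 3 division rivals, home and away
    val crossDivisionGames = divisionSize            // the other BFL division, once each
    val nflSlateGames = divisionSize                 // one four-team NFL division, once each

    val total = intraDivisionGames + crossDivisionGames + nflSlateGames
    println("Intra-division games: $intraDivisionGames")  // 6
    println("Cross-division games: $crossDivisionGames")  // 4
    println("NFL slate games:      $nflSlateGames")       // 4
    println("Season total:         $total")               // 14
}
```

Run as written, this prints a 14-game season: six intra-division games, four cross-division games, and four against the visiting NFL division, half of which would be at home.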

So we can put two teams in London — of opposite divisions, of course — and then one each in Manchester, Birmingham, Dublin, Leeds, Glasgow, and … somewhere else. (Liverpool? Belfast?)

There is a subtle beauty in this system. First of all, eight teams is probably the smallest league you can field and still maintain competitiveness (at least, in American football). Second, you guarantee that each team hosts an NFL opponent twice each season. This serves two roles: two guaranteed sellouts for every BFL team every season (c’mon, a mediocre AFC South divisional game sold Wembley out this year), and a degree of legitimacy for the expansion teams (because they are given the opportunity to win against NFL teams). It’s an excellent setup for converting known intermittent popularity into permanent new fanbases.

It’s also an expandable system. Is the BFL entrenched and profitable? Perfect, let’s launch the same program in France — Italy — the Iberian peninsula — greater Germany — and so on. Something similar can be applied in Latin America and the Far East. Over time, the Super Bowl simply becomes the oldest of a set of regional championships and a dedicated world championship is needed.

But the thing is — whatever your opinions about the game — as a business, the NFL needs to expand its markets, sustainably. And that means figuring out how to develop secondary leagues abroad. It’s already a continental-scale league as things stand.

Switch Thoughts

Last week, Nintendo announced their next-generation console: the Switch.

Nintendo is in an intriguing position in the console wars — technically, the Wii U was the first console of the current generation, which makes the Switch the last console of its generation. By having two consoles out in a single generation, Nintendo now has a clear innovation edge on its competitors. The Switch will have to compete with the PS4 and Xbox One for, most likely, its entire run.

Like the Wii, though, the Switch is something different. Sony and Microsoft consoles are little-changed from the strategy that won them success in the late 1990s and early 2000s: being little more than stripped-down gaming towers. But the Switch is a bipartite system with a console component and a mobile component. This alone makes its competitors look dated, if not outright obsolete.

The core of the system is a thin tablet. Augmenting that are four key peripherals: (1) the dock, which functions as a hybrid charging port/TV data transmitter (probably with 720p-1080p upscaling), (2) left and (3) right “Joy-Con” controllers, and (4) a Joy-Con grip. (A fifth peripheral is a Pro Controller that is nearly identical to the ergonomic Xbox controller layout.)

After the primary tablet unit, the Joy-Cons are the Switch’s second most arresting feature. They can be slotted into the dummy grip for console play, or into either side of the Switch itself to play like a classic mobile gaming system. They can also be used independently, like the Wii’s motion-based control layout, or even be split into two controllers for local multiplayer. This gives the basic system unparalleled versatility, natively supporting every gameplay style any Nintendo game has ever used.

Except for one. The Switch doesn’t seem to currently support DS-like gameplay.

The Switch’s Potential

My goal here, however, is to suggest a potential design philosophy behind the Switch. Obviously, the semi-mobile platform makes traditional console gaming obsolete. It implies that the next video game generation will see the Xbox merge with the Surface, and the Playstation with the Xperia, as the most effective way to compete with the Switch and its derivatives. That is: the Switch is leading the way in a tablet-console merger.

Here we must ask what the Switch will run on. Initiating the merger is one thing; following through, quite another. Nintendo must be well aware of the kind of mergers the Switch will precipitate — PC and Xbox games will merge, and Sony’s Xperia tablet line will by necessity run Playstation games. A video game system that looks like a tablet is different from a tablet system that plays video games, and Nintendo’s competitors will be able to offer the latter. What about Nintendo?

A huge part of this will hinge on the OS. While Android is the dominant smartphone OS, the tablet game is a 3-way race between it, iOS, and Windows. And Nintendo has little brand recognition as a generalized tech company the way Apple does. That is: a custom OS essentially locks the Switch (and its successors) into being a video game system that looks like a tablet, but an Android-based OS makes it a tablet that plays video games — a critical competitive edge once the novelty has worn off.

The reason is: running Android unlocks a lot of doors with relatively limited downside. With it, the Switch automatically comes with full access to Google Play and its wealth of apps. Without it, Nintendo must either develop substitutes in-house or admit that, at the end of the day, the Switch is fundamentally a toy. With it, your Switch becomes the only tablet you ever need to carry with you. Without it, it’s sharing space with your favorite Windows/iPad/Droid tablet.

Yes, running Droid raises the specter of easily-ported games, but this can be overcome with a custom peripheral that the games themselves are loaded onto — is this the reason behind the cartridge’s return? And consider this: porting games is essentially a rewriting job. For the last three generations or so, Nintendo has lagged in the porting game because of its often-inferior specs, a deal-breaker in a market where porting a game is expensive.

Running the Switch on Android makes porting games cheap. Not in this generation, but the next, when the Playstation and Xperia are likely to merge. A third-party title written for the Switch can have its core built around a generalized Android release, with extra features for the Switch’s unique capabilities. Switch games become, in this environment, Android games with extra features. And, if Playstation games soon follow, this leaves the Xbox at a tremendous disadvantage: while it may be cheap to port releases for Nintendo and Sony (remember, they’re the same core for the same OS in the same languages, just with slightly different specs, storage media, and peripherals in mind), it’ll be far more expensive to do so for Xbox (same core on different OSes in different languages for similar specs, storage media, and peripherals).
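As a purely hypothetical sketch of that shared-core-plus-platform-extras model (the interfaces and class names below are invented for illustration; they are not real Nintendo, Sony, or Google APIs), the porting story might look something like this:

```kotlin
// Hypothetical illustration of the "shared Android core + platform extras"
// porting model described above. All names here are invented for this sketch;
// they do not correspond to real Nintendo, Sony, or Google APIs.
interface PlatformFeatures {
    fun supportsDetachableControllers(): Boolean
    fun rumble(intensity: Float)
}

// Baseline implementation for a generic Android tablet release.
class GenericAndroidFeatures : PlatformFeatures {
    override fun supportsDetachableControllers() = false
    override fun rumble(intensity: Float) {
        println("standard vibration at $intensity")
    }
}

// Hypothetical Switch-specific implementation layered on the same core.
class SwitchFeatures : PlatformFeatures {
    override fun supportsDetachableControllers() = true
    override fun rumble(intensity: Float) {
        println("Joy-Con haptics at $intensity")
    }
}

// The game's core logic is written once against the interface; "porting"
// reduces to swapping in a different PlatformFeatures implementation.
class Game(private val platform: PlatformFeatures) {
    fun onPlayerHit() {
        if (platform.supportsDetachableControllers()) platform.rumble(0.8f)
    }
}

fun main() {
    Game(SwitchFeatures()).onPlayerHit()         // Switch build: Joy-Con haptics
    Game(GenericAndroidFeatures()).onPlayerHit() // generic tablet build: branch not taken
}
```

Under that assumption, a port is mostly a matter of swapping the platform layer, which is exactly the cost advantage the paragraph above describes.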

Needing to spend less on tedious porting overhead, Japanese developers — those most inclined to eschew the Xbox — have a competitive advantage in this environment, while American ones — who usually have to co-develop for Sony and Microsoft to begin with — have a competitive disadvantage. There is a very real risk embedded in the Switch that Microsoft becomes the 2000s Nintendo of the 2020s — dependent on its first- and second-party IP, as few new third-party houses are willing to expend the resources on developing for both it and its Japanese competitors.

A Path Forward for Nintendo

If the Switch is a true tablet, what does that imply for the DS? Nintendo has some twenty-five years of portable device experience embedded in its Game Boy/DS product line, long the most dominant in the market. And recall that the Switch does not seem designed to support DS-style gameplay (where the Wii U was an experiment to bring it to the console).

There are a lot of companies that make both phones and tablets. Apple may be the most famous, with its iPhones and iPads, but nearly every major Android smartphone maker also makes tablets. Windows tablets don’t have nearly the market reach Microsoft wanted precisely because most tablet makers develop their tablets from their phones’ core architecture — not from their towers’. (And how many makers even make towers anymore, anyway?)

Recall here that, while the Switch may be a mobile platform, it isn’t as mobile as the pocket-sized Game Boy/DS line. And if tablets are often matched with smartphones … hmm …

Phones and tablets usually have similar architecture bases. So an Android Switch isn’t just a well-positioned gaming tablet — it’s also the same basic architecture that you would need for a smaller platform. The 3DS is an aging system. Could we see a “Nintendo Phone” in the cards?

It really makes sense, if you think about it. A Nintendo Phone gives Nintendo a presence in the smartphone/tablet market that computer-derived devices are converging on. It forces Sony to essentially integrate similar functionality into its smart-devices. And it deals Microsoft another setback — the Windows Phone’s failure still stings — as it’s unable to fully migrate to the new video-game-enabled devices that Nintendo is producing.

Moreover, the Nintendo Phone gives full capability for single-screen touchscreen games. And it works as a second-screen peripheral for the Switch. With its own miniaturized Joy-Cons, the Nintendo Phone and Switch can work in concert to produce DS-like gameplay.

Two devices able to produce three (console/portable, touchscreen, DS) game types — as well as being go-to devices for your daily life. No doubt, Nintendo sees how Apple has achieved near-total vendor lock-in. How better to market your devices to similar effect when your killer apps are essentially built into your brand?

Negative Charisma

Perhaps one of the downsides of republican governments is that their politics are dependent on charismatic politicians. Rule in republics is by the consent of the ruled (rather than by e.g. force, as in a dictatorship, or heredity, as in a monarchy), and every republican system — both historic and modern — has a periodic reaffirmation of that consent. This is an excessively technical and theoretical way of talking about elections.

Politicians depend on charisma to get elected and re-elected. An uncharismatic politician will never be able to convert oratory into votes. And charisma is not a learned skill: there is a distinct difference between naturally charismatic people and people who have learned to mimic naturally charismatic people. However, at the same time, all charismatic people — by simple dint of standing out in the crowd — will win both adorers and adversaries. In republics, having enough adorers to cancel out adversaries and then some is what gets you elected.

In 2007, the Huffington Post published an opinion piece suggesting that Hillary Clinton has “negative charisma”, in the sense that she has the opposite of charisma. Its author is right: Hillary is not exactly charismatic. She runs tough elections but is consistently highly rated once in office. For her, elections are — for all intents and purposes — a tedious chore to get through before returning to the real business of government, i.e. governing. She has largely succeeded so far by more skillfully mimicking naturally charismatic people than nearly anybody else in existence. But she is not naturally charismatic.

This is not, however, the sense I have in mind when I suggest “negative charisma”. If the positive effect of charisma is an innate ability to win friends and influence people, then the negative effect of charisma is an innate ability to win enemies and influence people. That is, a negatively charismatic person is someone whose natural charisma acts to their detriment rather than to their benefit. A negatively charismatic person is inherently, deeply self-sabotaging.

Donald Trump Is Negatively Charismatic

While the Constitution outlines the bare minimum to be qualified for the Presidency — according to Article II, a President must be a natural-born U.S. citizen, at least thirty-five years old, and an American resident for at least the past fourteen years — in practice we also expect our Presidents to have significant political experience, the ability to fund a campaign, and the charisma needed to win. Governors and Senators most frequently win major-party nominations for this reason. They fulfill both the explicit and the implicit requirements for winning the Presidency, having successfully run for — and held — statewide office.

Obviously the septuagenarian New York-born Trump, who has held primary residency in Trump Tower’s penthouse suite for about as long as I’ve been alive, fulfills the Constitution’s explicit requirements. He does not fulfill the usual implicit requirements. He has never held public office — nor did he ever seek to prior to announcing his candidacy. CEO of an ostensible real-estate company and a media personality, he has never demonstrated the ability to hold public office of any sort, much less the most public public office in the US. Most “candidates” like him go away quickly, and if he was — indeed — running as a publicity stunt for his brand (as most in the media seem to think), he had no reason to expect the course of his candidacy to run any differently.

Something different happened. By tapping a regressive-populist core and running against a monumentally divided field, Trump was already galloping towards the nomination by the time Ted Cruz was able to mount a counterattack. It wasn’t enough. And so the Republican establishment, the whole infrastructure built around the declining Reagan coalition, had to grit its teeth and nominate someone who had — remember, with zero experience — developed an Appalachian coalition with extensions into the Old South’s unreconstructed whites and the North’s undereducated ex-workforce. Of these, only one voting bloc was even R when Reagan was President.

This is evidence of powerful natural charisma. But for the negatively charismatic, the self-sabotage kicks in long before the ultimate goal is reached. And it’s inextricably linked to their personality. See, charisma requires treating other people as people to work. Outside of other white males, Trump can’t do that. He has repeatedly demonstrated failure to connect to people emotionally — a recent New York Times opinion piece suggests he has “narcissistic alexithymia” (not an easy-to-spell word!), an “inability to understand or describe the emotions in the self”. And so Trump treats people who do not look like him like, well, objects.

Consider the way he keeps referring to African-Americans as “the blacks”. Not just “blacks”. The blacks. Consider what he is saying, at a deep level. The English definite article is a subtle demonstrative — it points out. It selects an object, or class of objects. Not “some blacks”. “The blacks.” In doing so, Trump is quite literally distancing himself from black people. He is saying, implicitly, that he does not, at a fundamental level, consider black people, well, people — English actually has (at least) two noun classes, and the class that refers to other people behaves quite differently than the one that refers to (inanimate?) objects like, say, rocks. Trump refers to African-Americans more like rocks than people, and in so doing, casts a noun-class distinction that we never realized was there into stark relief.

At least he refers to women as people! It’s too bad his interest in them begins and ends with their appearance and genitalia. In Trump’s own little world, we can see a clear class progression: white males at the top of the hierarchy, white females naturally inferior but useful for *cough* certain tasks *cough*, and nonwhites — who might as well not even be human. This is fertile ground for rapidly building a populist coalition, one that may well hold together only as long as he’s leading it, but it flies in the face of the reality that is American demographics.

This is how charisma turns toxic. Real estate development was — and, in many ways, still is — a bit of an old boys’ club. Even a personality-driven show like The Apprentice can — and quite obviously did — mask elements of media personalities that would harm ratings. There is a reason why Trump is the world’s oldest adolescent. His dad was rich enough and he was just good enough a businessman to indulge in puerile power fantasies long past their natural sell-by date. His ephebophilia actually means his women, such as they are, are the ones with “sell-by dates”. Trump has never, in his life, ever needed to learn how to interact with other people as people and not mere tools.

Hillary is uncharismatic because she doesn’t intuitively know how to interact with other people as people. She knows this is important and works hard to overcome this weakness. But Trump has negative charisma because he does intuitively know how to interact with other people as people — what he does not see, or understand, is why it’s important. And it’s biting him in the ass.

EDIT 10/25: Note: I wrote this post just before Trump’s sex-assault allegations went public.

Lessons from Philadelphia Media

Philadelphia is shockingly barren of hard-hitting investigative journalism. The dominant newspaper, the Inquirer (locally the “Inky”), prefers to sit back, generally focusing its limited investigative resources on police issues. This is useful in its own way — because local media have a long history of holding the Philadelphia Police Department’s feet to the fire, police brutality issues here seem not to be as severe as those in e.g. Baltimore or St. Louis — but at the same time it has left deep shadows in which political corruption can thrive. Meanwhile, attempts at creating an alternative to the Inky (often with an investigative focus on political corruption) have not met with sustained success.

Perhaps the longest-lasting, the alternative weekly City Paper, was sold to the much less interesting, but more profitable, alt-weekly rag Philly Weekly a few years back and was excised from existence. City Paper had been — by far — the best source for local political news, and its writing pool easily boasted the best journalists in the city. After it went under, attempts at online platforms intensified. Patrick Kerkstra led the charge at Philadelphia magazine, developing a suite of daily blogs that mimicked newspaper sections — the front page, sports, real estate — and poaching the city’s best reporting talent (mostly from the recently-defunct City Paper) to run them. Meanwhile, PlanPhilly‘s erstwhile editor, Matt Golas, got local PBS affiliate WHYY to pick it up, and began reorganizing both it and WHYY’s Northwest Philly-focused outlet, Newsworks, into a journalism platform to rival the Inky’s.

Despite City Paper‘s untimely departure, the future of Philly investigative journalism — at least online — looked fairly bright in mid-2015.

Then — just as his efforts at WHYY were bearing fruit — Golas was forced out in late 2015. Kerkstra would follow a year later, as Philly mag’s showrunners decided to go in a different direction, favoring advertiser-pleasing copy over high-readership stories. That fallout has only just begun. And Philadelphia is left bereft of a high-quality investigative-journalism outlet — again.

Despite generations of reporters trying to change it, Philadelphia’s status quo has never favored investigative journalism. The “corrupt and content” city’s dominant paper, for more than a century, was the Philadelphia Evening Bulletin (often shortened to just the “Bulletin”). As its name implies, it never seems to have had much interest in investigative journalism, favoring instead a role as the dominant party machine’s mouthpiece. The Inky was merely a distant #2.

This all changed in the 1970s, when Knight Newspapers bought the Inky and heavily invested in it, modernizing its facilities and bringing in some of the country’s best investigative journalists. This new, more muckraking Inky quickly began to win Pulitzers — and readers. By the early 1980s, it had forced the staid Bulletin out of business entirely and became the Philadelphia region’s paper of record. Knight had believed in investigative news, and as the Inky’s editorial board was one of the last it had overhauled before merging with Ridder, it was one of the last that the new combined company would start tinkering with. Thus, the Inky carried on the Knight legacy through the 1980s — a period when it was arguably one of the country’s best papers.

By the early 1990s, however, the replacement of Knight editors with Knight Ridder ones had begun in earnest, and the paper’s quality had begun to suffer. Much like the Bulletin before it, the Inky stopped prioritizing muckraking. Investigative reporters moved on, into the alt-weekly scene or to friendlier paper-of-record locales. Readership and profitability began to suffer — unlike the Bulletin, the Inky did not have an enduring paper-of-record legacy, having been the city’s dominant paper for only a decade. At the direction of the powers-that-be at the very top, the Inky turned away from the brand it had successfully built over the previous twenty years, and contented corruption returned to the very top of the local media.

So, by the early 2000s, the paper was treading water when the bottom fell out of its revenue stream. Most people attribute the fall of American newspapers to the rise of the Internet. This is only half-true: it was the rise of Craigslist, in particular, that led to the collapse of the newspaper revenue model — which depended on classified advertising. Easily half, if not more, of that revenue was lost — irrevocably — in every market Craigslist established a beachhead in — and it established a beachhead in every market. Quickly. The Inky’s parent, Knight Ridder, began losing money, shedding staff, and was forced to pivot its revenue model towards retail advertising (the circulars and other junk in the middle, as well as on-page ads) even as competition diversified.

Knight Ridder merged with McClatchy in 2006, and the new owners spun off the parts of the portfolio that were either (a) weaker newspapers or (b) newspapers that did not fit the direction the new corporate parent wished to take. The Inky was one of those. Coming under the ownership of Philadelphia Media Holdings, its quality continued to worsen, sapping subscribers and readership revenue, in a penny-wise, pound-foolish attempt to trim its way to profitability. Finally, Comcast’s Gerry Lenfest stepped in and assumed control of the bankrupt paper, worried, perhaps, that it would go the way of the Times-Picayune and cease to be a daily affair.

It would be nice if the Inky became a bastion of investigative reporting again, but in all probability it won’t. Newspapers are not the only dominant media voices that tend to avoid investigation in the Philadelphia region. Action News, the dominant local news program, also follows Bulletin-esque editorial guidelines. Ironically enough, the best source for investigative local news is Fox 29, a stance so flagrantly at odds with its national showrunners’ that almost every Fox 29-Fox News interaction rapidly becomes painfully awkward to watch.

But there is a strange lesson to be had here. Doubtless, Gilded Age politicians and robber barons disliked muckrakers’ nosing around. The idea of a corrupt and content city with enabling media must have been intoxicating to these people. As TV replaced papers as the source of most people’s news, the trend towards showrunners replicating the ideas implicit in the Bulletin’s editorial guidelines — “the newspaper is the guest in the reader’s house; tell the news, nothing more, nothing less” — began to intensify in the more legitimate circuits. (It gave way to propaganda on Fox News; even liberally-focused MSNBC has yet to go so far down that route.) Corruption rages in the shade, and without muckraking, shadows grow deep.

So how do we monetize muckraking?

Decline and Fall

This past election season has felt truly surreal. Political commentators on both the left and the right understood as early as late 2012 that the Democrats would be vulnerable in 2016, and that a good Republican candidate, someone bland and vaguely Hispanic like Marco Rubio, one who could maintain the party’s core demographics while siphoning off some black and Latino votes, had a nonzero shot at tipping the scales — especially if the Democrats nominated Hillary Clinton.

The Democrats nominated Hillary Clinton.

So what did the Republicans do? They nominated a candidate who most observers agree is the single worst ever fielded by a major party in the United States of America. For a moment in July, Donald Trump seemed terrifyingly electable; that moment lasted about three days into the DNC. And then he went after Gold Star father Khizr Khan.

Ever since then, his campaign has been in a state of utter collapse. Trump, quite literally a textbook narcissist, has seen to it that he utterly dominates the news cycle. This is unfortunate for Republicans, because that dominance is rooted in petty attacks (like the one on Mr. Khan), with a heaping spoonful of scandal (like murky Russian ties) and controversy (like attempts to assay Trump’s true net worth in the increasingly conspicuous absence of his tax returns) — all of it leading pundits to call him a fascist while the Republicans’ moderate class runs from him. In droves.

Against all odds, Hillary Clinton, a candidate who against a normal opponent should receive 50% ± 1% of the popular vote, has opened up a commanding 8-point lead on Trump. Purely by staying away from the media. Against a campaigner as self-evidently incompetent as Trump, Clinton has an excellent chance — currently 26.8%* according to FiveThirtyEight — of winning by a landslide, a kind of victory Americans haven’t seen since the 1980s and one that many pundits did not even think possible in the modern, hyperpartisan political climate.

But if you think this is the Republicans’ bottom — hah! They haven’t even found their bottom yet!

The Green Screen

American politics have been cyclic, coinciding remarkably well with the Kondratieff cycle. The main political parties — the Democrats and Republicans — tend to assemble into coalitions during the primary and midterm phases, while the general election decides which coalition governs and which one opposes. These, in turn, tend to be focused around driving narratives — ideologies that animate coalitions for generations at a time.

The largest governing majorities — supermajorities in any sense of the word — were the Republican governing coalition of the Progressive Era and the Democratic New Deal coalition that followed. The post-Teddy Roosevelt Republicans were themselves a policy iteration on a Republican coalition that had largely stayed in power since 1865, mainly due to the era’s North-South politics, while the New Deal coalition continued to follow Progressive politics until the Civil Rights Act and the Southern Strategy fractured it.

It is also noteworthy that major governing coalitions coalesce around uniquely charismatic Presidents. One could therefore say that American politics divides into the Jefferson period, defined by Jefferson’s Democratic-Republicans and their opponents (first the Federalists, then the Whigs); the post-Lincoln period, defined by the loose ends Lincoln had left; the first and second Roosevelt periods, when progressives led the governing coalitions; and the Reagan period, which actually started when Nixon won the Presidency and may or may not have ended in the mid-2000s.

But charisma is a two-edged sword, and Trump is certainly charismatic. Like Teddy Roosevelt, Trump is giving voice to a marginal faction; unlike Roosevelt, who was essentially kicked upstairs into the vice-presidency, thereby allowing him to be in the right place at the right time to implement his agenda, Trump is trying to win the Presidency rather than inherit it.

Trump is far better at inheriting things than winning them.

Because the core of his support is the populist right (aka the alt-right, aka Neo-Nazis, aka proto-fascists), and because — unlike any of his interchangeable dozen-or-so opponents — he actually got his base fired up, Trump is hugely popular among a group of approximately the same relative size as UKIP’s (ex-?)base in Britain. But because he espouses this particular ideology to the exclusion of all others, he was electable in the primaries (in the sense Nixon was electable in ’68), yet is wholesale unelectable in the general, because the ideology he espouses is so profoundly foreign to everyone to his left.

Trump needed a good handler to become remotely electable in the general, but his narcissism demands sycophants. Manafort couldn’t handle him, and at this point his primary advisors are mediamen Steve Bannon (formerly of the execrable Breitbart News) and … Roger Ailes. The rest of his inner circle reads like a who’s-who of Republican washouts, and the party’s big-name operatives aren’t interested in his campaign.

Whither Now?

When Bannon replaced Manafort, the Washington Post asked whether it was because (1) Trump was a fool, or (2) he was making a post-election play. Greg Sargent, the writer, thinks the answer is (1) — and perhaps to Trump and Bannon, it is — but Roger Ailes — now formerly of Fox News due to a harassment scandal, remember — is much savvier and much more opportunistic.

I would not be remotely surprised if Ailes were just the first (or at least the first in a position to act on it) to read the tea leaves: if Trump appeals only to the regressive-populist alt-right and is toxic to everybody else, then simply by sticking to his message he can attract a following of (monetizable) zealous converts. The seed direct-mailing list is there, and Trump generates a not-insignificant amount of publicity — indeed, his own self-promotion is what is killing him this election — putting many of the ingredients in place. Lure in some known Trumpian TV and radio personalities, like Pat Buchanan and Sean Hannity, and — voilà!

But at this point the pattern starts to become clear. This “Trump News Network”, run by Bannon and Ailes, legitimizes the alt-right, in the process continuing to drive away social conservatives, libertarians, and the tattered last remnants of Northeastern Republicans. The alt-right are American nationalists, but that poses a problem all its own: nationalism is tied to ethnicity, while American nationhood … isn’t. It is precisely because most Americans** agree, at some level, that openness to diversity is a fundamental defining feature of being American — an idea no nationalist anywhere would ever be caught dead espousing — that Trump’s politics and agenda are so fundamentally foreign to Democrats and non-Trumpian Republicans alike.

A permanent Trump coalition effectively precludes the Republicans from retaking the White House in 2020, possibly ever. And Trump himself would continue to help the internal strife along. One side or the other*** will decide they’ve had enough and form their own third party, and that will be the end of the postwar Republican Party, the party of the Reagan governing coalition.

A New Start

It can’t happen soon enough! The Reagan coalition is dying. Literally. It has failed miserably at attracting young voters, or at winning black or Latino votes in an increasingly diverse American society, and its core voter is essentially an Angry White Pensioner. The 2012 Republican autopsy said as much. And Trump’s rise — and that of the alt-right in general — goes backwards rather than forwards, firing up the core at the expense of alienating literally everyone else. Clearly, the Republicans — or their successors — will need a new base and a new charismatic politician to build a platform around.

It will take a while. Ike was a charismatic politician, but he didn’t do anything to rebuild the base; rather, after the 1932 election, the modern Republican governing majority did not get its charismatic leader for 48 years — 12 elections!

But the Republicans, once they’re severed from the toxic Trumpist wing, might actually be able to start attracting new voters. As a friend of mine puts it, the Reagan coalition is failing because it was made up of voters “on the wrong side of the animating question of postwar American history”, and the sooner it realizes this, the better. Because only once they stop being in denial (and the Trumpists are clearly in denial, to the tune of a functionally nonexistent minority vote) …

… would they be able to admit that yes, their last generation of governance was based around a coalition on the losing side of what is now a half-century-old issue, and that the so-called Party of Ideas needs some damned new ideas damned fast if they hope to remain relevant at all.

But they can’t do that until they finally succeed in cutting out the cancer at their core, which itself won’t happen until their activist base stops misidentifying what their party’s cancer actually is (hint: look in the mirror). Fortunately for all involved, Donald Trump has made it both obvious and damned easy. Republican leadership needs to take this chance, and recognize that it’s okay to lose the next three cycles or so if the party (or what remains of it) comes out stronger in the end.


* This number is derived by taking the average of the three forecast models’ chances for a Clinton landslide. Amazingly, the polls-only model, not the nowcast, shows Clinton furthest in the lead.

** I.e. Americans who aren’t Trump supporters.

*** Most likely, either (a) because the Republicans get their shit together and find a candidate who can ensure the Trumpian nominee (probably Donald J. Trump) doesn’t get nominated in 2020, leading to The Donald making his own run and fragmenting the remnants of the Republican base, or (b) because the other Republicans finally have enough and defect en masse … possibly to the Libertarians?