Shutdowns are as American as apple pie − in the UK and elsewhere, they just aren’t baked into the process

Source: The Conversation – USA – By Garret Martin, Hurst Senior Professorial Lecturer, Co-Director Transatlantic Policy Center, American University School of International Service

The obligatory showing of the red briefcase containing budget details is as exciting as it gets in the U.K. Justin Tallis – WPA Pool/Getty Images

When it comes to shutdowns, the U.S. is very much an exception rather than the rule.

On Oct. 1, 2025, hundreds of thousands of federal employees were furloughed as the business of government ground to a halt. With negotiations in Congress seemingly deadlocked over a funding deal, many political watchers are predicting a lengthy period of government closure.

According to the nonpartisan nonprofit Committee for a Responsible Federal Budget, the latest shutdown represents the 20th such funding gap since 1976.

But it doesn’t have to be like this – and in most countries, it isn’t. Other Western democracies experience polarization and political turmoil, too, yet they do not face this problem. Take, for example, the U.K., traditionally one of Washington’s closest allies and home to the “mother of parliaments.”

In the British system, government shutdowns just don’t happen – in fact, there has never been one and likely never will be.

The U.S. Capitol Visitors Center is closed to visitors during the federal government shutdown on Oct. 1, 2025.
Chip Somodevilla/Getty Images

So why do they occur in Washington but not London? Essentially, it comes down to four factors: the relative power of the legislature; how easy it is to pass a budget; the political stakes at play; and distinctive appropriation rules.

1. Legislative power

There are significant differences in how the legislatures of the U.K. and U.S. shape the budgetary process.

In the U.K., only the executive branch – the party or coalition in power – has the authority to propose spending plans. Parliament, which consists of members from all political parties, maintains an oversight and approval role, but it has very limited power over the budgetary timeline or to amend spending plans. This is a stark contrast with the U.S., where Congress – which may be split or controlled by a party other than the president’s – plays a far more consequential role.

The U.S. president starts the budget process by laying out the administration’s funding priorities. Yet, the Constitution grants Congress the power of the purse – that is, the power to tax and spend.

Moreover, past legislation has bolstered congressional control. The 1974 Congressional Budget Act helped curtail presidential involvement in the budgeting process, giving Congress more authority over the timeline. That gave Congress more power but also offered it more opportunities to bicker and derail the budgetary process.

2. Thresholds to pass a budget

Congress and the U.K. Parliament also differ when it comes to their voting rules. Passing the U.S. budget is inherently more complicated, as it requires the support of both the Senate and the House of Representatives.

In Parliament, however, the two houses – the elected House of Commons and unelected House of Lords – are not equally involved. The two Parliament Acts of 1911 and 1949 limited the power of the House of Lords, preventing it from amending or blocking laws relating to budgeting.

Additionally, approving the budget in Westminster requires only an absolute majority of votes in the House of Commons. That tends to be a straightforward hurdle to clear: the party in power typically commands a majority of votes in the chamber or can muster one with the support of smaller parties. It is not so easy in Congress. While a simple majority suffices in the House of Representatives, the Senate still has a 60-vote requirement to close debate before proceeding to a majority vote to pass a bill.

3. Political stakes

U.S. and U.K. politicians do not face the same high stakes over budget approval. Members of Congress may eventually pay a political price for how they vote on the budget, but there is no immediate threat to their jobs. That is not so in the U.K.

Indeed, the party or coalition in power in the U.K. must maintain the “confidence” of the House of Commons to stay in office. In other words, it needs to command the support of a majority in key votes. U.K. governments can actually fall – be forced to resign or to call new elections – if they lose formal votes of confidence. Since confidence is also implied in other major votes, such as those on the annual budget proposals, the stakes are higher for members of Parliament. They have tended to think twice before voting against a budget, for fear of triggering a dissolution of Parliament and new elections.

4. Distinctive appropriation rules

Finally, rules about appropriation also set the U.S. apart. For many decades, federal agencies could still operate even when funding bills had not been passed. That, however, changed with a ruling by then-Attorney General Benjamin Civiletti in 1980. He determined that it would be illegal for the federal government to spend money without congressional approval.

That decision has had the effect of making shutdowns more severe. But it is not a problem that the U.K. experiences because of its distinct rules on appropriation. So-called “votes on account” allow the U.K. government “to obtain an advance on the money they need for the next financial year.”

This is an updated version of an article that was first published by The Conversation U.S. on Sept. 28, 2023.

The Conversation

Garret Martin receives funding from the European Union for the Transatlantic Policy Center, which he co-directs.

ref. Shutdowns are as American as apple pie − in the UK and elsewhere, they just aren’t baked into the process – https://theconversation.com/shutdowns-are-as-american-as-apple-pie-in-the-uk-and-elsewhere-they-just-arent-baked-into-the-process-266553

Where George Washington would disagree with Pete Hegseth about fitness for command and what makes a warrior

Source: The Conversation – USA – By Maurizio Valsania, Professor of American History, Università di Torino

On Dec. 4, 1783, after six years fighting against the British as head of the Continental Army, George Washington said farewell to his officers and returned to civilian life. Engraving by T. Phillibrown from a painting by Alonzo Chappell

As he paced across a stage at a military base in Quantico, Virginia, on Sept. 30, 2025, Secretary of Defense Pete Hegseth told the hundreds of U.S. generals and admirals he had summoned from around the world that he aimed to reshape the military’s culture.

Ten new directives, he said, would strip away what he called “woke garbage” and restore what he termed a “warrior ethos.”

The phrase “warrior ethos” – a mix of combativeness, toughness and dominance – has become central to Hegseth’s political identity. In his 2024 book “The War on Warriors,” he insisted that the inclusion of women in combat roles had drained that ethos, leaving the U.S. military less lethal.

In his address, Hegseth outlined what he sees as the qualities and virtues the American soldier – and especially senior officers – should embody.

On physical fitness and appearance, he was blunt: “It’s completely unacceptable to see fat generals and admirals in the halls of the Pentagon and leading commands around the country and the world.”

He then turned from body shape to grooming: “No more beardos,” Hegseth declared. “The era of rampant and ridiculous shaving profiles is done.”

As a historian of George Washington, I can say that the commander in chief of the Continental Army, the nation’s first military leader, would have agreed with some of Secretary Hegseth’s directives – but only some.

Washington’s overall vision of a military leader could not be further from Hegseth’s vision of the tough warrior.

U.S. Secretary of Defense Pete Hegseth speaks to senior military leaders at Marine Corps Base Quantico on Sept. 30, 2025.
Andrew Harnik/Getty Images

280 pounds – and trusted

For starters, Washington would have found the concern with “fat generals” irrelevant. Some of the most capable officers in the Continental Army were famously overweight.

His trusted chief of artillery, Gen. Henry Knox, weighed around 280 pounds. The French officer Marquis de Chastellux described Knox as “a man of thirty-five, very fat, but very active, and of a gay and amiable character.”

Others were not far behind. Chastellux also described Gen. William Heath as having “a noble and open countenance.” His bald head and “corpulence,” he added, gave him “a striking resemblance to Lord Granby,” the celebrated British hero of the Seven Years’ War. Granby was admired for his courage, generosity and devotion to his men.

Washington never saw girth as disqualifying. He repeatedly entrusted Knox with the most demanding assignments: designing fortifications, commanding artillery and orchestrating the legendary “noble train of artillery” that brought cannon from Fort Ticonderoga to Boston.

When he became president, after the Revolution, Washington appointed Knox the first secretary of war – a sign of enduring confidence in his judgment and integrity.

Beards: Outward appearance reflects inner discipline

As for beards, Washington would have shared Hegseth’s concern – though for very different reasons.

He disliked facial hair on himself and on others, including his soldiers. To Washington, a beard made a man look unkempt and slovenly, masking the higher emotions that civility required.

Beards were not signs of virility but of disorder. In his words, they made a man “unsoldierlike.” Every soldier, he insisted, must appear in public “as decent as his circumstances will permit.” Each was required to have “his beard shaved – hair combed – face washed – and cloaths put on in the best manner in his power.”

For Washington, this was no trivial matter. Outward appearance reflected inner discipline. He believed that a well-ordered body produced a well-ordered mind.

To him, neatness was the visible expression of self-command, the foundation of every other virtue a soldier and leader should possess.

That is why he equated beards and other forms of unkemptness with “indecency.” His lifelong battle was against indecency in all its forms. “Indecency,” he once wrote, was “utterly inconsistent with that delicacy of character, which an officer ought under every circumstance to preserve.”

More statesman than warrior

By “delicacy,” Washington meant modesty, tact and self-awareness – the poise that set genuine leaders apart from individuals governed by passions.

For him, a soldier’s first victory was always over himself.

“A man attentive to his duty,” he wrote, “feels something within him that tells him the first measure is dictated by that prudence which ought to govern all men who commits a trust to another.”

In other words, Washington became a soldier not because he was hotheaded or drawn to the thrill of combat, but because he saw soldiering as the highest exercise of discipline, patience and composure. His “warrior ethos” was moral before it was martial.

Washington’s ideal military leader was more statesman than warrior. He believed that military power must be exercised under moral constraint, within the bounds of public accountability, and always with an eye to preserving liberty rather than winning personal glory.

In his mind, the army was not a caste apart but an instrument of the republic – an arena in which self-command and civic virtue were tested. Later generations would call him the model of the “republican general”: a commander whose authority rested not on bluster or bravado but on composure, prudence and restraint.

That vision was the opposite of the one Pete Hegseth performed at Quantico.

Washington formally taking command of the Continental Army on July 3, 1775, in Cambridge, Mass.
Currier and Ives image, photo by Heritage Art/Heritage Images via Getty Images

Discipline and steadiness, not fury and bravado

The “warrior ethos” Hegseth celebrates – loud, performative – was precisely what Washington believed a soldier must overcome.

In March 1778, after Marquis de Lafayette abandoned an impossible winter expedition to Canada, Washington praised caution over juvenile bravado.

“Every one will applaud your prudence in renouncing a project in which you would vainly have attempted physical impossibilities,” he wrote from the snows of Valley Forge.

For Washington, valor was never the same as recklessness. Success, he believed, depended on foresight, not fury, and certainly not bravado.

The first commander in chief cared little for waistlines or whiskers, in the end; what concerned him was discipline of the mind. What counted was not the cut of a man’s figure but the steadiness of his judgment.

Washington’s own “warrior ethos” was grounded in decency, temperance and the capacity to act with courage without surrendering to rage. That ideal built an army – and in time, a republic.

The Conversation

Maurizio Valsania does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Where George Washington would disagree with Pete Hegseth about fitness for command and what makes a warrior – https://theconversation.com/where-george-washington-would-disagree-with-pete-hegseth-about-fitness-for-command-and-what-makes-a-warrior-266530

Moral panics intensify social divisions and can lead to political violence

Source: The Conversation – USA – By Ron Barrett, Professor of Anthropology, Macalester College

The day before Charlie Kirk was assassinated, I was teaching a college class on science, religion and magic. Our class was comparing the Salem witch trials of the 1690s with the McCarthy hearings of the early 1950s, when U.S. democratic processes were eclipsed by the Red Scare of purported communist infiltration.

The aim of the class was to better understand the concept of moral panics, which are societal epidemics of disproportionate fear of real or perceived threats. Such outsized fear can often lead to violence or repression against certain socially marginalized groups. Moral panics are recurring themes in my research on the anthropology of fear and discrimination.

Our next class meeting would apply the moral panic concept to a recent example of political violence. Tragically, there were many of these examples to choose from.

Minnesota State Representative Melissa Hortman and her husband were assassinated on June 14, 2025, which happened to be the eighth anniversary of the congressional baseball shootings in which U.S. House Majority Whip Steve Scalise and three other Republicans were wounded. These shootings were among at least 15 high-profile instances of political violence since Rep. Gabby Giffords was severely wounded in a 2011 shooting that killed six and wounded another 13 people.

Seven of these violent incidents occurred within the past 12 months. Kirk’s killing became the eighth.

In most of these cases, we may never fully know the perpetrator’s motives. But the larger pattern of political violence tracks with the increasing polarization of American society. While researching this polarization, I have found recurring themes of segregation and both the dehumanization and disproportionate fear of people with opposing views among liberals and conservatives alike.

Segregation and self-censorship

The first ingredient of a moral panic is the segregation of a society into at least two groups with limited contact between them and an unwillingness to learn from one another.

In 17th-century Salem, Massachusetts, the social divisions were long-standing. They were largely based on land disputes between family factions and economic tensions between agriculturally based village communities and commercially based town communities.

Within these larger groups, a growing number of widowed women had become socially marginalized for becoming economically independent after their husbands died in colonial wars between New England and New France. And rumors of continuing violence led residents in towns and villages to avoid Native Americans and new settlers in surrounding frontier areas. Salem was divided in many ways.

The painting ‘Trial of George Jacobs of Salem for Witchcraft’ by Tompkins Harrison Matteson. Jacobs was one of the few men accused of witchcraft.
Tompkins Harrison Matteson/Library of Congress via AP

Fast forward to the end of World War II. That’s when returning American veterans used their benefits to settle into suburban neighborhoods that would soon be separated by race and class through zoning policies and discriminatory lending practices. This set the stage for what has come to be called The Big Sort, the self-segregation of people into neighborhoods where residents shared the same political and religious ideologies.

It was during the early stages of these sorting processes that the Red Scare and McCarthy hearings emerged.

The Big Sort turned digital in the early 2000s with the rise of online information and social media platforms with algorithms that conform to the particular desires and biases of their user communities.

Consequently, it is now easier than ever for conservatives and liberals to live in separate worlds of their own choosing. Under these conditions, Democrats and Republicans tend to exaggerate the characteristics of the other party based on common stereotypes.

Dehumanization and discrimination

Dehumanization is perhaps the most crucial ingredient of a moral panic. It involves labeling people according to categories that deprive them of positive human qualities. This labeling process is often conducted by “moral entrepreneurs” – people invested by their societies with the authority to make such claims in an official, unquestionable and seemingly objective way.

In 1690s Massachusetts, the moral entrepreneurs were religious authorities who labeled people as satanic witches and killed many of them. In 1950s Washington, the moral entrepreneurs were members of Congress and expert witnesses who labeled people Soviet collaborators and ruined many of their lives.

In the 21st century, the moral entrepreneurs include media personalities and social influencers, as well as the nonhuman bots and algorithms whose authority derives from constructing the illusion of broad consensus.

Under these conditions, many U.S. liberals and conservatives regard their counterparts as savage, immature, corrupt or malicious. Not surprisingly, surveys reveal that animosity between conservatives and liberals has been higher over the past five years than at any other time since the measurements began in 1978.

Adding to the animosity, dehumanization can also justify discrimination against a rival group. This is shown in social psychology experiments in which conservatives and liberals discriminate against one another more strongly than they discriminate by race when deciding on scholarships and job opportunities. Such discrimination fuels further animosity.

Exaggerating fear

There is a fine line between animosity and disproportional fear. The latter can lead to extreme policies and violent actions during a moral panic.

Such fear often takes the form of perceived threats. Rachel Kleinfeld, a scholar who studies polarization and political violence, says that one of the best ways to rally a political base is to make them think they are under attack by the other side. She says that “is why ‘They are out to take your x’ is such a time-honored fundraising and get-out-the-vote message.”

In the past few years, the “x” that could be taken has escalated to core freedoms and personal safety, threats which could easily trigger widespread fear on both sides of the political divide.

But the question remains whether exaggerated fears are sufficient to trigger political violence. Are assassins like Kirk’s killer simply pathological outliers among agitated but otherwise self-restrained populations? Or are they sensitive indicators of a looming social catastrophe?

The House Committee on Un-American Activities investigates movie producer Jack Warner, right, in Washington on Oct. 20, 1947.
AP Photo

Countering the panic

We do not have the answers to that question yet. But in the interim, there are efforts in higher education to reduce animosity and encourage constructive interactions and discussion between people with different perspectives.

A nonpartisan coalition of faculty, students and staff – known as the Heterodox Academy – is promoting viewpoint diversity and constructive debates on over 1,800 campuses. The college where I teach has participated in the Congress to Campus program, promoting bipartisan dialogue by having former legislators from different parties engage in constructive debates with one another about timely political issues. These debates serve as models for constructive dialogue.

It was in the spirit of constructive dialogue that my class debated whether the Kirk assassination could be explained as the product of a moral panic. Many agreed that it could, and most agreed it was probably an assault on free speech despite having strong objections to Kirk’s views. The debate was passionate, but everyone was respectful and listened to one another. No witches were to be found in the class that day.

The Conversation

Ron Barrett does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Moral panics intensify social divisions and can lead to political violence – https://theconversation.com/moral-panics-intensify-social-divisions-and-can-lead-to-political-violence-265238

Trump scraps the nation’s most comprehensive food insecurity report − making it harder to know how many Americans struggle to get enough food

Source: The Conversation – USA (2) – By Tracy Roof, Associate Professor of Political Science, University of Richmond

Nearly 1 in 7 Americans had trouble consistently getting enough to eat in 2023. Patrick Strattner/fStop via Getty Images

The Trump administration announced on Sept. 20, 2025, that it plans to stop releasing food insecurity data, which the federal government has tracked and analyzed for the past three decades. The final statistics to be published will cover 2024. The Conversation U.S. asked Tracy Roof, a political scientist who has researched the history of government nutrition programs, to explain the significance of the U.S. Household Food Security Survey and what might happen if the government discontinues it.

What’s food insecurity?

The U.S. Department of Agriculture defines food security as “access by all people at all times to enough food for an active, healthy life.”

People who are food insecure are unsure they can get enough food or unable to get enough food to meet these basic needs because they can’t afford it.

How does the government measure it?

The USDA has collected data on food insecurity since the mid-1990s. It includes the share of the population that is food insecure and a subset of this group considered to have very low food security.

People who are food insecure may not significantly reduce how much they eat, but they are likely to eat less balanced meals or lower-quality food. People with very low food security report eating less altogether, such as by skipping meals or eating smaller meals.

These statistics are based on answers to questions the USDA adds to the Current Population Survey, which the Census Bureau administers every December. There are 10 questions in the survey. Households with children are asked four more.

The questions inquire about access to food, such as whether someone has worried in the past year that their food would run out before they had enough money to buy more, or how frequently they have skipped meals, could not afford balanced meals, or felt hunger.

The U.S. food insecurity rate stood at 13.5% in 2023, the most recent year for which data is currently available. The final annual food security report, expected in October, will be issued for 2024 – based on data collected during the Biden administration’s last year.

Why did the government start measuring it?

Calls for creating the food stamp program in the 1960s led to an intense debate in Washington about the extent of malnutrition in the U.S. Until then, the government did not consistently collect reliable or national statistics on the prevalence of malnutrition.

Those concerns reached critical mass when the Citizens’ Board of Inquiry into Hunger and Malnutrition, launched by a group of anti-hunger activists, issued a report in 1968, Hunger USA. It estimated that 10 million Americans were malnourished.

That report highlighted widespread incidence of anemia and protein deficiency in children. That same year, a CBS documentary, “Hunger in America,” shocked Americans with disturbing images of malnourished children. The attention to hunger resulted in a significant expansion of the food stamp program, but it did not lead to better government data collection.

The expansion of government food assistance all but eliminated the problem of malnutrition. In 1977, the Field Foundation sent teams of doctors into poverty-stricken areas to assess the nutritional status of residents. Although there were still many people facing economic hardship, the doctors found little evidence of the nutritional deficiencies they had seen a decade earlier.

Policymakers struggled to reach a consensus on the definition of hunger. But the debate gradually shifted from how to measure malnutrition to how to estimate how many Americans lacked sufficient access to food.

Calls for what would later be known as food insecurity data grew after the Reagan administration scaled back the food stamps program in the early 1980s. Despite the unemployment rate soaring to nearly 11% in 1982 and a steep increase in the poverty rate, the number of people on food stamps had remained relatively flat.

Although the Reagan administration denied that there was a serious hunger problem, news reports were filled with stories of families struggling to afford food.

Many were families of unemployed breadwinners who had never needed the government’s help before. During this period, the number of food banks grew substantially, and they reported soaring demand for free food.

Because there was still no government data available to resolve the dispute, the Reagan administration responded to political pressure by creating a task force on hunger in 1983. It called for improved measures of the nutritional status of Americans.

The task force also pointed to the difference between “hunger as medically defined” and “hunger as commonly defined.” That is, someone can experience hunger – not getting enough to eat – without displaying the physical signs of malnutrition. In other words, it would make more sense to measure access to food as opposed to the effects of malnutrition.

In 1990 Congress passed the National Nutrition Monitoring and Related Research Act, which President George H.W. Bush signed into law. It required the secretaries of Agriculture and Health and Human Services to develop a 10-year plan to assess the dietary and nutritional status of Americans. This plan, in turn, recommended developing a standardized measurement of food insecurity.

The Food Security Survey, developed in consultation with a team of experts, was first administered in 1995. Rather than focusing on nutritional status, it was designed to pick up on behaviors that suggested people were not getting enough to eat.

Did tracking food insecurity help policymakers?

Tracking food insecurity allowed the USDA, Congress, researchers and anti-hunger groups to know how nutritional assistance programs were performing and what types of households continued to experience need. Researchers also used the data to look at the causes and consequences of food insecurity.

Food banks relied on the data to understand who was most likely to need their help.

The data also allowed policymakers to see the big jump in need during the Great Recession starting in 2008. It also showed a slight decline in food insecurity with the rise in government assistance early in the COVID-19 pandemic, followed by another big jump with steeply rising food prices in 2022.

The big budget bill Congress passed in July will cut spending on the Supplemental Nutrition Assistance Program by an estimated US$186 billion through 2034, an almost 20% reduction.

Supporters of SNAP, the new name for the food stamp program adopted in 2008, worry the loss of the annual reports will hide the full impact of these cuts.

Why is the administration doing this?

In the brief press release the USDA issued on Sept. 20 announcing the termination of the annual food insecurity reports, the USDA indicated that the Trump administration considers the food security survey to be “redundant, costly, politicized, and extraneous,” and does “nothing more than fear monger.”

While I disagree with that characterization, it is true that anti-hunger advocates have pointed to increases in food insecurity to call for more government help.

Is comparable data available from other sources?

Although the USDA noted there are “more timely and accurate data sets” available, it was not clear which datasets it was referring to. Democrats have called on the Trump administration to identify the data.

Feeding America, the largest national network of food banks, releases an annual food insecurity report called Map the Meal Gap. But like other nonprofits and academic researchers that track these trends, it relies on the government’s food insecurity data.

There is other government data on food purchases and nutritional status, and a host of other surveys that use USDA questions. However, there is no other survey that comprehensively measures the number of Americans who struggle to get enough to eat.

As in the 1980s, policymakers and the public may have to turn to food banks’ reports of increased demand to get a sense of whether the need for help is rising or falling. But those reports can’t replace the USDA’s Food Security Survey.

The Conversation

Tracy Roof does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Trump scraps the nation’s most comprehensive food insecurity report − making it harder to know how many Americans struggle to get enough food – https://theconversation.com/trump-scraps-the-nations-most-comprehensive-food-insecurity-report-making-it-harder-to-know-how-many-americans-struggle-to-get-enough-food-266006

Why Major League Baseball keeps coming back to Japan

Source: The Conversation – USA (2) – By Jared Bahir Browsh, Assistant Teaching Professor of Critical Sports Studies, University of Colorado Boulder

When Shohei Ohtani stepped onto the field at the Tokyo Dome in March 2025, he wasn’t just playing a game – he was carrying forward more than 100 years of baseball ties between the U.S. and Japan.

That history was front and center when the Los Angeles Dodgers and Chicago Cubs opened their 2025 regular season facing off in the Tokyo Series on March 18 and 19. The two games featured several players from Japan, capping a slate of events that included four exhibition games against Japanese professional teams.

It was a massive financial success. The series, MLB’s first in Tokyo since 2019, generated over US$35 million in ticket sales and sponsorship revenue, plus $40 million in merchandise sales.

The first game of the Tokyo Series broke viewership records in Japan.

For MLB, which has seen significant viewership growth this season, it was proof that its investment in Japan and international baseball over the past three decades has been paying off.

Baseball’s early journey to Japan

Baseball, which is by far the most popular sport in Japan, was introduced to the nation during the Meiji Restoration in the late 19th century.

American baseball promoters were quick to see the potential of the Japanese market, touring the country as early as 1908. The most famous such tour took place in 1934 and featured a number of American League All-Stars, including Babe Ruth and catcher Moe Berg, who was later revealed to be a U.S. spy.

That trip had a long legacy. The U.S. All-Stars faced a team called The Greater Japan Tokyo Baseball Club, which, a year later, barnstormed in the United States. When they played the San Francisco Seals, the Seals’ manager, Lefty O’Doul – who later trained baseball players in Japan – suggested a name change to better promote the team for an American audience.

Commenting that Tokyo is the New York of Japan, O’Doul suggested they take on the name of one of New York’s teams. And since “Yankee” is a uniquely American term, The Greater Japan Tokyo Baseball Club was reborn as the Tokyo (Yomiuri) Giants.

When the Giants returned to Japan, the Japanese Baseball League was formed, which was reorganized into Nippon Professional Baseball in 1950. The Giants have gone on to dominate the NPB, winning 22 Japan Series and producing Sadaharu Oh, who hit 868 home runs during his illustrious career.

Breaking into MLB

The first Japanese-born MLB player, Masanori Murakami, debuted for the San Francisco Giants in September 1964. But his arrival wound up sparking a contractual tug-of-war between the NPB and MLB. To prevent future disputes, the two leagues signed an agreement in 1967 that essentially blocked MLB teams from signing Japanese players.

By the 1990s, this agreement became untenable, as some Japanese players in NPB grew frustrated by their lack of negotiating power. When the Kintetsu Buffaloes refused to give Hideo Nomo a multiyear contract after the 1994 season, his agent found a loophole in the “voluntary retirement clause” that allowed him to sign with an MLB franchise. He signed with the Los Angeles Dodgers in February 1995.

Nomo’s impact was immeasurable. His “tornado” windup and early success made him one of the most popular players in the major leagues, which were recovering from the cancellation of the World Series the previous year. In Japan, “Nomo fever” took hold, with large crowds gathering around television screens in public to watch him play, even though his games aired in the morning. Nomo helped drive Japanese sponsorship and television rights deals, and his first season ended with him winning National League Rookie of the Year.

But within a few years, further contract disputes showed the need for new rules. This ultimately led to the establishment of posting rules for NPB players looking to transition to the major leagues.

The rules have shifted somewhat since they were set out in late 1998, but the basic process is this: if a player declares their intention to leave NPB, MLB teams have a 45-day window to negotiate. If the player is under 25 or has fewer than nine years of professional experience, they’re subject to MLB’s limited signing pool for international players. Otherwise, they’re declared a free agent.

A wave of stars

The new rules led many more Japanese players to join Major League Baseball from Nippon Professional Baseball: Of the 81 Japanese players who’ve played in the majors, all but four played in NPB before their debut. Ichiro Suzuki, who became the first Japanese player inducted into the National Baseball Hall of Fame, was also the first Japanese position player to make the leap.

Other players, like Hideki Matsui, the only Japanese player to be named World Series MVP, continued the success. And then came Ohtani, a two-way superstar who both hits and pitches, drawing comparisons to Babe Ruth.

For MLB, Japanese players haven’t just boosted performance on the field – they’ve expanded its global fan base. The Dodgers brought in over $120 million in increased revenue in Ohtani’s first year alone, easily covering his salary even with Ohtani signing the richest contract in baseball history. The franchise has also seen its value increase by at least 23% to nearly $8 billion. MLB has also seen a significant increase in viewership over the past two seasons, partially driven by the growing interest from Japan.

As American sports leagues deal with an increasingly distracted, fragmented domestic audience, it’s not surprising that they’re looking abroad for growth. And as MLB teams prepare to court another wave of Japanese stars this offseason, it’s clear that its decades-long investment in Japan is paying off.

The Conversation

Jared Bahir Browsh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why Major League Baseball keeps coming back to Japan – https://theconversation.com/why-major-league-baseball-keeps-coming-back-to-japan-264668

Breastfeeding is ideal for child and parent health but challenging for most families – a pediatrician explains how to find support

Source: The Conversation – USA (3) – By Ann Kellams, Professor of Pediatrics, University of Virginia

Many new parents start out breastfeeding but switch to formula within a few days. JGI/Jamie Grill via Tetra Images

As a pediatrician, I thought my medical background and pediatric training meant I would be well prepared to breastfeed my newborn. I knew all about the research on how an infant’s diet can affect both their short- and long-term health. Compared to formula, breastfeeding is linked to a lower risk of sudden infant death syndrome, lower rates of infections and hospitalizations and a lower risk of developing diabetes later in life. Breastfeeding can also provide health benefits to the parent.

But I struggled to breastfeed my own firstborn. I was exhausted and in pain. My nipples were bleeding and my breasts swollen. I worried about whether my baby was getting enough to eat. And I was leaking breast milk all over the place. I found myself asking questions familiar to many new parents: What in the world is going on with breastfeeding? Can I keep this up when I go back to work? How does a breast pump even work? Why doesn’t anyone know how to help me? And why are some families able to start breastfeeding and never look back?

The American Academy of Pediatrics recommends caregivers breastfeed their child for up to two years. However, many new parents are unable to reach these breastfeeding goals and find it very difficult to get breastfeeding going. Combined with inadequate support, this can lead some to blame themselves or to feel like less than a good parent.

While over 80% of families start out breastfeeding their baby, roughly 19% of newborns have already received infant formula two days after birth. Around half of families are able to breastfeed their babies six months after birth and only 36% at 12 months.

Breastfeeding can be painful – especially without support.
Yoss Sabalet/Moment via Getty Images

Inspired by my own and my patients’ experiences with breastfeeding, I sought extra training in the field of breastfeeding and lactation medicine. Now, as a board-certified physician in breastfeeding and lactation medicine, I wanted to understand how pregnant and breastfeeding parents – and those who care for them – perceive breastfeeding. How do they define breastfeeding success? What would make breastfeeding easier, especially for underserved communities with some of the lowest breastfeeding rates in the U.S.?

Listening to new parents

In partnership with the Academy of Breastfeeding Medicine and Reaching Our Sisters Everywhere, a nonprofit focused on supporting breastfeeding among Black families, my team and I started a research project to identify the key components of a successful breastfeeding journey as defined by parents. We also wanted to determine what would enable families to achieve their breastfeeding goals.

To do this, we asked a range of parents and experts in the field of breastfeeding and lactation medicine about what would make breastfeeding easier for families. We recruited participants through social media, listservs and at the Academy of Breastfeeding Medicine’s annual international meeting, inviting them to provide feedback through virtual listening sessions, online surveys and in-person gatherings.

What we found is fascinating. From the perspective of the parents we talked to, success for breastfeeding had less to do with how long or to what extent they exclusively breastfed. Rather, success had a lot more to do with their experience with breastfeeding and whether they had the support they needed to make it possible.

Support included someone who could listen and help them with breastfeeding; communities that welcomed breastfeeding in public; and supportive loved ones, friends and workplaces. Having their questions about breastfeeding answered in accessible and practical ways through resources such as breastfeeding and lactation professionals in their area, peer support and websites with reliable, trustworthy information was also important to helping them feel successful in breastfeeding.

Figuring out how to make time and room for breastfeeding can be taxing.
FatCamera/iStock via Getty Images Plus

Important questions about breastfeeding also arose from these conversations. How can hospitals, clinics and health care workers make sure that breastfeeding support is available to everyone and is equitable? What education do health care professionals need about breastfeeding, and what are barriers to them getting that education? How should those in health care prepare families to breastfeed before the baby is born? And how can the care team ensure that families know when and how to get help for breastfeeding problems?

The good news is that most of the problems raised within our study are solvable. But it will take an investment in resources and support for breastfeeding, including training health care workers on troubleshooting common problems such as nipple pain, ineffective latch and concerns about breast milk production.

Corporate influences on feeding babies

Commercial infant formula is a US$55 billion industry. And yet, most formula use would not be necessary were barriers to breastfeeding reduced.

Research shows that the marketing practices of commercial infant formula companies are predatory, pervasive and misleading. They target not only families but also health care workers. During my medical training, commercial infant formula companies would give us lectures, free lunches, and books and calculators, and my fellow residents and I knew the representatives by name. As a medical director of a newborn unit, I saw these companies stocking our hospital shelves with commercial infant formula and building relationships with our nursing staff. These companies profit when breastfeeding goes wrong.

The World Health Organization has advocated against aggressive commercial infant formula marketing.

This is not to say that commercial infant formula is a bad thing. When breastfeeding isn’t possible, it can be lifesaving. But in some cases, because the U.S. doesn’t provide universal paid maternity leave and not all workplaces are supportive of breastfeeding, parents may find themselves relying on commercial infant formula.

Thinking about breast milk and commercial infant formula less as a question of lifestyle or brand choices and more as an important health care decision can help families make more informed choices. And health care providers can consider thinking about infant formula as a medicine for when it is necessary to ensure adequate nutrition, putting more focus on helping families learn about and successfully breastfeed.

Breastfeeding is a team sport

As the saying goes, it takes a village to raise a child, and breastfeeding is no exception – it is a team sport that calls upon everyone to help new parents achieve this personal and public health goal.

What can you do differently to support breastfeeding in your family, neighborhood, workplace and community?

When I am educating new or expectant families about breastfeeding, I emphasize skin-to-skin contact whenever the parent is awake and able to monitor and respond to baby. I recommend offering the breast with every feeding cue, until the baby seems content and satisfied after each feeding.

Manually expressing drops of milk into the baby’s mouth after each feeding can boost their intake and also ensure the parent’s body is getting signaled to make more milk.

If your family has concerns about whether the baby is getting enough milk, before reaching for formula, ask a lactation consultant or medical professional who specializes in breastfeeding how to tell whether everything is going as expected. Introducing formula can lead to decreased milk production, the baby preferring artificial nipples over the breast and stopping breastfeeding earlier than planned.

Some parents are truly unable to continue breastfeeding for various reasons, and they should not feel ashamed or stigmatized by it.

Finally, give yourself time for breastfeeding to feel routine – both you and baby are learning.

The Conversation

Ann L. Kellams receives funding from NICHD for her research and Pediatric UptoDate as an author. She is the immediate past-president of the Academy of Breastfeeding Medicine.

ref. Breastfeeding is ideal for child and parent health but challenging for most families – a pediatrician explains how to find support – https://theconversation.com/breastfeeding-is-ideal-for-child-and-parent-health-but-challenging-for-most-families-a-pediatrician-explains-how-to-find-support-240396

NHS league tables are back – but turning rankings into better care is harder than it looks

Source: The Conversation – UK – By Catia Nicodemo, Professor of Health Economics, Brunel University of London

Andre Place/Shutterstock

The UK government has launched NHS league tables for every trust in England, promising transparency and an incentive for improvement. The idea is simple: rank providers of health care and reward the best.

But national health care is not a simple thing. And trying to convert something so complex into a single ladder of winners and losers could end up distorting medical priorities and resources.

For example, the way waiting times are measured for elective (non-emergency) surgery is (and needs to be) different to how they are measured for cancer treatment and A&E. Mixing these into one overall “score” for waiting times could encourage NHS trusts to focus on the most rank-sensitive elements of healthcare, even when bottlenecks exist elsewhere (such as diagnostics or community care).

This can lead to a kind of tunnel vision, where whatever is measured is treated as what matters most. Previous research on ratings shows how rankings can shift hospital managers’ attention from broad quality to narrow scorekeeping.

Another challenge is that different NHS trusts operate in very different contexts. Patient populations vary in age, and in levels of affluence and deprivation – factors which can directly influence demand on a hospital and its clinical outcomes.

A hospital serving an older and poorer population may find it much harder to meet targets than one that serves a younger and healthier area. And while league tables are supposed to be compiled in such a way that they account for these kinds of differences, the adjustment calculations are never perfect.

If league tables fail to account for these realities, they risk labelling overstretched hospitals as “poor performers” when they may in fact be delivering strongly against difficult odds.

Evidence also shows that when patients are given more choice about where they receive their healthcare, some do explore their options. But distance and the availability of transport make a huge difference.

If you can’t get to the hospital you want, the choice is not really there. And “competition” between different trusts falls sharply outside dense urban markets. In practice, many patients simply take their GP’s recommendation and use the nearest viable hospital.

So while league tables designed to encourage choice and stimulate competition may help to raise quality, they also carry risks – most notably amplifying regional inequalities. Such rankings could then become magnets, drawing both patients and staff toward “elite” hospitals.

If rankings trigger “patient outflows” (people choosing to go elsewhere for care) and health professionals being reluctant to work in lower-ranked hospitals, a vicious circle develops, making that low ranking even more difficult to shake off.

Moves towards greater transparency also require greater support: extra staffing and diagnostic capacity, plus targeted recruitment and retention schemes in hard-pressed areas. Otherwise, the policy risks deepening geographical inequalities.

For emergency care, for rural areas, or for people with limited mobility, improvement will depend on better coordination and sufficient capacity, such as ensuring that ambulance services are well linked to hospitals with intensive care beds.

Scoring points

League tables can shine a light. But light without lenses can distort. (The NHS itself acknowledges the risk of crude comparisons that league tables can bring.)

To avoid perverse incentives and widening gaps, rankings should be used as a starting point for deeper analysis, not treated as a final verdict. They need to adjust for differences in patient populations so that hospitals treating sicker or more challenging patients are not penalised.

A complex organisation.
Panchenko Vladimir/Shutterstock

They need to be designed to minimise gaming the system (by preventing hospitals from prioritising easy cases just to hit targets for example). They need to give GPs the tools and authority to direct patients to the most appropriate services, and pair transparency with extra support for areas of highest need.

Done badly, rankings reward already-advantaged hospitals and shift efforts towards chasing the scoreboard. Done well (using risk-adjusted, specialised dashboards) they can help tackle the real causes of long waits and uneven care.

Performance data needs to be used with caution, linked to GP referral systems where patients actually make choices, and accompanied by targeted support for those areas serving the most complex populations. Without these safeguards, league tables risk distorting behaviour, encouraging tunnel vision and amplifying existing inequalities in the NHS, rather than solving them.

The Conversation

Catia Nicodemo is affiliated with the University of Oxford.

ref. NHS league tables are back – but turning rankings into better care is harder than it looks – https://theconversation.com/nhs-league-tables-are-back-but-turning-rankings-into-better-care-is-harder-than-it-looks-265688

Acalculia: why many stroke survivors struggle with numbers

Source: The Conversation – UK – By Yael Benn, Senior Lecturer, Manchester Metropolitan University

Acalculia can have a huge impact on daily life. Lightspring/ Shutterstock

Numbers are all around us. In the morning, we wake up to an alarm that tells us it’s time to get out of bed. When deciding what to wear, we often check the temperature outside. We count out the vitamins or prescription pills we need to take while eating our breakfast. We estimate how long it will take to get to the station and then check which platform we need to be on to catch the train to work.

Every single one of these examples involves using and understanding numbers. Being able to carry out such small calculations and estimations makes our life possible.

This is why acalculia, a neurological condition that impairs the ability to process and understand numbers, can have a devastating effect on a person’s life. The condition commonly afflicts people who have had a stroke or suffered a brain injury, and it’s estimated to affect between 30% and 60% of stroke survivors.

The brain is a complex organ that controls both our movements and senses. It enables us to receive signals from the environment, process information and execute motor actions.

But a stroke, which interrupts the blood supply to the brain, or a brain injury can damage brain tissue. If this damage happens on the left side of the brain, it can cause problems with language processing and other cognitive functions, such as memory. It can also affect movement on the right side of the body.

If it happens on the right side of the brain, movement on the left side of the body will be affected. There may also be cognitive deficits – typically those involved with processing visual information.

But acalculia can occur regardless of which area of the brain has been damaged. This is because processing numbers and performing calculations are done using many different areas of the brain.

This includes the left hemisphere, which helps us process language; the right hemisphere, which is involved in visuo-spatial processing; the posterior part of the brain, which is involved in comprehending magnitude (which of two numbers is smaller or bigger); and the front of the brain, which controls executive function.

Lesions or damage to any of these areas can cause problems in how a person processes numbers.

For people with acalculia, sometimes the processing problem can just be surface level. They may feel that they know a number but can’t say it out loud. Or, a person may mean to say or write one number and instead another comes out.

In severe cases, a patient can altogether lose the meaning of numbers. So they may know a number has been mentioned or is written down, but they just can’t figure out what it actually means or how to make sense of it.

Effect on daily life

To understand the impacts of acalculia, my colleagues and I interviewed people with the condition alongside some of their carers to learn how it affected their lives and what support they received.

Stroke and brain injury survivors with acalculia reported being unable to manage their money. Some interviewees spoke of needing to depend on their carer to handle their money or having trouble accessing their internet banking because they struggled with common login questions such as “enter the third character of your pin”.

Acalculia can even make routine tasks – such as taking prescription medication – a challenge.
Burlingham/Shutterstock

Worryingly, many participants reported difficulties managing their medications – with several totally relying on their pharmacist.

Simply managing their everyday lives was also made more difficult by the condition. Telling the time was difficult because of the digits. Even using the microwave was difficult because cooking times “are a jumble with numbers,” as one participant put it.

Importantly, acalculia had a detrimental effect on independence and wellbeing. As one participant said: “I feel dumb, embarrassed and frustrated.”

Overall, our findings highlighted just how substantial an effect acalculia had on stroke and brain injury survivors’ independence and quality of life. Acalculia left some unable to return to work, and many unable to live independently or manage their everyday lives, leaving them vulnerable. Our research also pointed out important gaps in how the condition is currently assessed and treated.

Acalculia awareness

One in four adults over the age of 25 is at risk of experiencing a stroke in their lifetime. Although we’re getting better equipped to help people recover from a stroke, acalculia remains overlooked in stroke rehabilitation guidelines. It’s not routinely tested for after a stroke (despite several dedicated assessments being available) and there are currently no clinically tested treatments for the condition.

The condition doesn’t appear to be taught in clinical training at present. One patient we interviewed in our study recalled asking their therapists for help with acalculia, saying: “What can you do to help me with my maths? Every therapist I’ve met says ‘I can’t help you’. Why? Because it’s not part of their training.”

This means healthcare workers aren’t able to recognise the problem – let alone support patients who have it.

People with acalculia are currently left to support themselves. Many may not even know there’s a name for their condition. It’s clear more needs to be done to raise awareness so that the condition can be better assessed – and so patients can receive the help and support they need to overcome it.

The Conversation

Yael Benn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Acalculia: why many stroke survivors struggle with numbers – https://theconversation.com/acalculia-why-many-stroke-survivors-struggle-with-numbers-265643

Why it’s time to rethink the notion of an autism ‘spectrum’

Source: The Conversation – UK – By Aimee Grant, Associate Professor in Public Health and Wellcome Trust Career Development Fellow, Swansea University

The phrases “autism spectrum” and “on the spectrum” have become part of everyday language. They are often used interchangeably as ways of referring to someone who is “neurodivergent”.

The term was coined in the 1980s by psychiatrist Dr Lorna Wing, whose work transformed how autism was understood in the UK. At the time, her “autism spectrum” concept was groundbreaking. Instead of seeing autism as a rare, narrowly defined condition, she recognised a wide range of traits and experiences.

But the idea of a single spectrum, which stretches from “mild” to “severe”, may be misleading. And some autism experts, including me, argue the term has outlived its usefulness.

When most people hear the word “spectrum”, they may picture a straight line, like colours arranged from red to violet. Applied to autism, this suggests autistic people can be ranked from “more autistic” to “less autistic”. But that’s not how autism works.

Autism is made up of many different traits and needs, which show up in unique combinations. Some autistic people rely heavily on routine, while others find comfort in repetitive movements known as “stimming”. And some have an intense focus on particular topics, a concept researchers call “monotropism”.

There are also known links with physical conditions such as hypermobility. Because autism is made up of all these different elements, there can be no single line on which every autistic person is placed.




Read more: Why the autism jigsaw puzzle piece is such a problematic symbol


Attempts to draw boundaries still persist, however. The American Psychiatric Association’s diagnostic manual divides autism into three “levels” based on the amount of support a person is judged to need. They run from level 1 (“requiring support”) to level 2 (“requiring substantial support”) and level 3 (“requiring very substantial support”).

But research suggests these levels are vague and inconsistently applied, and that they don’t always reflect someone’s real-world experiences.

Life circumstances can also change a person’s needs. An autistic person who usually copes well may experience “burnout” if their needs have gone unmet for a long time, with an accompanying increase in support needs.

In a recent research article, my colleagues and I show that life stages such as menopause can increase support needs. A static “level” cannot capture this evolving nature.

More recently, the label “profound autism” has been suggested by the Lancet Commission – an international group of experts – for autistic people with learning disabilities or high support needs. But other experts say the phrase is unhelpful because it tells us nothing about a person’s particular challenges or the type of support they require.

Autism is made up of many different traits and needs, which show up in unique combinations in each individual.
Eva Pruchova/Shutterstock

The legacy of Asperger’s

Dr Lorna Wing also introduced the term “Asperger’s syndrome” to the UK. Like the concept of “profound autism”, this term divided autistic people into two groups: those with higher support needs, and those with Asperger’s syndrome (lower support needs).

However, the label was drawn from the name of Austrian physician Hans Asperger, who in the 1940s identified a subgroup of children he called “autistic psychopaths”. During the Nazi period, Asperger was associated with the regime’s killing of autistic people with higher support needs. For this reason, many autistic people no longer use the term, even if that is what they were originally diagnosed with.

Underlying all these debates is a deeper concern that dividing autistic people into categories, or arranging them on a spectrum, can slip into judgments about their value to society. In the most extreme form, such hierarchies risk dehumanising those with higher support needs. It’s something some autistic campaigners warn could fuel harmful political agendas.

In the worst case, those judged as less useful to society become vulnerable to future genocides. This may seem far-fetched, but the political direction in the US, for example, is very worrying to many autistic people.

Recently, US health secretary Robert F. Kennedy Jr said he was going to “confront the nation’s (autism) epidemic”. So far, this has included making strongly refuted claims that paracetamol use in pregnancy is linked to autism in children, and urging pregnant women to avoid the painkiller.




Read more: Paracetamol use during pregnancy not linked to autism, our study of 2.5 million children shows


People often use the phrases “autism spectrum” or “on the spectrum” as a way of avoiding saying that somebody is autistic. While this is usually well-meaning, it is rooted in the idea that being autistic is a negative thing. Many autistic adults prefer the words “autism” and “autistic” used directly. Autism is not a scale of severity but a way of being. It’s a difference rather than a defect.

Language will never capture every nuance, but words shape how society treats autistic people. Moving away from the idea of a single spectrum could be a step towards recognising autism in all its diversity, and valuing autistic people as they are.

The Conversation

Aimee Grant receives funding from the Wellcome Trust and UKRI.

ref. Why it’s time to rethink the notion of an autism ‘spectrum’ – https://theconversation.com/why-its-time-to-rethink-the-notion-of-an-autism-spectrum-263243

Singapore’s national identity excludes those who don’t look like a ‘regular family’

Source: The Conversation – UK – By Pavan Mano, Lecturer in Global Cultures, King’s College London

Nationalism usually works on the basis that a nation should imagine itself as a “we”, with a common identity, history and culture. But it doesn’t always clearly say who the “we” are. Instead, it often works by saying who doesn’t belong – frequently by characterising these people in racialised ways.

Singapore is an interesting case study. Since independence in 1965, the small city-state has explicitly committed to a policy of multiracialism and multiculturalism. This principle is enshrined in its constitution, is widely accepted by Singaporeans and has become a firm pillar of national discourse.

Given this commitment, how does nationalism create exclusion in Singapore, and what other forms could this take? In my March 2025 book, Straight Nation, I analyse Singapore’s version of national identity to show how, while avoiding overtly racialised rhetoric and discrimination, it can define belonging in other ways.

Singaporean nationalism excludes some sections of society mainly through maintaining a set of heterosexual familial norms. This is one reason for the book’s title – it calls attention to how straightness sits at the heart of Singaporean identity. A certain kind of straight life is taken to be the model behaviour of a “normal” citizen.

Some of the things one is expected to do include starting a family – by meeting a member of the opposite sex, getting married and having children. This very specific version of heterosexuality is taken as the default in Singapore, and it ends up excluding a whole range of people.

Family and the nation

Treating heterosexuality as the norm and placing expectations on the nuclear family are not uniquely Singaporean issues. But because of Singapore’s small size, the state has an outsize capacity to influence both how the “normal” Singaporean ought to live and the consequences that follow.

One of the most visible ways people are affected is through the public housing system. Almost 80% of Singaporean residents live in flats built by the country’s public housing authority, the Housing and Development Board (HDB). These flats are so ubiquitous that Singapore’s former prime minister, Lee Hsien Loong, referred to them as “national housing” in 2018.

The catch is that, with some small exceptions, one has to be married to buy an HDB flat. And because same-sex marriage is not recognised in Singapore, heterosexual marriage becomes a condition of access to this national symbol.

This obviously affects LGBTQ+ people, limiting their ability to access public housing and live independently. But the link between heterosexual marriage and public housing affects a whole range of other people too, including single people, single parents, those who choose not to get married and people who are divorced.

Housing and Development Board flats in the district of Punggol, Singapore.
happycreator/Shutterstock

There are other examples that demonstrate how it is taken as common sense that one’s life revolves around the nuclear family in Singapore – even though this might not be the case for everyone.

The opening anecdote in Straight Nation shows how the state treats the heterosexual nuclear family as containing the most important set of social relations. Like many other governments at the height of the COVID-19 pandemic, the Singaporean government imposed a lockdown from April to June 2020. When it ended, restrictions were lifted in stages.

Initially, only some in-person interactions were allowed. Singapore’s then-health minister and current deputy prime minister, Gan Kim Yong, said: “Children or grandchildren can visit their parents or grandparents”. He suggested this would “allow families to spend time and provide support to one another” after eight weeks of isolation.

Until the restrictions were further eased 17 days later, visiting one’s parents or grandparents was the only form of in-person social interaction permitted. There was no mention of what people without a family, or those estranged from them, were meant to do for support. The same applies to people reliant on extended family, such as those who have no surviving parents or grandparents, or even those who depend on a close friend.

Again, this assumption can produce exclusions that go beyond sexual difference. To be clear, not everyone will be affected in the same way. But reading Singapore as a straight nation and identifying how one particular kind of heterosexual expression is reified is helpful.

It allows onlookers to ask how these norms can place different kinds of pressure on different people. And perhaps identifying the way in which so many people are affected by this regime of straightness will also help Singapore imagine a future that is fairer and more liveable for everyone.

The Conversation

Pavan Mano does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Singapore’s national identity excludes those who don’t look like a ‘regular family’ – https://theconversation.com/singapores-national-identity-excludes-those-who-dont-look-like-a-regular-family-259427