Why US third parties perform best in the Northeast

Source: The Conversation – USA – By Bert Johnson, Professor of Political Science, Middlebury College

Hugh McTavish is running as the Independence-Alliance Party candidate for governor of Minnesota in 2026. UCG via Getty Images

A majority of Americans say they are “frustrated” or “angry” – or both – with Republicans and Democrats, according to the Pew Research Center. But that rarely translates into support for independent or third-party candidates.

One exception has been in the Northeast. Angus King of Maine and Bernie Sanders of Vermont are the Senate’s only independents. King, Lowell Weicker of Connecticut and Lincoln Chafee of Rhode Island account for three of the five independent and third-party governors elected nationwide since 1990. And of the 23 current independent or third-party state legislators in the country, excluding technically nonpartisan Nebraska, 14 of them, or 61%, are in New England.

As a political scientist who has taught in Vermont for two decades, I was intrigued by this question: Why are third-party and independent candidates so successful, relatively speaking, in the Northeast? And can this region teach us lessons about broadening the choices available to voters?

Market forces

In their classic book “Third Parties in America,” Steven Rosenstone, Roy Behr and Edward Lazarus argue that alternative parties succeed where motivation for third-party voting is high, constraints against doing so are low, or both.

Those may sound like obvious points, but let’s explore them individually. First, motivation. Third parties do better when voters are frustrated with the two major parties and see them as incapable or unwilling to respond to their needs.

Sen. Bernie Sanders holds and leans into a microphone, wearing a heavy coat at an outdoor event.
Bernie Sanders has represented Vermont in the Senate as an independent since 2007 but twice ran for president as a Democrat.
AP Photo/Andres Kudacki

In a polarized national political climate, New Englanders might appear to be good candidates for anger. Vermont gave Donald Trump his smallest share of the 2024 presidential vote of any state – less than a third. Massachusetts was not far behind.

This should not necessarily be interpreted as enthusiasm for the Democrats. Pew found that two-thirds of Democrats are frustrated with their own party.

Channeling some of this discontent, Vermont Gov. Phil Scott, although a Republican, has frequently criticized Trump and accused the president and other politicians in Washington of creating “chaos.”

Still, the idea that discontent explains New England’s openness to third parties and independents clashes with other pieces of the picture. Other states where most voters are hostile to Trump, such as California, Maryland and Illinois, have few successful third-party or independent candidates.

And the Northeast has been fairly friendly territory for third parties and independents in very different national contexts. New England elected far more third-party and independent legislators than other regions back in 2010 as well, at a point during Barack Obama’s presidency when political discontent was most famously centered within the conservative tea party movement.

Limits on minor parties

That brings us to the second possibility: constraints on third parties, or their absence.

Unlike parliamentary democracies, including Brazil and Spain, that use proportional representation – giving some proportion of the seats even to parties that garner small shares of the overall vote – the U.S. system is stacked against third parties because of its “first-past-the-post” electoral system, under which candidates can win with pluralities of the vote.

This type of voting encourages citizens to consider only the two major parties because other candidates are generally considered not to have any realistic shot of winning. This helps explain why Sanders ran for president as a Democrat in 2016 and 2020.

Ross Perot gestures with his left hand while standing between George H.W. Bush and Bill Clinton, both seated on a stage.
Ross Perot was the last third-party candidate to reach a presidential debate stage, here standing between Republican George H.W. Bush and Democrat Bill Clinton in 1992.
AP Photo/Doug Mills

In presidential voting, the Electoral College sinks third-party candidates’ chances – even those with wide support – if their voters are not concentrated enough to win individual states. Running as an independent in 1992, businessman Ross Perot won 19% of the national vote but received exactly zero votes in the Electoral College.

These constraints, while formidable in national politics, play out differently at the state and local levels. Absent the Electoral College, there is less of a guarantee that the Democrat and Republican will always be perceived as the two most viable candidates in local races, especially in regions with lopsided support for one party or the other.

In areas with overwhelming Democratic support, the next most viable option might not be a Republican but a progressive. In areas with overwhelming Republican support, Democrats could be less viable than libertarians.

Access to the ballot

But if this is true, why do we not see just as many third-party and independent victories in red states, such as Alabama and Mississippi, as we do in Vermont and Maine? The answer lies in a seemingly mundane but crucial factor: ballot access laws.

States set the rules governing which candidates qualify for the ballot. In almost every state, Democrats and Republicans have advantages over other parties or independents. But in the Northeast it is easier for independents and candidates from other parties to get on the ballot.

In no New England state does an independent candidate for a state legislative seat have to collect more than 150 signatures to secure a ballot spot. In Georgia, by contrast, candidates must collect signatures equal to 5% of the total number of registered voters in the jurisdiction holding an election, which can translate into thousands of signatures.

To see the impact of ballot access rules on candidates outside of the major parties, you only need look at one of the few states outside of New England where such candidates have done as well: Alaska.

Alaska has long had ballot access rules that are among the most open in the nation. Candidates for state House races need only pay a filing fee of US$30 to get a ballot line, and it is nearly as easy for them to file as a recognized party or group.

That helps explain why five independents currently serve in the Alaska House, why the state elected a third-party candidate as governor in 1990 and an independent in 2014, and why voters reelected U.S. Sen. Lisa Murkowski as a write-in candidate after she lost the Republican primary in 2010.

Ease of ballot access attracts outsider candidates, increases competition, and gives voters an outlet for their frustrations.

To sum up, if people want more choices in elections, they will need to change the rules.

The Conversation

Bert Johnson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why US third parties perform best in the Northeast – https://theconversation.com/why-us-third-parties-perform-best-in-the-northeast-273749

Making sense of a chaotic planet: How understanding weather and climate risks depends on supercomputers like NCAR’s

Source: The Conversation – USA (2) – By Antonios Mamalakis, Assistant Professor of Data Science and Environmental Science, University of Virginia

Have you ever stopped to wonder how forecasters can predict the weather days in advance, or how scientists figure out how the climate might evolve under different policies?

The Earth system is a vast web of intertwined processes, from microscopic chemical reactions to towering storms. Ocean currents circulating deep in the Atlantic, forests exchanging carbon with the atmosphere, and humans altering the composition of the air all have effects that ripple through the system. These processes are governed by physical laws, such as conservation of mass, energy and momentum.

All of this plays out on such a large scale that no single human mind can truly grasp it in full. And yet, the system is so sensitive that a small perturbation, given enough time, can steer its trajectory in a dramatically different direction. This sensitivity is called “chaos,” also known as the “butterfly effect.” The planet is, at once, immense and delicate.

Despite this complexity and scale, scientists are able to simulate and anticipate how the climate will change.

How is this even possible? Behind the long-term climate projections that affect our lives sits one of the most remarkable scientific achievements of the modern era: climate models that run on supercomputers.

I am a climate data scientist. My colleagues and I try to understand extreme weather and long-term climate risks by using virtual versions of Earth inside these machines.

What a climate model really is

Here is the simplest way to picture a climate model:

Imagine dividing the entire planet into 3D boxes. At the surface, each box might represent an area 50 to 100 kilometers across. Then we stack boxes upward into the atmosphere and downward into the oceans to create a 3D grid wrapping around the globe.

Each box contains numbers: temperature, wind speed, humidity, sea ice thickness, soil moisture and hundreds of other variables. The model contains mathematical expressions that describe how these variables influence one another: how heat moves, how air rises and sinks, how moisture condenses into clouds, how the ocean absorbs and redistributes energy.

A globe with boxes around it and a close-up of some calculations taking place in one of those boxes.
Climate models are systems of differential equations based on the basic laws of physics, fluid motion and chemistry. They divide the planet into a 3D grid, apply the equations and evaluate the results. Within these models, the atmosphere component, for example, calculates winds, heat transfer, radiation, relative humidity and surface hydrology.
NOAA

We then let the model march forward in time, solving the math and updating every variable in every box. Then again. And again.
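That march-forward-in-time loop can be sketched in a few lines of code. This is a toy illustration, not an actual climate model: a single variable, temperature, on a small one-dimensional ring of grid boxes, updated by simple heat diffusion. Every number here is invented for the sketch, but the structure – update every box from its neighbors, then repeat – is the same one real models use.

```python
import numpy as np

# Toy sketch of a gridded simulation: one variable (temperature) on a
# 1D ring of boxes. Real models track hundreds of variables per box in
# 3D and solve far richer physics.
n_boxes = 100                     # a real model has millions of boxes
temp = 15.0 + 10.0 * np.sin(np.linspace(0, 2 * np.pi, n_boxes, endpoint=False))
diffusivity = 0.1                 # arbitrary units for this sketch

for step in range(1000):          # march forward in time
    # each box exchanges heat with its two neighbors (periodic boundary)
    neighbors = np.roll(temp, 1) + np.roll(temp, -1)
    temp = temp + diffusivity * (neighbors - 2 * temp)

# Diffusion smooths the field without creating or destroying heat:
# conservation of energy, one of the physical laws models must obey.
print(round(temp.mean(), 6))      # → 15.0 (mean temperature is conserved)
```

The hot and cold spots flatten out over time, but the global mean never changes: a miniature version of checking that a model respects conservation laws.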

Now scale that up. Millions of grid boxes. Hundreds of variables per box. Calculations carried out millions of times to simulate decades or even centuries.

And because the system is chaotic, we do not run the model just once. We run it many times with slightly different initial conditions – what scientists call an ensemble – to make sure the result is in fact a true system response to the considered scenario, such as warming temperatures due to increased emissions, and not an effect of chaos.
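The ensemble idea can be demonstrated with the Lorenz-63 system, the classic three-variable chaotic model behind the “butterfly effect.” The sketch below is illustrative only – real climate ensembles perturb full Earth-system models, not three equations – but it shows why one run is never enough: members that start almost identically end up scattered.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one forward-Euler step of the Lorenz-63 equations
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
# ensemble: 5 members, each nudged by about one part in a million
members = [base + 1e-6 * rng.standard_normal(3) for _ in range(5)]

spread_start = np.std([m[0] for m in members])
for _ in range(3000):             # integrate 30 time units
    members = [lorenz_step(m) for m in members]
spread_end = np.std([m[0] for m in members])

# the tiny initial differences grow until the members fully decorrelate
print(f"spread at start: {spread_start:.1e}, after 30 time units: {spread_end:.1e}")
```

Any single member is a plausible trajectory; only the spread across members separates a genuine forced response from an accident of chaos.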

The result is an astronomical number of calculations. Performing them requires computers capable of executing quadrillions of operations per second – what are known as petaflop-scale supercomputers. A petaflop equals 1 quadrillion – 1,000,000,000,000,000 – calculations per second!
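A back-of-envelope tally shows where those astronomical numbers come from. The figures below are round assumptions chosen for illustration, not the specifications of any particular model, but they multiply out to the right order of magnitude.

```python
# Illustrative operation count for a century-scale ensemble experiment.
# All inputs are rough assumptions, not any specific model's numbers.
grid_boxes = 5_000_000          # millions of 3D boxes covering the globe
variables = 200                 # variables tracked per box
ops_per_variable_update = 500   # rough cost of the physics per update
steps_per_sim_day = 100         # e.g. a model time step of ~15 minutes
years = 100
ensemble_members = 50

total_ops = (grid_boxes * variables * ops_per_variable_update
             * steps_per_sim_day * 365 * years * ensemble_members)

petaflop = 1e15                 # 1 quadrillion operations per second
laptop = 1e10                   # ~10 gigaflops sustained, generous for a laptop

print(f"total operations: {total_ops:.1e}")
print(f"on a 1-petaflop machine: ~{total_ops / petaflop / 86_400:.0f} day(s)")
print(f"on a laptop: ~{total_ops / laptop / (86_400 * 365):.0f} years")
```

Even with these conservative inputs the experiment needs on the order of 10^20 operations – centuries of laptop time, but roughly a day at petaflop scale.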

From simulation to real-world decisions

These simulations inform decisions that affect everyday life: how high to elevate homes in flood-prone areas, how to design power grids resilient to prolonged heat waves, how to manage water resources during drought.

Urban planners, engineers, emergency managers and policymakers all rely on information derived from these models.

Dozens of major climate models have been developed around the world by universities, national laboratories and government agencies. Each modeling center builds its own code, makes its own physical assumptions, chooses its own grid resolution and operates its own supercomputing systems. Through international efforts such as the Coupled Model Intercomparison Project, modeling centers agree on common experiments: the same greenhouse gas scenarios and the same volcanic eruptions, for example.

When you hear that extreme rainfall is projected to intensify in a warmer world, or that the Arctic Ocean could become seasonally ice-free within decades, those conclusions are not the result of calculations carried out by a single scientist, a single team of scientists, or even a single model run. They emerge from dozens of independently developed models, run on room-sized supercomputers, under pre-agreed and carefully coordinated experiments.

A map created by an ensemble with multiple computer models shows areas of agreement.
In this example of the use of multiple models, areas in color and without hashmarks indicate regions with high agreement among models, where more than 80% of the models agree on the signs of change. The projections for annual maximum daily precipitation change were made using the Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model ensemble.
IPCC

This global collaboration is one of the reasons scientists know so much about climate change. These shared simulations allow scientists around the world to test hypotheses and explore future risks based on models’ consensus.

It is no surprise that the 2021 Nobel Prize in physics recognized pioneers of climate modeling. These models fundamentally transformed humanity’s ability to understand a complex planet.

There is no alternative way to answer “what if” questions about the future climate system. What happens if carbon dioxide doubles? What if emissions decline rapidly? What if a major volcanic eruption injects aerosols into the stratosphere? Because the climate system is so complex, and forces can push it outside the range of historical experience, the past is no longer a reliable guide to the future. So statistical models aren’t enough.

Artificial intelligence cannot replace this foundation either. AI has made impressive progress in short-term weather prediction, learning patterns from vast historical datasets, and producing forecasts with remarkable speed.

But climate projections require extrapolating to conditions the planet has not experienced in modern history – such as higher greenhouse gas concentrations. AI can accelerate simulations and analyze massive amounts of data today, but it cannot replace solving the physical equations that govern the system.

National supercomputing centers are essential

In the United States, major climate modeling efforts have been supported by national laboratories and federal centers, including NASA and the National Center for Atmospheric Research, or NCAR, along with a few research universities.

At NCAR, scientists developed the Community Earth System Model, a comprehensive climate model that’s arguably one of the best models to date and is used by researchers across the country and around the world to study climate change, severe weather, climate effects on wildfires, and atmospheric patterns. It has helped position the United States at the forefront of climate science and enabled the global research community to tackle some of the most pressing challenges of our time.

Running large ensembles with this model requires powerful hardware, data storage systems capable of handling petabytes of output, and engineers who keep these systems operational. This is not a matter of downloading and running a program on a laptop. It is a national-scale scientific enterprise that makes NCAR and its supercomputer essential.

In a warming climate, the stakes are high. The ability to simulate the Earth system at scale is one of the most powerful tools humanity has to prepare for the risks ahead.

The Conversation

Antonios Mamalakis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Making sense of a chaotic planet: How understanding weather and climate risks depends on supercomputers like NCAR’s – https://theconversation.com/making-sense-of-a-chaotic-planet-how-understanding-weather-and-climate-risks-depends-on-supercomputers-like-ncars-276376

How protecting wilderness could mean purposefully tending it, not just leaving it alone

Source: The Conversation – USA (2) – By Clare E. Boerigter, Wilderness Fire Research Fellow at the Aldo Leopold Wilderness Research Institute, Rocky Mountain Research Station, United States Forest Service

A rare prescribed fire in a wilderness area burns in the Scapegoat Wilderness in Montana in 2011. Michael A. Munoz, CC BY-NC-ND

More than 110 million acres of land across the U.S. are protected in 806 federally designated wilderness areas – together an area slightly larger than the state of California. For the most part, these places have been left alone for decades, in keeping with the 1964 Wilderness Act’s directive that they be “untrammeled by man.”

But in a time when lands are experiencing the effects of climate change and people are renewing their understanding of Indigenous knowledge and stewardship practices, protecting these places may require action, not inaction.

New Mexico’s Gila Wilderness, where the Chihuahuan Desert converges with the Rocky Mountains, was the first to receive a formal wilderness designation in 1924. Now, all but six U.S. states contain wilderness. In Minnesota, the Boundary Waters Canoe Area Wilderness protects more than a thousand lakes and several hundred miles of streams. In Florida, the marshes and saltwater bays of the Marjory Stoneman Douglas Wilderness are home to flamingos, manatees and alligators.

These diverse ecosystems are the country’s most protected lands, where human activity is severely restricted. Federal regulations exclude resource extraction such as logging and mining; developments such as the building of roads and structures; low-level overflights by planes and helicopters; and mechanized equipment such as chain saws. People can walk, ride horses, canoe, fish and camp temporarily in these areas, but that’s about it.

Yet, research my colleagues and I have conducted indicates that this approach can make it difficult to address two of the biggest challenges facing wilderness.

First, the dominant American ideal of wilderness – as wildlands that flourish best in the absence of human management – conflicts with the growing understanding that many wilderness areas were and remain part of the ancestral homelands of Indigenous peoples, who in fact tended those lands for thousands of years.

And second, as climate change and other ecological stressors affect wilderness, some forms of human intervention could help sustain the very ecological qualities that led to these areas being so strictly protected.

A view of rolling hills with low vegetation.
Repeated severe fires have changed what was once a forest into a shrubfield in the Dome Wilderness in New Mexico.
Jonathan Coop, CC BY-NC-ND

Indigenous influence on landscapes

Many wilderness areas have long histories as homelands where Indigenous peoples lived, hunted and gathered.

In Alaska, the Inland Dena’ina people marked vast trail networks by physically modifying trees, including by scarring bark and cutting limbs. Many of these marked trees can be found within Lake Clark National Park, two-thirds of which is designated wilderness.

In Washington’s Indian Heaven Wilderness, Northwest tribes gathered to pick and then burn the area’s huckleberry fields, a practice that increased the abundance of both plants and berries.

In the Southwest, Indigenous peoples bred six species of agave plants to be more palatable than wild agaves; researchers have found four of these domesticated species in six wilderness areas.

These lands may seem wild to some, but as Indigenous ecologists Robin Wall Kimmerer and Frank Kanawha Lake observed in 2001, “Every landscape reflects the history and culture of the people who inhabit it.”

An aerial view of a landscape of standing dead tree trunks.
The Castle Fire in California’s Sierra Nevada in 2020 killed roughly 10% of the world’s population of sequoia trees.
Curtis Kvamme, CC BY-NC-ND

Ecological stressors intensify

The Wilderness Act’s strict rules are not able to protect wilderness areas in the U.S. from new and unprecedented ecological stressors.

For instance, many wilderness areas are experiencing uncharacteristically severe wildfires. These events are a result of climate change, fire suppression and the prevention of traditional Indigenous forest management practices, including burning. Together, those forces have resulted in large-scale disruptions of historical cycles of fire, in which wildfires were often more frequent but less severe.

Scholars recognize prescribed burning as an effective strategy to protect forests from catastrophic fires, though it remains controversial in wilderness as human intervention. Government policy allows lightning-ignited wildfires to burn in federal wilderness areas in certain circumstances, but most of these fires are still suppressed – a human intervention that is widely accepted.

In California’s Sequoia-Kings Canyon and John Krebs wilderness areas, recent intense wildfires have killed unprecedented numbers of giant sequoias, a species that historically thrived because of more frequent, less-intense fires. The 2020 Castle Fire is estimated to have killed between 7,500 and 10,600 large sequoias – or 10% to 14% of all sequoias in the Sierra Nevada – many of them in wilderness.

In New Mexico’s Dome Wilderness, repeated intense fires have killed entire forests, transforming these lands into shrublands. Models indicate that up to 30% of forested landscapes in the Southwest are vulnerable to this type of change.

A dark black tree trunk stands amid green plants and pink and purple flowers.
A fire-blackened tree stands in the Selway-Bitterroot Wilderness in Idaho and Montana, one of the few wilderness areas that allows many lightning-ignited fires to burn, with careful oversight and management by firefighters and land managers.
Mark Kreider, CC BY-NC-ND

The absence of fire can also be a problem for wilderness ecosystems. In the Boundary Waters Canoe Area Wilderness, researchers anticipate a significant decline in the area’s pine-dominated forests unless fire is reintroduced – with the potential for these forests to disappear within 150 years.

Helping fire resume its natural role on the landscape – through prescribed burning or letting natural fires burn, overseen by firefighters and land managers – isn’t easy. Tree-ring histories and archaeological, paleoecological and ethnographic records show that frequent burning of resting areas and campsites by the Anishinaabe people along commonly traveled waterways helped create the Boundary Waters’ open red pine forests. But the wilderness-protection group Wilderness Watch says that prescribed burning by federal land managers today constitutes “a prime example of humans imposing their will on Wilderness to try to create managers’ desired conditions rather than allowing nature to shape the area.”

And fire isn’t the only concern. A combination of climate change, invasions by a nonnative fungus called white pine blister rust and outbreaks of mountain pine beetles have led to whitebark pines’ listing as a threatened species. An iconic tree that can live between 500 and 1,000 years, whitebark pines are common in high-elevation wilderness areas in the West, where they provide key habitat and food for wildlife, help regulate snowmelt and reduce soil erosion.

For the Confederated Salish and Kootenai Tribes, whitebark pines are culturally significant, with their seeds serving as an important traditional food. The tribes have declared they feel a responsibility “to do all that we can to ensure the survival of this beautiful and ancient tree,” and developed a restoration plan for the Flathead Reservation in Montana, which includes the Mission Mountains Tribal Wilderness. But in federal wilderness, their approach – active tending through prescribed fire and replanting – would likely not be allowed.

Smoke climbs above a wooded mountainside, with higher peaks in the background.
A lightning-ignited fire in 2022 in the Stephen Mather Wilderness in Washington is allowed to burn, with oversight and intervention as needed by federal land managers and firefighters.
Cedar Drake, CC BY-NC-ND

Reimagining federal wilderness management

Within tribal wildernesses, Indigenous nations honor spiritual connections between people and the land through relationships of reciprocity, as seen in the Mission Mountains Tribal Wilderness. There, members of the Confederated Salish and Kootenai Tribes are guaranteed the right not only to use the resources by hunting and fishing but also to connect with the landscape through cultural, spiritual and religious practices.

In recent years, managers at several federal wilderness areas have worked to include tribes in decisions about how these lands are stewarded. In California, a 2021 agreement gives the Federated Indians of Graton Rancheria a voice in the management of native tule elk at Tomales Point, most of which is part of the Phillip Burton Wilderness. In 2024, after pressure from the tribal community and others, the National Park Service began removing a 2-mile-long fence that prevented the tule elk from roaming freely and introduced new signs and interpretive programs that incorporated traditional ecological knowledge.

The long-debated question of how to best steward wilderness is increasingly urgent. In addition to its “untrammeled by man” provision, the Wilderness Act also says wilderness areas should be “protected and managed so as to preserve (their) natural conditions.” So the question remains whether people should leave small slices of nature entirely alone, even as humans alter the conditions of the planet, or whether some careful actions could help protect these precious places for generations to come.

Sean Parks, Jonathan Long, Jonathan Coop, Serra Hoagland, Melanie Armstrong and Don Hankins contributed to this article.

The Conversation

Clare E. Boerigter receives funding from the USDA Forest Service.

ref. How protecting wilderness could mean purposefully tending it, not just leaving it alone – https://theconversation.com/how-protecting-wilderness-could-mean-purposefully-tending-it-not-just-leaving-it-alone-272412

From moral authority to risk management: How university presidents stopped speaking their minds

Source: The Conversation – USA (2) – By Austin Sarat, William Nelson Cromwell Professor of Jurisprudence and Political Science, Amherst College

A growing number of colleges and universities have adopted policies in the last few years to remain politically neutral. kid-a/iStock / Getty Images Plus

Throughout the 20th century, college and university presidents spoke out on everything from wars to civil rights struggles, speaking with a sense of moral authority and attempting to guide the nation’s course.

Their language was typically direct and free of jargon.

“Democracy is the best form of government. It is worth dying for,” Robert M. Hutchins, president of the University of Chicago, said during a June 1940 convocation address, a year and a half before the U.S. formally entered World War II.

Since 2023 and the start of the Israel-Hamas war, a growing number of university and college presidents have remained silent on politics. Others have used ambiguous language that makes them seem like “neutral bureaucrats,” as Wesleyan University President Michael S. Roth wrote in 2023.

Nearly 150 universities adopted “institutional neutrality” pledges from 2023 through the end of 2024. This coincided with university leaders responding to Palestinian rights protests on their campuses.

This kind of neutral approach was on display in December 2023, when Republican Congresswoman Elise Stefanik asked several university presidents during a House of Representatives committee hearing if “calling for the genocide of Jews” would violate their schools’ rules.

The presidents of Massachusetts Institute of Technology, Harvard University and the University of Pennsylvania all answered vaguely, with hesitation.

“If the speech turns into conduct it can be harassment, yes,” said Elizabeth Magill, then president of the University of Pennsylvania. “It is a context-dependent decision, Congresswoman,” she continued.

Hedging, evading and speaking in platitudes has become the order of the day for university leaders, who are facing political and financial pressure under the Trump administration. Their communication style seems scripted by lawyers and communications officials, who are tasked with trying to keep universities out of trouble.

My scholarship on language and rhetoric suggests that how people speak – not just what they say – matters. This is especially true for university presidents and others in leadership positions.

A row of four women dressed formally are seen sitting at a table together.
Liz Magill, former president of the University of Pennsylvania, center left, is seen with other university presidents during a House Education and Workforce Committee hearing in December 2023.
Kevin Dietsch/Getty Images

Moral leadership in higher education

In 1921, Alexander Meiklejohn, then president of Amherst College, understood the importance of speaking on moral and political issues. He spoke out forcefully during a raging national controversy – namely, how the U.S. should respond to rising numbers of immigrants.

Calvin Coolidge, an Amherst grad and then vice president of the U.S., was among the political leaders who advocated for an immigration quota system favoring northern Europeans over immigrants from southern Europe or Asia.

Coolidge backed xenophobic immigration policies, writing in 1921: “There are racial considerations too grave to be brushed aside for any sentimental reasons. Biological laws tell us that certain divergent people will not mix or blend.”

Meiklejohn opposed immigration quotas, and he publicly said in 1921 that America could either “be an Anglo-Saxon aristocracy of culture or a Democracy,” but not both.

One year after he became president, Coolidge made his choice when he signed the Immigration Act into law in 1924. This law created strict immigration quotas, dependent on people’s nationality, and barred people from Asia from entering the U.S.

College presidents oppose the Vietnam War

Decades later, university presidents like Kingman Brewster Jr. at Yale and Theodore Hesburgh at Notre Dame publicly opposed the U.S. becoming involved in the Vietnam War – without hesitation or legalistic qualifiers.

“We cannot urge students to have the courage to speak out unless we are willing to do so ourselves,” Hesburgh said in 1970.

In 1971, Brewster publicly criticized the U.S. attacks on Southeast Asia, saying the bombings showed that “America had no concern for the sanctity of human life.”

His views made headlines in The New York Times and attracted the ire of Vice President Spiro Agnew, who criticized him in several speeches.

Twenty-five years later, Harold Shapiro, at the time the president of Princeton University, praised the vocal, “moral” leadership that Brewster and Hesburgh showed.

He noted: “There was a time when great figures presided over our nation’s campuses – intellectual giants who led their faculty, students, alumni, trustees, and nation with grace, vision, and moral purpose.”

Risk management takes center stage

Current university presidents who are choosing neutral and cautious approaches to political issues have reason to watch what they say.

The Trump administration has made widespread cuts to university funding, pressured schools into deals to restore their funding, and launched investigations into several schools for civil rights violations.

Others in higher education leadership roles have seen how the presidents of Harvard and the University of Pennsylvania dramatically resigned in 2023 amid widespread criticism over their response to campus protests and reports of antisemitism.

The presidents of Columbia University and the University of Virginia also resigned in 2024 and 2025, respectively.

When university presidents do speak publicly on the Trump administration’s cuts to research funding and resulting job losses on their campuses, their language is rife with ambiguity and familiar slogans.

Princeton President Christopher Eisgruber, for example, assured Princeton’s community in a February 2026 letter that “We will sustain our commitments to excellence in teaching and research … and our other defining values.”

“As always, we will be guided by the values and principles set out in the University’s mission statement and strategic framework,” Eisgruber added.

Other prominent university and college presidents, meanwhile, write phrases like “sustaining our capacity” or make a promise to “do everything I can to ensure we continue to live by our values.”

These words sound good, but, to me at least, ultimately mean nothing.

It matters what college presidents say

It is hard to measure the full influence that college and university presidents have, or to pinpoint exactly why what they say matters.

A 2001 survey by the American Council on Education found that “the vast majority of Americans rarely hear college presidents comment on issues of national importance, and when they do, they believe institutional needs rather than those of the students or the wider community drive such comments.”

Today, the same seems to be true.

Their choices about when and how to speak are important because, as law professor James Boyd White writes, what people say and write “helps establish an identity, or what the Greeks called an ethos – for oneself, for one’s audience, and for those one talks about.”

On college campuses and beyond, leaders’ words create “a community of people, talking to and about each other,” according to White.

That is never an easy job.

But, as Wesleyan University President Roth noted, it is always an important one, especially in a place like a university.

The Conversation

Austin Sarat does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. From moral authority to risk management: How university presidents stopped speaking their minds – https://theconversation.com/from-moral-authority-to-risk-management-how-university-presidents-stopped-speaking-their-minds-276581

Pittsburgh nurses are fighting for better staffing ratios — and the research backs them up

Source: The Conversation – USA (3) – By Anna Mayo, Assistant Professor of Organizational Behavior, Carnegie Mellon University

New York nurses went on strike in January 2026, protesting unsafe staffing levels while demanding better patient safety, increased wages, improved working conditions and fairer contracts. Timothy A. Clary/AFP via Getty Images

Since nursing contract negotiations heated up in January 2026 at UPMC Magee-Womens Hospital in Pittsburgh and at UPMC Altoona, the debate has shifted from standard wage disputes to a more fundamental question of patient safety: the nurse-to-patient ratio.

In fall 2025, 900 nurses at UPMC’s main hospitals in Pittsburgh voted to be represented by the Service Employees International Union, or SEIU. In January 2026, the new union held its first meeting with UPMC management to negotiate a contract.

The New York State Nurses Association’s approach has become a primary blueprint for nursing labor strategy nationwide. By framing staffing ratios as a nonnegotiable safety standard, NYSNA shifted the focus of contract negotiations from simple wage increases to enforceable clinical mandates. At the time of publication of this article, NYSNA and NewYork-Presbyterian/Columbia hospital had reached a tentative deal, though the provisions of the agreement have not been made public.

Anna Mayo, assistant professor of organizational behavior at Carnegie Mellon University, explains the workload and staffing concerns that nurses face both in Pittsburgh hospitals and nationwide.

What are the key concerns in the nursing contract negotiations at Magee?

One big concern relates to nurse staffing, and specifically the nurse-to-patient ratio. Other issues include wages, health benefits, parental and sick leave, work hours and workplace violence mitigation measures. Magee is one of Pittsburgh’s biggest labor and delivery and neonatal centers, and nurses there say they’ve been working with “unsafe patient loads.”

Magee nurses held a news conference in January 2026 advocating for more time with their patients by establishing minimum nurse-to-patient ratios. The main issue the nurses want resolved in their first collective bargaining agreement is a cap on how many patients a nurse can be assigned per shift. If Magee were to follow recommended industry standards set by the Association of Women’s Health, Obstetric and Neonatal Nurses, one nurse would be assigned to every patient in active labor.

An outdoor building shot of UPMC Magee Women's Hospital.
UPMC Magee Womens Hospital is one of Pittsburgh’s biggest labor and delivery and neonatal centers.
AP Photo/Gene J. Puskar

Is there evidence linking nursing staffing levels to patient outcomes like mortality, infections or readmissions?

The short answer is yes. There is general agreement that having “safe” nursing staffing levels is related to better patient outcomes, but what exactly constitutes safe staffing is less clear.

These ratios commonly account for a nurse’s workload based on both numbers of patients and patient acuity – a measure of how much time a nurse needs to spend with a patient. Relevant patient factors include the severity of the case and need for medication or other interventions, patient mobility and status as a new admission or being close to discharge. Factors like a nurse’s experience level and the floor layout might also be considered in a measure of acuity. For example, patients who are farther away from each other can require more time for one nurse to monitor.
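As a rough illustration of how such a measure might combine patient count with acuity – this is a hypothetical sketch, not any hospital’s actual formula, and the weights and patient attributes are invented – an acuity-weighted workload could be tallied like this:

```python
# Hypothetical sketch of an acuity-weighted nurse workload score.
# The weights and patient attributes are illustrative only, not a
# real hospital's staffing formula.

def acuity_score(severity, needs_meds, new_admission):
    """Combine patient factors into a rough acuity weight."""
    score = severity              # e.g., 1 (stable) to 3 (high severity)
    if needs_meds:
        score += 1                # frequent medication or interventions
    if new_admission:
        score += 1                # admissions and discharges take extra time
    return score

def nurse_workload(patients):
    """Sum acuity weights across a nurse's assigned patients."""
    return sum(acuity_score(**p) for p in patients)

# Two nurses with the same patient count but very different workloads
nurse_a = [{"severity": 1, "needs_meds": False, "new_admission": False}] * 4
nurse_b = [{"severity": 3, "needs_meds": True, "new_admission": True}] * 4

print(nurse_workload(nurse_a))  # 4
print(nurse_workload(nurse_b))  # 20
```

The point of the sketch is that a raw 1-to-4 ratio treats both nurses identically, while an acuity-weighted measure shows one carrying five times the load of the other.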

Even with advances in the use of artificial intelligence and electronic health record data to generate real-time predictions of acuity, current modeling is imperfect.

A 2025 study shows that how busy a nurse feels is often more important than the number of patients they have or current estimates of how much care those patients require. Even if the official numbers look OK, a nurse’s personal experience of the workload is a better predictor of whether they will miss a care task. Because there is not yet a clear and agreed-upon way to measure this, nurses and hospital leadership – who view the problem from their distinct positions – often disagree on what safe staffing actually looks like, which can lead to conflict.

A group of Black nurses gather around a smartboard patient chart.
Having safe staffing is better for patient outcomes, but the definition of ‘safe’ varies at each hospital.
Visual Vic/Moment Collection via Getty Images

As someone who studies the coordination of health care teams, I see a missing piece in the conversation about nurse staffing: the rest of the team. This could include other medical providers, therapists, dietitians, social workers and diagnostic staff.

In reality, you could have two nurses in the same unit with the same number of patients who appear to need the same amount of care. But one might be overtaxed while the other is doing fine, at least in part because of how the broader patient care teams are structured and working together.

When nursing units are understaffed, what happens to other health care workers on their team?

Evidence about understaffing and the use of replacement workers is largely focused on patient outcomes, and it is mixed. One 2022 meta-analysis found no difference in patient outcomes during or outside of health care worker strikes. However, a study of New York data that focused specifically on nursing strikes suggests an increased risk of both mortality and readmission.

Research on health care teams, though, suggests there is also risk for teamwork breakdowns. Having replacement workers during a strike inherently creates patient care teams where team members haven’t worked together before. This lack of shared experience can negatively affect teamwork.

Are there any solutions?

Negotiations research suggests the key to conflict management is to understand the other party’s underlying interests. Nurses are clearly burnt out, and that should be taken seriously. However, accounting for the bigger picture – staffing decisions at the team level – could reduce the stress on nurses.

Three nurses work on patient charts outside patient rooms in a hospital.
The use of temporary replacement nurses when hospitals are understaffed is a common tactic.
David L. Ryan/The Boston Globe via Getty Images

For instance, how care teams are grouped can have serious implications as well. A nurse’s experience will depend on how difficult and time-consuming it is to coordinate and care for each patient. If a nurse has three patients on three different care teams instead of one care team for all three, the coordination costs are far higher.
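To make that arithmetic concrete, here is a minimal sketch – with invented team labels – of how fragmenting a nurse’s patients across care teams multiplies the number of teams the nurse must coordinate with:

```python
# Minimal illustration of coordination load; the team labels are invented.

def coordination_contacts(patient_teams):
    """Count the distinct care teams a nurse must coordinate with."""
    return len(set(patient_teams))

# Same three patients, two hypothetical groupings
fragmented = ["team_a", "team_b", "team_c"]  # a different team per patient
team_based = ["team_a", "team_a", "team_a"]  # one shared team for all three

print(coordination_contacts(fragmented))  # 3
print(coordination_contacts(team_based))  # 1
```

Same patient count, same nominal ratio – but one assignment requires tracking three sets of plans, providers and communication channels, the other only one.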

There is some evidence of the benefits of team-based staffing in primary care and emergency departments. It could soften how drastic the jump in workload feels as a nurse’s load grows from one patient to two, three and so on. Additionally, my research suggests that low-cost interventions that spark increased nurse involvement can improve team coordination and patient outcomes, and so they might also be a useful lever for easing a nurse’s felt workload.

Looking at how patient care teams work together – instead of just focusing on nurses – might reveal new ways to help patients and staff. Solving these problems could reduce the need for strikes or protests in the first place and help hospital leaders better support their employees, their patients and the organization as a whole.

The Conversation

Anna Mayo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Pittsburgh nurses are fighting for better staffing ratios — and the research backs them up – https://theconversation.com/pittsburgh-nurses-are-fighting-for-better-staffing-ratios-and-the-research-backs-them-up-274577

The cost of casting animals as heroes and villains in conservation science

Source: The Conversation – USA (3) – By Adam Meyer, PhD Candidate in Ecosystem Ecology, Memorial University of Newfoundland

When species are described as ‘destructive’ or ‘harmful’ without sufficient context, it can shape how people perceive and treat them. Beata Whitehead/Moment via Getty Images

Scientists are philosophers, explorers, data collectors and number crunchers. They are also storytellers, placing data within a broader scientific and societal context. How they tell these stories matters.

In our work as ecologists, we find that the “hero-villain” narrative trope is a popular tool in ecology and conservation writing. For example, wild pigs – a hybrid of human-introduced wild boars and domesticated pigs – are often characterized in science articles as “pest animals” that “devastate” or “destroy” ecological communities by preying on “vulnerable” species. One study deemed them the real “big bad wolf.”

This framing reflects not technical terminology but storytelling decisions meant to help readers understand the data and results to come.

But this way of storytelling has costs. In our recent paper, “Beyond Hero and Villain Narratives in Ecology and Conservation Science,” published in 2025 in the journal BioScience, we demonstrate that simplifying complex ecological stories into good guys and bad guys limits the way ecologists and conservation scientists understand and communicate science.

When villains don’t fit the script

In our paper, we show that using the hero-villain trope in ecology and conservation writing has three problems.

First, by definition, a villain is not only doing bad but is morally bad. As a result, villains are judged and held accountable for their deeds. But plants, animals and ecosystems are not morally responsible for their actions because they do not operate within human-constructed moral frameworks. The hero-villain trope therefore invites an inappropriate moral interpretation of nature.

When species are reported as destructive or harmful without careful context, audiences can easily internalize the idea that the species is inherently “bad” or “malicious,” which shapes how people treat it.

For example, human-introduced predators such as rats and stoats in New Zealand are often villainized in academic literature, described as “disaster on four paws” and pitted against the “fragile populations of unique birds, lizards and insects.”

This framing then can convince people that excessively painful or violent eradication methods, such as slow-acting poison, are justified.

No clear-cut roles

Second, real ecosystems don’t have clear-cut heroes or villains. Rather, species’ roles in ecosystems are complex. For example, white-tailed deer perform ecosystem functions such as helping disperse seeds throughout their habitat, yet their presence can also lead to biodiversity loss due to high levels of plant consumption.

Therefore, reducing a species to “good” or “bad” can misrepresent the multidimensional roles of animals in ecosystems, which frequently shift.

An animal with a thick, shaggy coat, standing in a snow-covered landscape.
A musk ox can affect ecosystems in very different ways, depending on its environment.
imageBROKER/Martina Melzer via Getty Images

For example, due to the complex interplay between animals and soil properties, musk oxen in wet tundra environments can increase ecosystem carbon storage, while in dry tundra environments they can decrease it.

‘Good’ or ‘bad’ depends on human values

Finally, the hero–villain framing embeds cultural and ethical assumptions without always acknowledging them. These assumptions often reflect culturally specific beliefs about which species and ecosystems are valued.

For instance, many cultures value native species – typically meaning a species that has evolved in and occupied an ecosystem without human introduction. As a result, introduced animals are frequently deemed responsible for native species extinctions, even when evidence is lacking.

But whether a species is “native” is not automatically good or bad. Nonnative species can change ecosystems in ways that people value, such as restoring ecosystem diversity and functioning that was lost from human-driven extinctions. At the same time, nonnative species can also cause changes that people do not value, such as reducing abundance of native species.

The key point is that deciding which of these outcomes is “good” or “bad” depends on human values. When scientists describe species as villains without explaining these values, the framing can present values as objective scientific conclusions.

A different way to tell the story

Our paper highlights alternative narrative structures that scientists can use to engage readers without creating heroes and villains in academic writing and storytelling.

For example, a place-based narrative structure focuses on the description of a place and the characters within – think “Planet Earth,” the BBC’s landmark nature documentary series that immerses viewers in different ecosystems around the world. This narrative structure guides the audience through a landscape and allows for the exploration of many characters in a nuanced, value-neutral and compelling way.

A classic ecological example is Henry Chandler Cowles’ study of the Michigan sand dunes, which frames ecological dynamics through the instability of place itself. “Perhaps no topographic form is more unstable than a dune,” Cowles wrote, as plants must adapt “within years rather than centuries, the penalty for lack of adaptation being certain death.” The drama within the narrative comes from place – its constraints, its pressures, its opportunities.

Another powerful narrative tool we highlight that can be applied to academic storytelling is the “Will they, or won’t they?” structure, the kind of tension you see in “Pride and Prejudice” or “When Harry Met Sally.” This structure can work surprisingly well in ecology.

In our paper, we highlight partial migration – whereby some animals in a population migrate while others don’t – as an example of how someone could use this narrative tool.

Scientists are still figuring out why certain individuals make different choices. Is it driven by food availability, the presence of predators, or behaviors acquired by social learning?

Framing research narratives around that central, unresolved question – will an individual animal migrate or won’t they? – builds suspense and keeps readers engaged, without casting a hero or villain.

There’s no final battle scene in conservation. No singular villain to defeat, no final victory for the hero. Scientists know that understanding nature requires humility and a willingness to revise their stories as new information is gained.

By moving beyond heroes and villains, scientists can tell narratives that make space for nuance, recognize their own biases, and acknowledge conflict without caricature.

The Conversation

Adam Meyer receives funding from the Natural Sciences and Engineering Research Council of Canada.

Kristy Ferraro receives funding from the Natural Sciences and Engineering Research Council of Canada.

ref. The cost of casting animals as heroes and villains in conservation science – https://theconversation.com/the-cost-of-casting-animals-as-heroes-and-villains-in-conservation-science-263883

Taboo tics like shouting curses and slurs are uncommon in Tourette syndrome − but people who have them suffer harsh social stigma

Source: The Conversation – USA (3) – By Rena Zito, Associate Professor of Sociology, Elon University

Tourette’s tics can include obscenities and slurs. These taboo words are emotionally charged and socially significant, so they tend to be more strongly encoded in the brain’s wiring. Dominic Lipinski/Stringer via Getty Images

John Davidson, whose life inspired the award-winning biopic “I Swear,” involuntarily shouted a racial slur during Michael B. Jordan and Delroy Lindo’s speech at the BAFTA film awards in London on Feb. 22, 2026. The moment went viral, and the ensuing backlash ignited public debate about Tourette syndrome and its most shocking symptom.

Davidson has been a familiar figure to British audiences since his teenage years, when he first appeared in a BBC documentary about Tourette syndrome. He has since devoted decades to public education about the condition, earning him a distinguished honor from Queen Elizabeth II in 2019.

The reactions to Davidson’s tics at the BAFTA awards make clear that Tourette syndrome remains a deeply misunderstood condition, especially when it comes to obscene language tics, called coprolalia.

I am a sociologist who studies the social dimensions of Tourette syndrome, including the stigma of coprolalia. I also live with Tourette syndrome. Most people with Tourette’s will never experience these taboo tics, but those who do bear the weight of society’s judgment.

What is Tourette syndrome?

Tourette syndrome is a neurodevelopmental condition that affects about 0.5% to 0.7% of the population. It is characterized by involuntary movements and sounds called tics that usually begin in childhood and, for some people, continue into adulthood.

Tics consist of movements, such as eye blinking or shoulder shrugging, or vocalizations, such as throat clearing or brief sounds. Some involve a single movement or sound, while others combine several movements or involve longer verbalizations – for example, finger snapping followed by a head jerk, or repeated words or phrases.

Coprolalia, or involuntary obscene or offensive speech, is one of the most widely misunderstood features of Tourette’s. About 10% to 20% of people with Tourette syndrome experience this type of tic.

Fewer than 1 in 5 people with Tourette’s experience taboo tics, such as coprolalia, but they can have an outsized effect on people’s lives.

Tics often change over time in intensity, frequency and form, with relatively quiet periods followed by phases when symptoms are more severe. Many people feel an unpleasant building sensation before a tic, called a premonitory urge, describing it like an itch that needs to be scratched. Others experience tics more suddenly, like an unexpected sneeze. Some can temporarily suppress their tics, often at the cost of greater discomfort later, while others are unable to suppress them.

Tics can be physically taxing, leading to acute and chronic pain and injury. People with Tourette syndrome also frequently face stigma, discrimination and the pressure to monitor or hide their tics, which can take a serious psychological toll. People with Tourette syndrome are at increased risk of self-harm and suicide.

The causes of Tourette syndrome aren’t fully understood, but it has a strong genetic component. Although it often runs in families, it can also be caused by birth complications or infections.

Understanding taboo tics like coprolalia

Even though a minority of people with Tourette syndrome experience coprolalia, media portrayals of Tourette’s disproportionately focus on outbursts of profanity. This “swearing disease” stereotype misrepresents how most people with the condition experience it. But because taboo tics are shocking and unexpected, they loom larger in the public imagination than more common, less dramatic tics.

Coprolalia is only one form of taboo tic. Others include copropraxia, or obscene gestures, and non-obscene but socially inappropriate tics, such as making kissing sounds, spitting or touching others.

Baylen Dupree, star of TLC show Baylen Out Loud
Baylen Dupree, star of the TLC show ‘Baylen Out Loud,’ has severe Tourette’s and experiences coprolalia.
Slaven Vlasic/Stringer via Getty Images

One of the most confusing aspects of taboo tics is that they can be contextually relevant while also being involuntary. Consider, for example, the person who tics “I have a gun!” when stopped by law enforcement. Cues in the social environment can trigger tics, especially in moments of heightened stress.

Why profanity in particular? Tics arise from dysfunction in neural circuits involved in movement and impulse control. Taboo words are emotionally charged and socially significant, so they tend to be more strongly encoded in the brain’s language and emotional networks than neutral words. This helps explain why coprolalia can also occur, albeit rarely, in people with brain lesions, neurodegenerative conditions and seizure disorders.

The challenges of living with coprolalia

The social world can be precarious for people with Tourette syndrome who experience taboo tics like coprolalia. These tics are often associated with more severe symptoms overall, more co-occurring conditions and greater social difficulty.

My research on coprolalia stigma reveals the depths of distress public misconceptions can cause.

A common misconception is that tics reveal what people “really” think and feel. In reality, tics often compel people to say or do precisely what they most wish to avoid. The stakes are especially high when tics involve slurs or insults. As one interview participant told me, “It’s like my brain weaponizes my most polite intentions and turns them into the cruelest things. And it’s scary to go outside … to have this sudden confrontation mechanism inside of me that I absolutely do not want.”

These socially inappropriate tics can draw unwanted attention and lead to exclusion, bullying, hostile encounters and barriers to employment. As another participant put it, “There’s no jobs I can work where I can get the accommodation that it’s okay for me to cuss at my boss.”

Anticipating these reactions, many people with prominent coprolalia withdraw from public life or carry the burden of constant disclosure and education.

A second misconception is that coprolalia always looks like someone shouting obscenities in public. While that does happen for some people, like Davidson at the BAFTA awards, others can suppress, mask or carefully manage their tics in social settings. Both experiences of coprolalia are stressful. Like other tics, coprolalia can come and go over time.

The stress of taboo tics extends beyond the individual. Families frequently describe feeling helpless in the face of their child’s distress, unsupported by schools and judged by others when these tics occur.

People with Tourette syndrome, and especially those with taboo tics, need understanding and support to participate fully and safely in public life.

The Conversation

Rena Zito does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Taboo tics like shouting curses and slurs are uncommon in Tourette syndrome − but people who have them suffer harsh social stigma – https://theconversation.com/taboo-tics-like-shouting-curses-and-slurs-are-uncommon-in-tourette-syndrome-but-people-who-have-them-suffer-harsh-social-stigma-276721

Why does pain last longer for women? Immune cells may be the culprit

Source: The Conversation – USA – By Geoffroy Laumet, Associate Professor of Physiology and Neuroscience, Michigan State University

Why some people recover more quickly from pain may come down to hormone levels. andreswd/E+ via Getty Images

Pain is something most people experience after an injury, whether from a sprained ankle, surgery or car accident. Normally pain fades as the body heals. But it may last longer in women than in men, making women more likely to develop chronic pain.

For decades, differences in pain between men and women have often been attributed to psychological, emotional or social factors. Because of that, persistent pain in women is often overlooked in care.

However, my research team’s newly published study suggests that the immune system may play a role in why recovery from pain differs between men and women. Doctors have long thought that the immune system increases pain by causing inflammation, which people often experience as redness and swelling.

But recent work from my lab and others suggests that immune cells may also be critical to helping pain resolve, and differences in how these cells function between men and women may influence how quickly pain goes away.

Hormones and immune cells

I am a neuroimmunologist who studies how the nervous and immune systems communicate. My research team aims to understand why pain sometimes persists long after an injury has healed, eventually becoming chronic.

To study this process, we combined experiments in mice with data from people who had been involved in motor vehicle collisions. This type of injury is a common trigger for long-term musculoskeletal pain, making it an ideal situation to study how acute pain becomes chronic.

We focused on a specific molecule called interleukin-10, or IL-10, that helps reduce inflammation, measuring its levels both in mice after skin injury and in people in the emergency room after a motor vehicle accident. Surprisingly, we found that IL-10 doesn’t just calm inflammation. It also signals directly to pain-sensing nerve cells to switch them off. In other words, IL-10 helps pain go away.

We identified that IL-10 was mostly produced by a type of immune cell called monocytes that circulate in the blood and travel to injured tissues.

Person lying on couch, hands over forehead, eyes and stomach
A variety of factors influence how long pain lasts.
Ekaterina Goncharova/Moment via Getty Images

Across both mice and humans, we found that males tended to recover from pain more quickly than females. The reason appears to lie in how monocytes behave after injury. In males, these immune cells were more likely to produce IL-10, the molecule that helps resolve pain. In females, this response was less pronounced.

Importantly, we also found that testosterone influences how much IL-10 these immune cells produce. Higher levels of testosterone in males promoted higher production of IL-10 by monocytes.

This finding suggests that hormonal signals may shape the body’s ability to naturally turn off pain after injury.

Avenues for treatment

Our results point to a shift in how scientists think about pain: The immune system may be not only a driver of pain but also a key player in resolving it. Differences in immune cell function could explain why some people recover more quickly from injury while others go on to develop chronic pain.

Understanding these biological pathways could eventually lead to new treatments. Instead of simply blocking pain signals, future therapies might aim to boost the body’s own pain resolution system. Helping immune cells calm down pain-sensing neurons more effectively could more quickly restore comfort after injury.

While more research is needed, these results highlight a promising new direction in the effort to prevent and treat chronic pain and better understand sex differences in pain.

The Conversation

Geoffroy Laumet receives funding from US NIH and DoD CPMRP. He is a member of the US Association for the Study of Pain.

ref. Why does pain last longer for women? Immune cells may be the culprit – https://theconversation.com/why-does-pain-last-longer-for-women-immune-cells-may-be-the-culprit-276591

1 protein to rule them all – why crowning the protein that makes jellyfish glow green as a model can help scientists streamline biology

Source: The Conversation – USA – By Marc Zimmer, Professor of Chemistry, Connecticut College

Green fluorescent protein has an iconic structure. National Institute of General Medical Sciences/National Institutes of Health via Flickr, CC BY-NC

Fruit flies, mice, zebrafish, yeast and the tiny worm C. elegans are model organisms that have carried modern biology on their backs.

Scientists did not choose them for their charisma. They chose them because these organisms illuminate biological principles shared across many species. Their biology is simple enough for researchers to master yet deep enough to keep delivering new insights a century later.

But biologists don’t have a common reference point for a vast area of the field: proteins, the cell’s doers. Proteins catalyze chemical reactions, give cells their structure and help them communicate with each other. Most organisms use tens of thousands of protein types, and each can be mutated, modified and measured in different ways and in countless environments. Thanks in part to artificial intelligence, researchers are also generating new proteins faster than they can study them.

Without a shared reference point, study results are hard to compare. Two labs can study the same protein under different experimental conditions and end up with findings that do not line up. The result is a scientific literature full of isolated findings that are sometimes duplicated and difficult to generalize.

As a computational chemist who studies fluorescent proteins, I argue that labs also need a set of model proteins. Just as fruit flies and mice anchor whole fields, model proteins can help researchers build on each other’s findings and better understand the fundamentals of biology.

Two mice with glowing eyes, ears and tails flanking a non-glowing mouse
Green fluorescent protein illuminates what’s under study.
Moen et al/BMC Cancer, CC BY-SA

Green fluorescent protein as a model

If model proteins are to be yardsticks, the best place to start is with proteins researchers already reach for when they need a reliable standard. Green fluorescent protein is at the top of that list.

Green fluorescent protein, first isolated from a jellyfish, glows bright green under blue light. Biologists fuse it to other proteins to track where those proteins go and when they are made.

Green fluorescent protein is already a de facto reference point for the field, often used as a practice protein before researchers attempt bigger goals. In the early 2000s, researchers used the protein and a yellow variant in cloned pigs to show that foreign genes could be added to large mammals and reliably work. Green fluorescent protein made it obvious that the new gene had been successfully incorporated: researchers could literally see, from the glow, that the pigs’ cells were making the protein the inserted genes encoded.

Green fluorescent protein is a Nobel Prize-winning discovery.

The long-term aim of these experiments was to engineer pigs to produce specific human proteins that help the immune system accept a pig organ rather than reject it. Green fluorescent protein helped show that the basic engineering of this idea could work, which eventually led to the first pig-to-human kidney transplants.

The use of green fluorescent protein is not the endpoint of most studies but the proof step. It allows researchers to say: yes, the new gene is there, the cell is making the protein, the protein is working, and the approach will probably work with other proteins.

AI is forcing benchmarks

When researchers are hunting for new proteins to use as enzymes, treatments or materials, protein language models and other generative AI methods can propose huge numbers of plausible protein sequences for them to test. While some AI-designed proteins do work in the lab and can help reduce trial and error, many candidate proteins fail.

Fluorescent proteins can be a useful stress test for protein language models. The hardest part of using AI to generate proteins is proving that the sequences it suggests fold into properly working proteins.

Green fluorescent protein makes that proof straightforward because fluorescence allows you to quickly see that the protein has folded correctly. You can predict the brightness, stability or color of fluorescent proteins, then directly check whether the AI-generated protein matches. Like a mouse study that hints a drug might work in humans, green fluorescent protein doesn’t guarantee an AI model will succeed on every protein, but it’s a quick, widely trusted sign that the design pipeline is doing something right.

Row of test tubes with neon liquids of various colors glowing in the black light
Fluorescent proteins make experimentation visual.
Erik A. Rodriguez/Wikimedia Commons, CC BY-SA

Calling green fluorescent protein a model protein would also improve how biology is taught. Like classic model organisms, green fluorescent protein is safe and visual. It is also forgiving, producing a clear fluorescent signal even when students’ experimental designs aren’t perfect.

These traits make it an educational gateway to ideas such as gene expression, protein folding and bioengineering. It can turn an abstract concept into something you can see in a test tube or under a microscope.

Model organisms work because scientific communities agreed to build around common reference points. I believe protein science is now vast enough to need the same, and naming green fluorescent protein as a model protein could make it easier to connect discoveries, teach students and assess new tools.

The glow, in other words, can still guide scientists – not just by dazzling, but by helping the whole field add up.

The Conversation

Marc Zimmer received funding from NIH to research fluorescent proteins.

ref. 1 protein to rule them all – why crowning the protein that makes jellyfish glow green as a model can help scientists streamline biology – https://theconversation.com/1-protein-to-rule-them-all-why-crowning-the-protein-that-makes-jellyfish-glow-green-as-a-model-can-help-scientists-streamline-biology-274385

‘Probably’ doesn’t mean the same thing to your AI as it does to you

Source: The Conversation – USA – By Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California

Are you sure you and the AI chatbot you’re using are on the same page about probabilities? Malte Mueller/fStop via Getty Images

When a human says an event is “probable” or “likely,” people generally have a shared, if fuzzy, understanding of what that means. But when an AI chatbot like ChatGPT uses the same word, it’s not assessing the odds the way we do, my colleagues and I found.

We recently published a study in the journal NPJ Complexity that suggests that, while large language model AIs excel at conversation, they often fail to align with humans when communicating uncertainty. The research focused on words of estimative probability, which include terms like “maybe,” “probably” and “almost certain.”

By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps. While the models tend to agree with humans on extremes like “impossible,” they diverge sharply on hedge words like “maybe.” For example, a model might use the word “likely” to represent an 80% probability, while a human reader assumes it means closer to 65%.
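A toy sketch makes this kind of comparison concrete. All the numbers below are invented for illustration; they are not the values elicited in the study:

```python
# Hypothetical word-to-probability mappings for a few words of estimative
# probability. Both dictionaries use made-up numbers for illustration only.
human = {"impossible": 0.02, "maybe": 0.45, "likely": 0.65, "almost certain": 0.95}
model = {"impossible": 0.03, "maybe": 0.62, "likely": 0.80, "almost certain": 0.96}

# Absolute human-model divergence for each word, largest gaps first
gaps = sorted(
    ((word, abs(human[word] - model[word])) for word in human),
    key=lambda pair: pair[1],
    reverse=True,
)
for word, gap in gaps:
    print(f"{word:>14}: gap of {gap:.2f}")
```

In this toy data, the hedge words “maybe” and “likely” show the largest human-model gaps while the extremes nearly agree, mirroring the pattern the study reports.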

This could be because humans can interpret words such as “likely” and “probable” based more on contextual cues and personal experiences. In contrast, large language models may be averaging over conflicting usages of those words in their training data, leading to divergences with human interpretations.

Our study also found that large language models are sensitive to gendered language and the specific language used for prompting. When a prompt changed from “he” to “she,” the AI’s probability estimates often became more rigid, reflecting biases embedded in its training data. When a prompt changed from English to Chinese, the AI’s probability estimates often shifted, possibly due to differences between English and Chinese in how people express and understand uncertainty.

a multicolor three-pane graphic with icons representing humans and robots, and text and arrows
AI chatbots don’t interpret ‘probably’ and ‘maybe’ the same way you do.
Mayank Kejriwal

Why it matters

Far from being a linguistic quirk, this misalignment is a fundamental challenge for AI safety and human-AI interaction. As large language models are increasingly used in high-stakes fields like health care, government policy and scientific reporting, the way they communicate risk becomes a matter of public trust.

If an AI assistant helping a doctor, for instance, describes a side effect as “unlikely,” but the model’s internal calculation of “unlikely” is much higher than the doctor’s interpretation, the resulting decision could be flawed.

What other research is being done

Scientists have studied how humans quantify uncertainty since the 1960s, a field pioneered by CIA analysts to improve intelligence reporting. More recently, there has been an explosion in large language model literature seeking to look under the hood of neural networks to better understand their “behaviors” and linguistic patterns.

Our study adds a layer of complexity by treating the interaction between humans and artificial intelligence as a system, much like a biological one, in which meaning can degrade. It moves beyond simply measuring whether an AI is “smart” and instead asks whether it is aligned.

Other researchers are currently exploring whether so-called chain-of-thought prompting – asking the AI to show its work – can fix these errors. However, our study found that even advanced reasoning doesn’t always bridge the gap between statistical data and verbal labels.

What’s next

A goal for future AI development is to create models that don’t just predict the next likely word but actually understand the weight of the uncertainty they are conveying. Researchers are calling for more robust consistency metrics to ensure that if a model sees a 10% chance in the data, it chooses the same word every time.
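One simple shape such a consistency check could take is sketched below; the scoring rule and the word choices are invented for illustration, not taken from the study or any proposed metric:

```python
# Sketch of a verbal-consistency check: does a model pick the same word
# every time it is shown the same underlying probability? The responses
# below are invented for illustration.
from collections import Counter

# Hypothetical words a model chose across repeated prompts at p = 0.10
choices_at_10_percent = ["unlikely", "unlikely", "doubtful", "unlikely", "unlikely"]

counts = Counter(choices_at_10_percent)
modal_word, modal_count = counts.most_common(1)[0]

# Consistency score: fraction of trials using the modal word (1.0 = perfectly consistent)
consistency = modal_count / len(choices_at_10_percent)
print(f"Modal word: {modal_word!r}, consistency: {consistency:.2f}")
```

A perfectly consistent model would score 1.0; in this made-up run it scores 0.8, because one of the five responses strayed from “unlikely” to “doubtful.”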

As we move toward a world where AI summarizes scientific papers and manages people’s schedules, making sure that “probably” means “probably” is a vital step in making these systems reliable partners rather than just sophisticated parrots.

The Research Brief is a short take on interesting academic work.

The Conversation

Mayank Kejriwal receives funding from the Defense Advanced Research Projects Agency and the National Institutes of Health.

ref. ‘Probably’ doesn’t mean the same thing to your AI as it does to you – https://theconversation.com/probably-doesnt-mean-the-same-thing-to-your-ai-as-it-does-to-you-275626