What are climate tipping points? They sound scary, especially for ice sheets and oceans, but there’s still room for optimism

Source: The Conversation – USA – By Alexandra A Phillips, Assistant Teaching Professor in Environmental Communication, University of California, Santa Barbara

Meltwater runs across the Greenland ice sheet in rivers. The ice sheet is already losing mass and could soon reach a tipping point. Maria-José Viñas/NASA

As the planet warms, it risks crossing catastrophic tipping points: thresholds where Earth systems, such as ice sheets and rain forests, change irreversibly over human lifetimes.

Scientists have long warned that if global temperatures warmed more than 1.5 degrees Celsius (2.7 Fahrenheit) above preindustrial levels, and stayed high, they would increase the risk of passing multiple tipping points. For each of these tipping elements, such as the Amazon rain forest or the Greenland ice sheet, hotter temperatures drive changes, including melting ice and drier forests, that leave the system more vulnerable to further change.

Worse, these systems can interact. Freshwater melting from the Greenland ice sheet can weaken ocean currents in the North Atlantic, disrupting air and ocean temperature patterns and marine food chains.

World map showing locations for potential tipping points.
Pink circles show the systems closest to tipping points. Some would have regional effects, such as loss of coral reefs. Others are global, such as the beginning of the collapse of the Greenland ice sheet.
Global Tipping Points Report, CC BY-ND

With these warnings in mind, 194 countries a decade ago set 1.5 C as a goal they would try not to cross. Yet in 2024, the planet temporarily breached that threshold.

The term “tipping point” is often used to illustrate these problems, but apocalyptic messages can leave people feeling helpless, wondering if it’s pointless to slam the brakes. As a geoscientist who has studied the ocean and climate for over a decade and recently spent a year on Capitol Hill working on bipartisan climate policy, I still see room for optimism.

It helps to understand what a tipping point is – and what’s known about when each might be reached.

Tipping points are not precise

A tipping point is a metaphor for runaway change. Small changes can push a system out of balance. Once past a threshold, the changes reinforce themselves, amplifying until the system transforms into something new.

Almost as soon as “tipping points” entered the climate science lexicon — following Malcolm Gladwell’s 2000 book, “The Tipping Point: How Little Things Can Make a Big Difference” — scientists warned the public not to confuse global warming policy benchmarks with precise thresholds.

A tall glacier front seen from above shows huge chunks of ice calving off into Disko Bay.
The Greenland ice sheet, which is 1.9 miles (3 kilometers) thick at its thickest point, has been losing mass for several years as temperatures rise and more of its ice is lost to the ocean. A tipping point would mean runaway ice loss, with the potential to eventually raise sea level 24 feet (7.4 meters) and shut down a crucial ocean circulation.
Sean Gallup/Getty Images

The scientific reality of tipping points is more complicated than crossing a temperature line. Instead, different elements in the climate system have risks of tipping that increase with each fraction of a degree of warming.

For example, the beginning of a slow collapse of the Greenland ice sheet, which could raise global sea level by about 24 feet (7.4 meters), is one of the most likely tipping elements in a world more than 1.5 C warmer than preindustrial times. Some models place the critical threshold at 1.6 C (2.9 F). More recent simulations estimate runaway conditions at 2.7 C (4.9 F) of warming. Both simulations consider when summer melt will outpace winter snow, but predicting the future is not an exact science.

Bars with gradients show the rising risk as temperatures rise that key systems, including Greenland ice sheet and Amazon rain forest, will reach tipping points.
Gradients show science-based estimates from the Global Tipping Points Report of when key global or regional climate tipping points are increasingly likely to be reached. Every fraction of a degree of warming increases the likelihood, reflected in the warmer colors.
Global Tipping Points Report 2025, CC BY-ND

Forecasts like these are generated using powerful climate models that simulate how air, oceans, land and ice interact. These virtual laboratories allow scientists to run experiments, increasing the temperature bit by bit to see when each element might tip.

Climate scientist Timothy Lenton first identified climate tipping points in 2008. In 2022, he and his team revisited the temperature ranges at which each system could collapse, integrating more than a decade of additional data and more sophisticated computer models.

Their nine core tipping elements include large-scale components of Earth’s climate, such as ice sheets, rain forests and ocean currents. They also simulated thresholds for smaller tipping elements that pack a large punch, including die-offs of coral reefs and widespread thawing of permafrost.

A few fish swim among branches of a white coral skeleton during a bleaching event.
The world may have already passed one tipping point, according to the 2025 Global Tipping Points Report: Coral reefs are dying as marine temperatures rise. Healthy reefs are essential fish nurseries and habitat and also help protect coastlines from storm erosion. Once they die, their structures begin to disintegrate.
Vardhan Patankar/Wikimedia Commons, CC BY-SA

Some tipping elements, such as the East Antarctic ice sheet, aren’t in immediate danger. The ice sheet’s stability is due to its massive size – nearly six times that of the Greenland ice sheet – making it much harder to push out of equilibrium. Model results vary, but they generally place its tipping threshold between 5 C (9 F) and 10 C (18 F) of warming.

Other elements, however, are closer to the edge.

Alarm bells sounding in forests and oceans

In the Amazon, self-perpetuating feedback loops threaten the stability of the Earth’s largest rain forest, an ecosystem that influences global climate. As temperatures rise, drought and wildfire activity increase, killing trees and releasing more carbon into the atmosphere, which in turn makes the forest hotter and drier still.

By 2050, scientists warn, nearly half of the Amazon rain forest could face multiple stressors. That pressure may trigger a tipping point with mass tree die-offs. The once-damp rainforest canopy could shift to a dry savanna for at least several centuries.

Rising temperatures also threaten biodiversity underwater.

The second Global Tipping Points Report, released Oct. 12, 2025, by a team of 160 scientists including Lenton, suggests tropical reefs may have passed a tipping point that will wipe out all but isolated patches.

Coral loss on the Great Barrier Reef. Australian Institute of Marine Science.

Corals rely on algae called zooxanthellae to thrive. Under heat stress, the algae leave their coral homes, draining reefs of nutrition and color. These mass bleaching events can kill corals, stripping the ecosystem of vital biodiversity that millions of people rely on for food and tourism.

Low-latitude reefs have the highest risk of tipping, with the upper threshold at just 1.5 C, the report found. Above this amount of warming, there is a 99% chance that these coral reefs tip past their breaking point.

Similar alarms are ringing for ocean currents, where freshwater ice melt is slowing down a major marine highway that circulates heat, known as the Atlantic Meridional Overturning Circulation, or AMOC.

Two illustrations show how the AMOC looks today and its expected weaker state in the future
How the Atlantic Ocean circulation would change as it slows.
IPCC 6th Assessment Report

The AMOC carries warm water northward from the tropics. In the North Atlantic, as sea ice forms, the surface gets colder and saltier, and this dense water sinks. The sinking action drives the return flow of cold, salty water southward, completing the circulation’s loop. But melting land ice from Greenland threatens the density-driven motor of this ocean conveyor belt by dilution: Fresher water doesn’t sink as easily.

A weaker current could create a feedback loop, slowing the circulation further and leading to a shutdown within a century once it begins, according to one estimate. Like a domino, the climate changes that would accompany an AMOC collapse could worsen drought in the Amazon and accelerate ice loss in the Antarctic.

There is still room for hope

Not all scientists agree that an AMOC collapse is close. For the Amazon rain forest and the North Atlantic, some scientists say there is not yet enough evidence to declare that the forest is collapsing or that the currents are weakening.

In the Amazon, researchers have questioned whether modeled vegetation data that underpins tipping point concerns is accurate. In the North Atlantic, there are similar concerns about data showing a long-term trend.

A map of the Amazon shows large areas along its edges and rivers in particular losing tree cover
The Amazon forest has been losing tree cover to logging, farming, ranching, wildfires and a changing climate. Pink shows areas with greater than 75% tree canopy loss from 2001 to 2024. Blue is tree cover gain from 2000 to 2020.
Global Forest Watch, CC BY

Climate models that predict collapses are also less accurate when forecasting interactions between multiple tipping points. Some interactions can push systems out of balance, while others pull an ecosystem closer to equilibrium.

Other changes driven by rising global temperatures, like melting permafrost, likely don’t meet the criteria for tipping points because they aren’t self-sustaining. Permafrost could refreeze if temperatures drop again.

Risks are too high to ignore

Despite the uncertainty, tipping points are too risky to ignore. Rising temperatures put people and economies around the world at greater risk of dangerous conditions.

But there is still room for preventive actions – every fraction of a degree in warming that humans prevent reduces the risk of runaway climate conditions. For example, a full reversal of coral bleaching may no longer be possible, but reducing emissions and pollution can allow reefs that still support life to survive.

Tipping points highlight the stakes, but they also underscore the climate choices humanity can still make to stop the damage.

The Conversation

Alexandra A Phillips does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What are climate tipping points? They sound scary, especially for ice sheets and oceans, but there’s still room for optimism – https://theconversation.com/what-are-climate-tipping-points-they-sound-scary-especially-for-ice-sheets-and-oceans-but-theres-still-room-for-optimism-265183

The limits of free speech protections in American broadcasting

Source: The Conversation – USA – By Michael J. Socolow, Professor of Communication and Journalism, University of Maine

FCC Chairman Brendan Carr testifies in Washington on May 21, 2025. Brendan Smialowski/AFP via Getty Images

The chairman of the Federal Communications Commission is displeased with a broadcast network. He makes his displeasure clear in public speeches, interviews and congressional testimony.

The network, afraid of the regulatory agency’s power to license its owned-and-operated stations, responds quickly. It changes the content of its broadcasts. Network executives understand that the FCC’s criticism is supported by the White House, and that the chairman implicitly represents the president.

I’m not just referring to the recent controversy between FCC Chairman Brendan Carr, ABC and Jimmy Kimmel. The same chain of events has happened repeatedly in U.S. history.

President Franklin Delano Roosevelt’s FCC chairman, James Lawrence Fly, warned the networks about censoring news commentators.

Then there was John F. Kennedy’s FCC chairman, Newton Minow, who criticized the networks for not airing more news and public affairs programming to support American democracy during the Cold War.

And there was George W. Bush’s FCC chairman, Michael Powell. He decided that a fleeting “wardrobe malfunction” during the 2004 Super Bowl halftime show – when Janet Jackson’s breast was exposed – was sufficient to punish CBS with a fine.

In each of those cases, the FCC represented the views of the White House. And in each case, the regulatory agency was employed to pressure the networks into airing content more aligned with the administration’s ideology.

But what’s interesting in those four examples is that two of the FCC chairmen were Democrats – Fly and Minow – and two were Republicans – Powell and Carr.

As a media historian, I’m aware of the long-existing bipartisan enthusiasm for exploiting the fact that no First Amendment exists in American broadcasting. Pressuring broadcasters by leveraging FCC power occurs regardless of which party controls the White House. And when the agency is used in partisan fashion, the rival party will criticize such politicization of regulation as a threat to free speech.

This recurring cycle is made possible by the fact that broadcasting is licensed by the government. Since a Supreme Court decision in 1943, the supremacy of the FCC in broadcast regulation has been unquestioned.

Such strong governmental oversight separates broadcasting from any other medium of mass communication in the United States. And it’s the reason why there’s no “free speech” when it comes to Kimmel, or any other performer, on U.S. airwaves.

The FCC’s empowerment

Since its establishment in 1934, the FCC’s primary role in broadcasting has been to authorize local station licenses “in the public interest, convenience, or necessity.”

In 1938, the FCC began its first investigation into network practices and policies, which resulted in new regulations. One of the new rules stated that no network could own and operate more than one licensed station in any single market. This forced NBC, which owned two networks that operated stations in several markets, to divest itself of one of its networks. NBC sued.

In the first serious constitutional test of the FCC’s full authority, in 1943, the Supreme Court vindicated the FCC’s expansive power over all U.S. broadcasting in its 5–4 verdict in National Broadcasting Co. v. United States. The ruling has stood since.

That’s why there’s no First Amendment in broadcasting. The Supreme Court ruled that, due to spectrum scarcity – the idea that the airwaves are a limited public resource and therefore not every American can operate a broadcast station – the FCC’s power over broadcasting must be expansive.

The 1934 act, the 1943 Supreme Court decision read, “gave the Commission … expansive powers … and a comprehensive mandate to ‘encourage the larger and more effective use of radio in the public interest,’ if need be, by making ‘special regulations applicable to radio stations engaged in chain (network) broadcasting.’”

The ruling also explains why the FCC can be credited with having created the American Broadcasting Company. Yes, the same ABC that suspended Kimmel in the face of FCC threats was the network that emerged from NBC’s forced divestiture of its Blue Network as a result of the 1943 Supreme Court decision.

The empowerment of the FCC by NBC v. U.S. led to such content restrictions as the Fairness Doctrine, which intended to ensure balanced political broadcasting, instituted in 1949, and later, additional FCC rules against obscenity and indecency on the airwaves. The Supreme Court decision also encouraged FCC chairmen to flex their regulatory muscles in public more often.

A Black woman and white man sing onstage.
A federal appeals court ruled on Nov. 2, 2011, that CBS should not be fined US$550,000 for Janet Jackson’s infamous ‘wardrobe malfunction.’
AP Photo/David Phillip

For example, when CBS suspended news commentator Cecil Brown in 1943 for truthful but critical news commentary about the U.S. World War II effort, FCC Chairman Fly expressed his displeasure with the network’s decision.

“It is a little strange,” Fly told the press, “that all Americans are to enjoy free speech except radio commentators.”

When FCC Chairman Minow complained about television in the U.S. devolving into a “vast wasteland” in 1961, the networks responded both defensively and productively. They invested far more money into news and public affairs programming. That led to significantly more news reporting and documentary production throughout the 1960s and 1970s.

A ‘hands-off’ FCC

In the early 2000s, FCC Chairman Powell promised to “refashion the FCC into an outfit that is fast, decisive and, above all, hands-off.”

Yet his promise to be “hands-off” did not apply to content regulation. In 2004, his FCC concluded a contentious legal battle with Clear Channel Communications over comments by shock jock Howard Stern that were ruled “indecent.” The settlement resulted in a US$1.75 million payment by Clear Channel Communications – the largest fine ever collected by the FCC for speech on the airwaves.

Powell apparently enjoyed policing content, as evidenced by the $550,000 fine his FCC levied against CBS for the fleeting exposure of singer Janet Jackson’s breast during the Super Bowl. The fine was eventually overturned. But Powell did successfully lobby Congress to significantly hike the amount the FCC could fine broadcasters for indecency. The fine for a single incident increased from $32,000 to $325,000, and up to $3 million for an incident broadcast on multiple stations.

Powell’s regulatory activism, done mostly to curb the outrageous antics of radio shock jocks, resulted in some of the most significant and long-lasting restrictions on broadcast freedom in U.S. history. Thus, Carr’s 2025 threats toward ABC can be viewed in a historical context as an extension of established FCC activism.

Demonstrators hold signs in front of a building with columns.
Demonstrators hold signs on Sept. 18, 2025, outside Los Angeles’ El Capitan Entertainment Centre, where the late-night show ‘Jimmy Kimmel Live!’ is staged.
AP Photo/Damian Dovarganes

But Carr’s threat also appeared to contradict his previously espoused values.

As the author of the FCC section in Project 2025, a conservative blueprint for federal government policies, Carr wrote: “The FCC should promote freedom of speech … and pro-growth reforms that support a diversity of viewpoints.” In exploiting the FCC’s licensing power to threaten to penalize speech he found offensive, Carr failed to promote either freedom of speech or diversity of viewpoints.

If there’s one thing the Carr-Kimmel episode teaches us, it’s that more Americans should understand the structural constraints in the U.S. system of broadcasting. Media literacy has proved essential as curbs on free expression – both official and unofficial – have become more popular.

When the FCC threatens a broadcaster, it does so in Americans’ name.

If Americans applaud regulatory activism when it supports their partisan beliefs, consistency demands they accept the same regulatory activism in the hands of their political opponents. If Americans prefer their political opposition show restraint in the regulation of broadcasting, then they need to promote restraint when their preferred administration is in power.

The Conversation

Michael J. Socolow does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The limits of free speech protections in American broadcasting – https://theconversation.com/the-limits-of-free-speech-protections-in-american-broadcasting-266206

Industrial facilities owned by profitable companies release more of their toxic waste into the environment

Source: The Conversation – USA – By Mahelet G Fikru, Professor of Economics, Missouri University of Science and Technology

Toxic chemical pollution can come in many forms, including compounds that float on top of water. Brett Hondow/iStock / Getty Images Plus

How much pollution an industrial or resource-extraction facility emits isn’t determined solely by its location, its industry or the type of work it does. That’s what our team of environmental and financial economists found when we examined how corporate characteristics shape pollution emissions.

Pollution emissions rates also vary with specific characteristics of the company that owns the facility – such as how many patents it holds, how profitable it is and how many employees it has, according to an analysis we have conducted of corporate pollution data.

We found that industrial and mining facilities owned by profitable companies with relatively few patents and fewer employees tend to release higher proportions of their toxic waste into the environment – into the air, into water or onto soil.

By contrast, industrial sites owned by unprofitable companies with higher levels of innovation and more personnel tend to handle higher proportions of their toxic waste in more environmentally responsible ways, such as processing it into nontoxic forms, recycling it or burning it to generate energy.

Corporations publish their pollution data

A 1986 federal law requires companies that are in certain industries, employ more than 10 people and make, use or process significant amounts of certain toxic or dangerous chemicals to tell the government where those chemicals go after the company is done with them.

That data is collected by the U.S. Environmental Protection Agency in a database called the Toxics Release Inventory. That data includes information about the companies, their facilities and locations, and what they do with their waste chemicals.

The goal is not only to inform the public about which dangerous chemicals are being used in their communities, but also to encourage companies to use cleaner methods and handle their waste in ways that are more environmentally responsible.

Overall, U.S. companies reported releasing 3.3 billion pounds (1.5 billion kilograms) of toxic chemicals to the environment in 2023, a 21% decrease from 2014. The decline reflects increased waste management, adoption of pollution prevention and cleaner technologies, in addition to the fact that disclosure requirements motivate companies to reduce releases.

The 2023 releases came from over 21,600 industrial facilities in all 50 states and various U.S. territories, including Puerto Rico, the U.S. Virgin Islands, Guam and American Samoa. One-fifth of the facilities reporting toxic releases in 2023 were in Texas, Ohio and California.

What kinds of businesses release toxic pollution?

Metal mining, chemical manufacturing, primary metals, natural gas processing and electric utilities represent the top five polluting industrial sectors in the U.S. Combined, businesses in those sectors accounted for 78% of the toxic chemicals released in 2023.

Research has found that, often, higher levels of toxic chemical releases come from industrial facilities in less populated, economically disadvantaged, rural or minority communities.

But geography and population are not the whole story. Even within the same area, some facilities pollute a lot less than others. Our inquiry into the differences between those facilities has found that corporate characteristics matter a lot – such as operational size, innovative capacity and financial strength.

In our analysis, we combined the data companies reported to the EPA about toxic chemical releases with financial information on those companies and ZIP-code level geographic and demographic data. We found that corporate characteristics like profitability, employment size and number of patents are more strongly connected with toxic chemical releases than a community’s population density, minority-group percentage or household income.

We looked at what percentage of its toxic chemical waste a facility or mine released to the environment versus how much it treated, recycled or incinerated.

The average facility in our sample, which included 1,976 facilities owned by companies for which financial data is available, released about 39% of its toxic chemical waste to the environment, whether to air, water or land – with the remaining 61% of it managed through recycling, treatment or energy recovery either on-site or off-site.
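The release share described above is a simple ratio of pounds released to pounds generated. The sketch below illustrates the calculation with made-up records in a TRI-like shape; the field names and figures are hypothetical, not the EPA’s actual data schema.

```python
# Hypothetical TRI-style records: pounds of toxic waste by disposition.
# Field names and figures are illustrative only.
facilities = [
    {"name": "Plant A", "released": 120_000, "recycled": 150_000,
     "treated": 20_000, "energy_recovery": 10_000},
    {"name": "Mine B", "released": 495_000, "recycled": 0,
     "treated": 5_000, "energy_recovery": 0},
]

def release_share(rec):
    """Fraction of total toxic waste released to air, water or land,
    versus managed through recycling, treatment or energy recovery."""
    managed = rec["recycled"] + rec["treated"] + rec["energy_recovery"]
    total = rec["released"] + managed
    return rec["released"] / total

for rec in facilities:
    print(f'{rec["name"]}: {release_share(rec):.0%} released')
# → Plant A: 40% released
# → Mine B: 99% released
```

In this toy data, the manufacturing plant releases 40% of its waste while the mine releases 99%, mirroring the wide industry spread the analysis found.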

But facilities in different industries have different release rates. For example, about 99% of toxic chemicals from coal mines are released to the environment, compared with 81% for natural gas extraction, recovery and processing; 25% for power-generating electric utilities; and less than 3% for electrical equipment manufacturers.

The role of innovation

One corporate attribute we examined was innovation, which we measured by counting corporations’ patent families, which are groups of patent documents related to the same invention, even if they are filed in different countries. We found that companies with more patent families tend to release less of their toxic waste to the environment.

Specifically, facilities owned by the top 25% of companies, when rated by innovation, released an average of 32.5% of their toxic waste to the environment, which is 8 percentage points lower than the average of facilities owned by the remaining companies in the sample.
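The quartile comparison works the same way: rank companies by patent-family count, split off the top 25%, and compare average release shares between the two groups. A minimal sketch, using invented company tuples rather than the study’s data:

```python
# Invented (company, patent_families, release_share) tuples for illustration.
companies = [
    ("A", 220, 0.31), ("B", 180, 0.34), ("C", 12, 0.44),
    ("D", 5, 0.40), ("E", 90, 0.33), ("F", 2, 0.43),
    ("G", 1, 0.39), ("H", 60, 0.35),
]

# Rank by innovation (patent families), most innovative first.
ranked = sorted(companies, key=lambda c: c[1], reverse=True)
cut = len(ranked) // 4                      # top 25% of companies
top, rest = ranked[:cut], ranked[cut:]

mean = lambda grp: sum(c[2] for c in grp) / len(grp)
gap_pp = (mean(rest) - mean(top)) * 100     # gap in percentage points
print(f"top quartile: {mean(top):.1%}, rest: {mean(rest):.1%}, gap: {gap_pp:.1f} pp")
```

With these invented figures, the most innovative quartile averages a 32.5% release share versus 39% for the rest, a gap of 6.5 percentage points; the study reported a gap of roughly 8 points.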

We hypothesize that innovation may give firms a competitive advantage that also enables them to adopt cleaner production technologies or invest in more environmentally conscious methods of handling waste containing toxic chemicals, thereby preventing toxic chemicals from being directly released to the environment.

Size and profitability matter, too

We also looked at companies’ size – in terms of number of employees – and their profitability, to see how those connected with pollution rates at the facilities the company owns.

We found that larger companies, those with more than 19,000 employees, own facilities that release an average of 31% of their toxic chemical waste to the environment. By contrast, facilities owned by midsized companies, with 1,000 to 19,000 workers, release 45%, on average. Those owned by smaller companies, with fewer than 1,000 employees, release an average of 42% of their toxic chemical waste to the environment.

An important note is that those larger companies, which are more likely to have multiple locations, often own facilities that handle larger volumes of chemicals. So even if they release smaller proportions of their toxic waste to the environment, that may still add up to larger quantities.

We also found that industrial facilities owned by profitable firms have higher average rates of releasing toxic chemicals to the environment than those owned by unprofitable companies.

Facilities owned by companies with positive net income, according to their income statements obtained from PitchBook, a company that collects data on corporations, released an average of 40% of their toxic-chemical-containing wastes to the environment. Facilities owned by companies with negative net income released an average of 31% of their toxic chemical waste to the environment. To us, that indicates that financially strong companies are not necessarily more environmentally responsible. That may be evidence that profitable firms make money in part by contaminating the environment rather than paying for pollution prevention or cleanup.

Our analysis shows that geography and demographics alone do not fully account for industries’ and facilities’ differing levels of pollution. Corporate characteristics are also key factors in how toxic waste is handled and disposed of.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Industrial facilities owned by profitable companies release more of their toxic waste into the environment – https://theconversation.com/industrial-facilities-owned-by-profitable-companies-release-more-of-their-toxic-waste-into-the-environment-265227

Does the First Amendment protect professors being fired over what they say? It depends

Source: The Conversation – USA – By Neal H. Hutchens, University Research Professor of Education, University of Kentucky

Employees at public and private colleges do not have the same First Amendment rights. dane_mark/Royalty-free

American colleges and universities are increasingly firing or punishing professors and other employees for what they say, whether it’s on social media or in the classroom.

After the Sept. 10, 2025, killing of conservative activist Charlie Kirk, several universities, including Iowa State University, Clemson University, Ball State University and others, fired or suspended employees for making negative online comments about Kirk.

Some of these dismissed professors compared Kirk to a Nazi, described his views as hateful, or said there was no reason to be sorry about his death.

Some professors are now suing their employers for taking disciplinary action against them, claiming they are violating their First Amendment rights.

In one case, the University of South Dakota fired Phillip Michael Cook, a tenured art professor, after he posted on Facebook in September that Kirk was a “hate spreading Nazi.” Cook, who took down his post within a few hours and apologized for it, then sued the school, saying it was violating his First Amendment rights.

A federal judge stated in a Sept. 23 preliminary order that the First Amendment likely protected what Cook posted. The judge ordered the University of South Dakota to reinstate Cook, and the university announced on Oct. 4 that it would reverse Cook’s firing.

Cook’s lawsuit, as well as other lawsuits filed by dismissed professors, is testing how much legal authority colleges have over their employees’ speech – both when they are on the job and when they are not.

For decades, American colleges and universities have traditionally encouraged free speech and open debate as a core part of their academic mission.

As scholars who study college free speech and academic freedom, we recognize that these events raise an important question: When, if ever, can a college legally discipline an employee for what they say?

A university campus with various buildings and trees is seen from above.
An aerial view of University of South Dakota’s Vermillion campus, one of the places where a professor was recently fired for posting comments about Charlie Kirk, a decision that was later reversed.
anup khanal – CC BY-SA 4.0

Limits of public employees’ speech rights

The First Amendment limits the government’s power to censor people’s free speech. People in the United States can, for instance, join protests, criticize the government and say things that others find offensive.

But the First Amendment only applies to the government – which includes public colleges and universities – and not private institutions or companies, including private colleges and universities.

This means private colleges typically have wide authority to discipline employees for their speech.

In contrast, public colleges are considered part of the government. The First Amendment limits the legal authority they have over their employees’ speech. This is especially true when an employee is speaking as a private citizen – such as participating in a political rally outside of work hours, for example.

The Supreme Court ruled in a landmark 1968 case that public employees’ speech rights as private citizens can extend to criticizing their employer, like if they write a letter critical of their employer to a newspaper.

The Supreme Court also ruled in 2006 that the First Amendment does not protect public employees from being disciplined by their employers when they say or write something as part of their official job duties.

Even when a public college employee is speaking outside of their job duties as a private citizen, they might not be guaranteed First Amendment protection. To reach this legal threshold, what they say must be about something of importance to the public, or what courts call a “matter of public concern.”

Talking or writing about news, politics or social matters – such as Kirk’s murder – often meets the legal test for when speech is about a matter of public concern.

In contrast, courts have ruled that personal workplace complaints or gossip typically do not qualify for free speech protection.

And in some cases, even when a public employee speaks as a private citizen on a topic that a court considers a matter of public concern, their speech may still be unprotected.

A public employer can still convince a court that its reasons for prohibiting an employee’s speech – like preventing conflict among co-workers – are important enough to deny this employee First Amendment protection.

Lawsuits brought by the employees of public colleges and universities who have been fired for their comments about Kirk will likely be decided based on whether what they said or wrote amounts to a matter of public concern. Another important factor is whether a court is convinced that an employee’s speech about Kirk was serious enough to disrupt a college’s operations, thus justifying the employee’s firing.

Academic freedom and professors’ speech

There are also questions over whether professors at public universities, in particular, can cite other legal rights to protect their speech.

Academic freedom refers to a faculty member’s rights connected to their teaching and research expertise.

At both private and public colleges, professors’ work contracts – like the ones typically signed after receiving tenure – potentially provide legal protections for faculty speech connected to academic freedom, such as in the classroom.

However, the First Amendment does not apply to how a private college regulates its professors’ speech or academic freedom.

Professors at public colleges have at least the same First Amendment free speech rights as their fellow employees, like when speaking in a private citizen capacity.

Additionally, the First Amendment might protect a public college professor’s work-related speech when academic freedom concerns arise, like in their teaching and research.

In 2006, the Supreme Court left open the question of whether the First Amendment covers academic freedom, in a case where it found the First Amendment did not cover what public employees say when carrying out their official work.

Since then, the Supreme Court has not dealt with this complicated issue. And lower federal courts have reached conflicting decisions about First Amendment protection for public college professors’ speech in their teaching and research.

A large gray stone plaque shows the First Amendment in front of a green grassy field and buildings in the distance.
The First Amendment is on display in front of Independence Hall in Philadelphia.
StephanieCraig/iStock via Getty Images Plus

Future of free speech for university employees

Some colleges, especially public ones, are testing the legal limits of their authority over their employees’ speech.

These incidents demonstrate a culture of extreme political polarization in higher education.

Beyond legal questions, colleges are also grappling with how to define their commitments to free speech and academic freedom.

In particular, we believe campus leaders should consider the purpose of higher education. Even if legally permitted, restricting employees’ speech could run counter to colleges’ traditional role as places for the open exchange of ideas.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Does the First Amendment protect professors being fired over what they say? It depends – https://theconversation.com/does-the-first-amendment-protect-professors-being-fired-over-what-they-say-it-depends-266128

Growing cocktail of medicines in world’s waterways could be fuelling antibiotic resistance

Source: The Conversation – UK – By April Hayes, Microbiologist, Public Health and Sport Sciences, University of Exeter

tawanroong/Shutterstock

Scientists have long been worried about the buildup of antibiotics in the environment.

But in a recent study I led, we wanted to know what happens when bacteria are exposed not just to antibiotics, but to antibiotics and another type of medicine – together, at the low concentrations now typically found in nature.

Up to 90% of the medicines we take pass straight through our bodies, and most are not removed by wastewater treatment plants. These drug residues end up in rivers, lakes and other freshwater systems. In fact, traces of medicines have now been detected on every continent, at concentrations that vary from place to place.






Even tiny amounts of antibiotics can help bacteria evolve defences that make them harder to kill later. These bacteria become fitter, more adaptable, and able to survive doses strong enough to treat human infections. When that happens, the result is antibiotic resistance – a major global health threat. Already, over a million people die each year from infections that no longer respond to treatment, and that number is expected to rise.

What’s less well known is that many other medicines, including drugs for diabetes, depression and pain relief, can also encourage bacteria to become resistant to antibiotics.

Most previous studies, however, have focused on single drugs in isolation. For example, researchers might test how one antidepressant affects bacterial resistance to antibiotics, usually at doses much higher than those found in the environment.

But in the real world, medicines mix together in complex cocktails at low levels, and we still know little about how those combinations behave.

In our latest research, we tested whether a community of bacteria would become more resistant to antibiotics after being exposed to a mixture of drugs. These mixtures included ciprofloxacin – a common antibiotic frequently detected in waterways – combined with one of three other medicines: diclofenac (a widely used painkiller), metformin (a diabetes medication) and an oestrogen hormone used in hormone replacement therapy.

All three combinations changed how the bacteria behaved. We analysed how the bacterial community shifted: which species declined, which thrived and what resistance genes became more common.

We found that these mixtures made the bacterial community less able to grow overall, but also more likely to contain genes that conferred resistance to multiple antibiotics – not just ciprofloxacin, but others that were chemically different. The bacterial mix itself also changed: new species flourished in the presence of the drug combinations that hadn’t done so under antibiotic exposure alone.

I’d tested these same medicines individually in an earlier study, using the same bacteria and similar experimental conditions. On their own, none of the non-antibiotic drugs increased bacterial resistance. But when combined with an antibiotic, the story changed.

Taken together, these studies reveal something important: medicines that seem harmless on their own can amplify each other’s effects when mixed. That’s a big deal, because scientists often test pharmaceuticals one by one, and if a single drug shows no obvious effect, it’s typically ignored. Our findings suggest we shouldn’t be so quick to dismiss them.

In the environment, where countless drugs and chemicals coexist, these mixtures may be quietly shaping the evolution of antibiotic resistance. Understanding this hidden interaction is crucial if we want to protect both our health and our ecosystems in the years ahead.

The Conversation

April Hayes receives funding from the Natural Environment Research Council. Her PhD work was supported by AstraZeneca but all work was carried out without input from any funder.

ref. Growing cocktail of medicines in world’s waterways could be fuelling antibiotic resistance – https://theconversation.com/growing-cocktail-of-medicines-in-worlds-waterways-could-be-fuelling-antibiotic-resistance-266945

Could further education colleges get involved with university mergers? It might help meet Keir Starmer’s education goals

Source: The Conversation – UK – By Chris Millward, Professor of Practice in Education Policy, University of Birmingham

Rawpixel.com/Shutterstock

The merger of Kent and Greenwich universities is set to produce the UK’s first “super-university”. This structure will help the universities manage financial risks, while sustaining their distinctive identities. And the merger could also provide a model for the prime minister’s vision for post-compulsory education, outlined recently at the Labour party conference.

Keir Starmer wants two-thirds of young people to enter higher or technical education or apprenticeships. This embraces both further and higher education, and it demands coherence between them. Building on the model agreed between Kent and Greenwich, that could be achieved by colleges joining universities within a single group.

Further education colleges offer a high proportion of the nation’s technical qualifications and apprenticeships, which are central to the prime minister’s target. In towns without universities, colleges provide the route through post-compulsory education. This is often within group structures.

Some already have links with higher education. London South East Colleges, for instance, has seven campuses, which reach south from Greenwich. The group also has a partnership with the University of Greenwich.

Colleges have faced financial challenges equal to those of universities, but for longer. They might be wary of joining universities because it could dissipate their distinctive vocational mission. But the model agreed by Kent and Greenwich shows how that can be sustained.

Combining different traditions

While both are universities, the merger of Kent and Greenwich shows it is possible for institutions with very different identities to combine.

Group of students in a study space
Mergers mean institutions can share resources.
Rawpixel.com/Shutterstock

The University of Kent was established in 1965, in the wake of the meritocratic vision for higher education laid out in the 1963 Robbins Report.

This report, produced by the government’s Committee on Higher Education, stated that “university places should be available for all who are qualified by ability and attainment”. It argued that universities should provide a liberal education, rather than meeting employers’ immediate needs. This was embodied in the new maps of learning developed by universities like Kent and their greenfield residential campuses.

Greenwich originates from Woolwich Polytechnic. This was the site from which Labour education minister Tony Crosland announced the expansion of polytechnics in 1965. Crosland wanted to meet “an ever-increasing need and demand for vocational, professional and industrially based courses”. He also opposed the hierarchy of post-compulsory education, which diminished the status of these courses.

Polytechnics became universities from 1992. Their applied courses then made a pivotal contribution to Tony Blair’s 2001 target for 50% of young people to enter higher education. Blair argued that this would create a society “genuinely based on merit”.

By the time this threshold was passed in 2017, Conservative-led governments had established more universities. Citing Robbins, they expected this to drive higher education expansion through competition and student choice.

Reducing polarisation

Starmer’s speech to the Labour conference signals a different approach. “While you will never hear me denigrate the aspiration to go to university, I don’t think the way we currently measure success in education – that ambition to get to 50% … is right for our times,” he said.

Part of the motivation for this approach comes from a desire to counter Reform UK. People without higher education qualifications are more likely to vote for Reform.

Tackling the dissatisfaction of Reform supporters with highly educated elites requires Starmer to depart from previous assumptions about higher education and meritocracy – that a university education is superior to other pathways through lives and careers. That means placing a higher value on apprenticeships and technical education.

Mergers can improve the financial sustainability of universities and colleges by pooling their risks, operations and investment capacity. For example, a recruitment shortfall in one part of a group can be absorbed by others. Services can be provided at greater scale and lower cost within a group. If investment is needed to build provision in one location, that may be secured through the balance sheet of the whole group.

Investment of this kind is crucial for enhancing teaching quality, learner experiences and reputational standing. But group structures can also minimise course duplication and improve progression arrangements. Rather than competing with each other, colleges and universities within a group can agree course content and admissions requirements.

That enables learners to move seamlessly between different levels and types of education. It also builds connections between towns with colleges and the cities where most universities are based, broadening both study options and job prospects.

Group structures could advance separately in higher and further education. That would encourage competition and hierarchy, rather than coherence and progression. But bringing the two streams of post-compulsory education closer together could help achieve Starmer’s ambition to reduce polarisation. It might also give both universities and colleges some financial breathing room.

The Conversation

Chris Millward does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Could further education colleges get involved with university mergers? It might help meet Keir Starmer’s education goals – https://theconversation.com/could-further-education-colleges-get-involved-with-university-mergers-it-might-help-meet-keir-starmers-education-goals-266820

Almost 75,000 farmed salmon in Scotland escaped into the wild after Storm Amy – why this may cause lasting damage

Source: The Conversation – UK – By William Perry, Postdoctoral Research Associate at the School of Biosciences, Cardiff University

When Storm Amy battered the Scottish Highlands in early October, it tore through a salmon farm’s sea pens, releasing around 75,000 fish into open water in Loch Linnhe. The scale of the escape is alarming. It comes at a time when wild Atlantic salmon – already classified as “endangered” in Great Britain – are in decline.

For an animal so central to the UK’s ecology, culture and economy, the incident has serious implications.

At first glance, it might sound like a rare bit of good news: thousands of fish freed from captivity, perhaps even helping to bolster wild populations. But the reality is far less heartwarming.

These fish are not wild salmon in any meaningful sense. They are highly domesticated animals, selectively bred over decades for traits that make them profitable in captivity but poorly equipped for survival in the wild.

Aquaculture – the farming of fish and other aquatic species – has become one of the fastest-growing forms of food production in the world. The most valuable of all farmed marine species is the Atlantic salmon, which accounted for 18% of global marine aquaculture production value in 2022. The UK is the third largest producer, with almost all production centred around Scotland’s coast.

Modern salmon farming typically involves rearing young fish in freshwater hatcheries before transferring them to sea cages or pens. Each farm may hold six to ten large nets, each containing up to 200,000 fish.

Having salmon nets open to strong tidal currents is key to their design, allowing clean oxygenated water to enter and waste to be removed. However, this also means that they are vulnerable to adverse weather conditions.

To combat this, more sheltered coastal regions are used, like fjords or lochs, but this only offers so much protection. Storm Amy demonstrated that vulnerability all too clearly.

From wild fish to livestock

Atlantic salmon farming began in the 1970s. Since then, the species has undergone intensive selective breeding, much like sheep, dogs or chickens. Fish have been chosen for faster growth, delayed sexual maturity, disease resistance and other commercially desirable traits.

Around 90% of the salmon used in Scottish aquaculture originate from Norwegian stock. After 15 generations of selection, these farmed salmon are now among the most domesticated fish species in the world. They no longer resemble their wild relatives in important ways.






Farmed salmon differ genetically, physiologically and behaviourally. They are often larger, mature differently and feed on pellets instead of hunting live prey – changes which make them more vulnerable to predators.

Farmed salmon even have traits that make them less attractive to wild counterparts. Many would struggle to survive for long in the wild.

The problem isn’t just that farmed salmon die when they escape but what happens when some of them don’t. Studies show that in certain Scottish and Norwegian rivers, more than 10% of salmon caught are of farmed origin, with numbers highest near intensive farming areas.

Although these fish are maladapted to wild conditions, a few survive long enough to reach rivers and attempt to spawn.

When they breed with wild salmon, their offspring inherit a mix of traits – neither truly wild nor farmed – leaving them less suited to their natural environment. This process, known as “genetic introgression”, gradually damages the genetic integrity of wild populations.

An underwater portrait of a wild Atlantic salmon
A wild Atlantic salmon.
willjenkins/Shutterstock

Timing makes this latest incident particularly concerning. Wild salmon are now returning to Scottish rivers to spawn. The sudden influx of tens of thousands of farmed escapees increases the chance of interbreeding, and of long-term genetic damage.

The scale of this single escape is extraordinary. Scotland’s total returning wild salmon population is estimated at around 300,000 fish. The release of 75,000 farmed salmon represents roughly a quarter of that number.

Even if only 1% of the escapees survive and breed, that would mean around 750 fish entering rivers and potentially mixing with wild populations. A 2021 Marine Scotland report found that rivers near some fish farms are in “very poor condition”, with evidence of major genetic changes. Worryingly, other nearby rivers previously classed as being in “good condition” could now be at risk too.

Wild Atlantic salmon already face multiple human-driven threats like climate change, habitat loss, pollution and invasive species. Genetic pollution from farmed escapees is yet another blow. It’s one that undermines the species’ resilience to other forms of environmental change.

The release caused by Storm Amy may be one incident, but it’s symptomatic of a wider problem. As storms intensify with a changing climate, the likelihood of future escapes grows. Without tighter regulation, better containment measures and effective genetic monitoring of wild populations, these events could continue to erode what’s left of the UK’s wild salmon.




The Conversation

William Perry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Almost 75,000 farmed salmon in Scotland escaped into the wild after Storm Amy – why this may cause lasting damage – https://theconversation.com/almost-75-000-farmed-salmon-in-scotland-escaped-into-the-wild-after-storm-amy-why-this-may-cause-lasting-damage-267354

Young people around the world are leading protests against their governments

Source: The Conversation – UK – By Sanwal Hussain, PhD Candidate in the Department of Politics and Society, Aston University

The spate of public demonstrations against unemployment, corruption and low quality of life around the world is striking because of who is leading them. Young people have used social media platforms such as Facebook, TikTok, Instagram and YouTube to spread information and arrange their demonstrations.

While some of these protests have remained peaceful, others – such as the youth-led demonstrations in Indonesia and Nepal – have become violent. Ten people died in Indonesia’s protests in late August, when public anger over the cost of living and social inequality boiled over after police killed a delivery driver.

And 72 people were killed in Nepal, which saw demonstrations against a government social media ban in early September escalate into widespread protests over political instability, elite corruption and economic stagnation. The gen Z groups leading these protests said the movement had been hijacked by “opportunist” infiltrators.

Here are three more places where young people, apparently inspired by the youth-led movements in Indonesia and Nepal, have been demonstrating against their governments in recent weeks.

Peru

Hundreds of young people marched in the Peruvian capital, Lima, in late September against the government’s introduction of pension reforms which require young Peruvians to pay into private pension funds. These protesters were joined a week later by transport workers, who marched towards Congress in the centre of Lima.

In a clash on September 29 – during a protest organised by a youth collective called Generation Z – crowds threw stones and petrol bombs at the police, who responded with tear gas and rubber bullets, injuring at least 18 protesters.

These protests came a few months after Peru’s president, Dina Boluarte, issued a decree doubling her salary. The move, which came despite Boluarte’s historically low approval rating of only 2%, was declared “outrageous” by many observers on Peruvian social media.

Young people there are facing job insecurity and high unemployment, while many say the government is not doing enough to combat extortion by gangs, corruption and rising insecurity.

Reports of extortion in Peru have increased sixfold over the past five years. Figures released by Peru-based market research company Datum Internacional in 2024 suggest around 38% of Peruvians have reported knowing about cases of extortion in their area.






The recent pension reforms added fuel to existing anger. On October 9, after weeks of calls for Boluarte’s government to resign, lawmakers in Peru voted to remove her from office. New elections are due to be held in April 2026.

Morocco

An anonymous collective of young people called Gen Z 212 – a reference to Morocco’s international dialling code – has been at the centre of protests that have spread across ten Moroccan cities since September 27.

The group has organised and coordinated demonstrations through TikTok and Instagram, as well as the gaming and streaming platform Discord. Membership of Gen Z 212 on Discord grew from fewer than 1,000 members at its launch on September 18 to more than 180,000 by October 8.

This movement began in August after eight women died while receiving maternity care in a public hospital in Agadir, a city on Morocco’s southern coast. This sparked outrage over the state of public services in the country.

World Bank statistics from 2023 suggest there are only 7.8 doctors in Morocco for every 10,000 people – far below the 23 doctors for every 10,000 inhabitants recommended by the World Health Organization.

At the same time, Morocco is spending US$5 billion (£3.7 billion) to build the world’s biggest football stadium, as part of its preparations to co-host the 2030 World Cup with Portugal and Spain. Moroccans see their government as having got its priorities wrong. Crowds have chanted slogans such as “We want hospitals, not football stadiums”.

Police have responded to these protests by arresting hundreds of people, with clashes with protesters becoming violent in some parts of the country. Three people were killed on October 1 in what authorities described as “legitimate defence”, after protesters allegedly tried to storm a police station in the village of Lqliâa, near Agadir.

Morocco’s prime minister, Aziz Akhannouch, has invited Gen Z 212 to participate in dialogue with his government, and the group has shared a list of demands focused on basic needs such as education, healthcare, housing, transportation and jobs. However, the protest movement has continued.

Madagascar

At least 22 people were killed and more than 100 injured in anti-government protests across Madagascar in the first week of October. These protests were coordinated by an online movement known as Gen Z Mada – although labour unions, civil society organisations and several politicians became involved once the protests began.

The movement was sparked by the arrest of two Malagasy politicians, Clémence Raharinirina and Baba Faniry Rakotoarisoa, on September 19. Both politicians had publicly called for citizens to stage peaceful demonstrations in the capital, Antananarivo, against water and power supply problems on the island.

The demonstrations focused initially on shortages of basic necessities, an electricity crisis, unemployment and corruption. But they soon escalated into calls for the Malagasy president, Andry Rajoelina, to resign. Protesters have held him responsible for the problems facing their country.

Rajoelina attempted to satisfy the protesters by dissolving his government and calling for “national dialogue” with Gen Z Mada. In a speech on state broadcaster Televiziona Malagasy, he said: “We acknowledge and apologise if members of the government have not carried out the tasks assigned to them.”

However, this move did not stop the demonstrations. Rajoelina subsequently appointed Ruphin Fortunat Zafisambo, an army general, as his prime minister and imposed a strict curfew in Antananarivo, with a heavy presence of security forces, in a bid to end the protests.

The protesters have vowed to continue their struggle and, at the time of writing, some are still waving flags with the words “Rajoelina out”. Rajoelina has now fled the country after factions of the army rallied behind the protesters.

In leading the fight against inequality, young people in developing countries are following a well-trodden path. Youth-led protests in Sri Lanka and Bangladesh have both toppled governments in recent years. These movements seem to have encouraged others across the globe to empower themselves and demand more from entrenched elites.

The Conversation

Sanwal Hussain does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Young people around the world are leading protests against their governments – https://theconversation.com/young-people-around-the-world-are-leading-protests-against-their-governments-266950

The medieval folklore of Britain’s endangered wildlife ‘omens’ – from hedgehogs to nightjars

Source: The Conversation – UK – By Jessica Lloyd May, PhD Candidate in History, University of Nottingham

A hedgehog illustration from a medieval bestiary (1270) by an unknown illuminator. Courtesy of Getty’s Open Content Program, CC BY-SA

As the seasons turn and the nights draw in, the countryside of the British Isles seems alive with omens: an owl’s screech, or a bat above the hedgerows.

For centuries, such creatures were cast as messengers of fate, straddling the boundary between the natural and the supernatural. Yet today, the omens these animals bring are no longer warnings of ghosts or witchcraft, but of something far more tangible: their own survival.

The very species that once haunted our imagination and foretold ill-fated futures are now haunted by habitat loss, climate change and pressure from urbanisation. In the stories of these creatures, we glimpse both our fear of the wild past and our responsibility for the future. Now is the time to revisit some of Britain’s iconic “omen animals”, tracing their folklore and asking what their fate tells us about our shared environment.

Hedgehogs

Hedgehogs, though voted Britain’s favourite mammal, were previously deemed to be milk thieves.

A medieval illustration of a hedgehog
A hedgehog in the medieval Recueil des Croniques d’Engleterre (1471-1483).
Quirk Books

A widespread folkloric belief of the early modern period, likely exacerbated by the European witch hunts, was that witches would transform into hedgehogs to steal milk from cows’ udders. This belief was so prevalent that a campaign to hunt and eradicate hedgehogs was backed by the English parliament, with a bounty of tuppence placed on the head of each hog.

Though their public image has recovered in recent years, hedgehogs are now classed as “vulnerable” to extinction in the UK. Their key threats are linked with habitat loss and fragmentation. Their natural prey, insects and invertebrates, are also in decline due to increased use of pesticides.

Declines in hedgehogs have been particularly steep in rural habitats, with populations reduced by 30–75% since 2000. Conservation priorities focus on restoring lost habitats for hedgehogs and understanding how best to protect them.

Adders

As the only venomous snake in the UK, it is unsurprising that the adder would attract some negative publicity over the years. The species is increasingly a conservation concern and now locally extinct across much of England due to habitat loss.

An “adder’s fork” was a spell ingredient listed by the witches in Shakespeare’s Macbeth (1606). Shakespeare invoked adders too in A Midsummer Night’s Dream (1600), as a way for one character to accuse another of treachery and deceit.

A man fighting a snake
Snakes frequently appear in medieval manuscripts.
British Library Harley MS

Even more sinister, finding an adder on your doorstep was considered a death omen. Today it is unlikely that an adder will cross your threshold, as the species is mostly confined to small, isolated populations. Even these could be lost by 2032.

Conservation efforts are focusing on the creation, restoration and management of suitable grassland, but are not currently widely implemented. Increasing public awareness and appreciation of the species is a key goal for adder preservation.

Wildcats

Once widespread across Britain, wildcats are now considered our most endangered animal species. They have a long reputation in Scottish folklore for being untameable, serving as the namesake of the Pictish province of Cataibh when it was formed in 800BC. They were often adopted as symbolic emblems or mascots in early clan lore due to their fierce fighting spirit. Their ominous cry is thought to have inspired ghost stories across the ages.

two cats hunting mice in a medieval illustration
Cats hunting mice in a 13th-century manuscript.
British Library, Royal 12 C XIX

Deforestation and persecution, especially by Victorian gamekeepers, eradicated wildcats from England, Wales and much of Scotland. In 2019, experts concluded that breeding with feral domestic cats has compromised their genetic integrity and that the remnant populations are too small, isolated and genetically degraded to have a long-term future.

But some hope does remain for the wildcat. Saving Wildcats, a European partnership project dedicated to wildcat conservation, is leading efforts to breed the species in captivity. As of 2023, a number of captive-bred wildcats have been released into Scotland’s Cairngorms National Park.

Mountain hares

The mountain hare is the UK’s only native member of the hare and rabbit family. Once widespread across Britain, mountain hares are now confined to upland regions of Scotland and the Peak District.

An illustration of a hare hunt
Dogs shown hunting a hare in an illustration from a medieval Bestiary manuscript.
The Medieval Bestiary

Hares have a long history of superstitious and folkloric attachments. They were seen as shape-shifters, or familiars of witches, bringing doom and misfortune to anyone unfortunate enough to have their path crossed. Their shape-shifting abilities were referenced in The Mabinogion, a collection of Welsh stories compiled in the 12th and 13th centuries, and in earlier Celtic folklore. Tales of regional hare-witches were recorded across England.

While fear of wronging a witch historically offered hares some protection, they have since declined in numbers and range due to competition with brown hares, hunting pressure and land-use change. Recent surveys suggest a 70% crash in the Peak District population over just seven years. At current rates of decline, the mountain hare could become extinct in the region within five years.

Nightjars

Summer visitors to the UK, nightjars were once thought to drink milk from goats and in doing so poison them and cause their udders to wither away. These birds were also said to snatch up lost souls wandering between worlds with their unearthly call.

illustration of a bird drinking from a goat's udder
A nightjar drinks from a goat’s udder in an illustration from a medieval Bestiary manuscript.
The Medieval Bestiary

Nightjars suffered a catastrophic population decline of more than 50%, and a range contraction of around 51%, during the latter half of the 20th century. However, surveys conducted in 1992 and 2004 recorded welcome population increases of 50% and 36% respectively. Nightjars were recorded making use of newly clear-felled and young conifer plantations, and benefited from long-term habitat management projects in their southern strongholds. Although these recent recoveries offer hope, nightjars have reclaimed only a fraction of their former range – around 18%.

These species, and far more besides, have been instrumental in the stories people have woven across time. So the next time you hear the screech of an owl outside your bedroom window or glimpse the wings of a bat flapping over your garden, pause to think about the omens of our wild country – and how their stories might yet continue.




The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. The medieval folklore of Britain’s endangered wildlife ‘omens’ – from hedgehogs to nightjars – https://theconversation.com/the-medieval-folklore-of-britains-endangered-wildlife-omens-from-hedgehogs-to-nightjars-267085

In defense of ‘surveillance pricing’: Why personalized prices could be an unexpected force for equity

Source: The Conversation – USA (2) – By Aradhna Krishna, Dwight F. Benton Professor of Marketing, University of Michigan

Surveillance pricing has dominated headlines recently. Delta Air Lines’ announcement that it will use artificial intelligence to set individualized ticket prices has led to widespread concerns about companies using personal data to charge different prices for identical products. As The New York Times reported, this practice involves companies tracking everything from your hotel bookings to your browsing history to determine what you’re willing to pay.

The reaction has been swift. Democratic lawmakers have responded with outrage, with Texas Rep. Greg Casar introducing legislation to ban the practice. Meanwhile, President Donald Trump’s new chair of the Federal Trade Commission has shut down public comment on the issue, signaling that the regulatory pendulum may swing away from oversight entirely.

What’s missing in this political back-and-forth is a deeper look at the economics. As a business school professor who researches pricing strategy, I think the debate misses important nuances. Opponents of surveillance pricing overlook some potential benefits that could make markets both more efficient and, counterintuitively, more equitable.

What surveillance pricing actually is

Surveillance pricing differs from traditional dynamic pricing, where prices rise for everyone at times of peak demand. Instead, it uses personal data – browsing history, location, purchase patterns, even device type – to charge a unique price based on what algorithms predict you’re willing to pay.

The goal is to discover each customer’s “reservation price” – the most they’ll pay before walking away. Until recently, this was extremely difficult to do, but modern data collection has made it increasingly feasible.

An FTC investigation found that companies track highly personal consumer behaviors to set individualized prices. For example, a new parent searching for “baby thermometers” might find pricier products on the first page of their results than a nonparent would. It’s not surprising that many people think this is unfair.

The unintended progressive tax

But consider this: Surveillance pricing also means that wealthy customers pay more for identical goods, while lower-income customers pay less. That means it could achieve redistribution goals typically pursued through government policy. Pharmaceutical companies already do this globally, charging wealthier countries more for identical drugs to make medications accessible in poorer nations. Surveillance pricing could function as a private-sector progressive tax system.

Economists call it “price discrimination,” but it often helps poorer consumers access goods they might otherwise be unable to afford. And unlike government programs, this type of redistribution requires no taxpayer funding. When Amazon’s algorithm charges me more than a college student for the same laptop, it’s effectively running a means-tested subsidy program – funded by consumers.

PBS NewsHour featured a segment on the Delta Air Lines news.

The two-tier economy problem

In my view, the most legitimate concern about surveillance pricing isn’t that it exists, but how it’s implemented. Online retailers can seamlessly adjust prices in real time, while physical stores remain largely stuck with uniform pricing. Imagine the customer fury if Target’s checkout prices varied by person based on their smartphone data: There could be chaos in the stores. This digital-physical divide could also create unfair advantages for tech-savvy companies while leaving traditional retailers behind. That would raise fairness considerations for consumers as well as retailers.

This is related to another force that could limit how far surveillance pricing can go: arbitrage, or the practice of buying something where it is cheaper and selling it where it is more expensive.

If a system consistently charges wealthy customers $500 for items that cost poor customers $200, it creates opportunities for entrepreneurial intermediaries to exploit these price gaps. Personal shopping services, buying cooperatives or even friends and family networks could arbitrage these differences, providing wealthy customers access to the lower prices while splitting the savings. This means surveillance pricing can’t discriminate too aggressively – market forces will erode excessive price gaps.
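To make the arithmetic concrete, here is a minimal sketch in Python, using the hypothetical prices from the example above, of how an intermediary could split the savings between the two price tiers:

```python
# Hypothetical prices from the example: the algorithm quotes a
# wealthy customer $500 and a lower-income customer $200 for the
# same item.
rich_price = 500
poor_price = 200

# An intermediary buys at the low price and resells to the wealthy
# customer, splitting the $300 gap evenly between the two parties.
gap = rich_price - poor_price          # the $300 price difference
resale_price = poor_price + gap / 2    # intermediary's asking price

intermediary_profit = resale_price - poor_price
wealthy_savings = rich_price - resale_price

print(intermediary_profit, wealthy_savings)  # both gain $150
```

Because both sides come out $150 ahead, any sufficiently large personalized price gap invites exactly this kind of intermediation, which is why market forces tend to erode it.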

That’s why I believe the solution isn’t to ban surveillance pricing entirely, but to monitor how it is put into practice.

The regulatory sweet spot

The current political moment offers a strange opportunity. With Republicans focused on AI innovation and Democrats fixated on bans, there’s space for a more sophisticated position that embraces market-based redistribution while demanding strong consumer protections.

In my view, smart regulation would require companies to disclose when personal data influences pricing, and would prohibit discrimination based on protected characteristics such as race, color or religion – a list that would need to be drawn up extremely carefully. This would preserve the efficiency benefits while preventing abuse.

Surveillance pricing based on desperation or need also raises unique ethical questions. Charging a wealthier customer more for a taxi ride is one thing; charging someone extra solely because their battery is low and they risk being stranded is another.

As I see it, the distinction between ability to pay and urgency of need must become the cornerstone of regulation. While distinguishing the two may seem challenging, it’s far from impossible. It would help if customers were empowered to report exploitative practices, using mechanisms similar to existing price-gouging protections.

A solid regulatory framework must also clarify the difference between dynamic pricing and surveillance-based exploitation. Dynamic pricing has long been standard practice: Airlines charge all last-minute travelers higher fares, regardless of their circumstances. But consider two passengers buying tickets on the same day – one rushing to a funeral, another planning a spontaneous vacation. Right now, airlines can use technology to identify and exploit the funeral attendee’s desperate circumstances.

The policy challenge is precise: Can we design regulations that prevent airlines from exploiting the bereaved while still allowing retailers to offer discounts on laptops to lower-income families? The answer will determine whether surveillance pricing becomes a tool for equity or exploitation.

The Conversation

Aradhna Krishna does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. In defense of ‘surveillance pricing’: Why personalized prices could be an unexpected force for equity – https://theconversation.com/in-defense-of-surveillance-pricing-why-personalized-prices-could-be-an-unexpected-force-for-equity-266293