Why banning pro-Palestine marches is a risky response to antisemitic violence

Source: The Conversation – UK – By Joel Busher, Professor of Political Sociology, Coventry University


Following recent antisemitic violence and aggression, calls from some quarters for a temporary ban on pro-Palestine marches have gained traction. Conservative party leader Kemi Badenoch has firmly supported a ban, while
Keir Starmer, the prime minister, has suggested that some protests may need to be stopped. The government’s independent reviewer of terrorism legislation has called for a moratorium on such marches.

Those who have made such calls do so on the grounds that pro-Palestine marches, whatever their intent, are contributing to a “tone of Jew hatred within our country”, in the words of Chief Rabbi Sir Ephraim Mirvis. Starmer has also expressed concern about the “cumulative” effect of the marches on Jewish communities.

This is an understandable position in some ways. There can be little denying that some participants in pro-Palestine events have articulated antisemitic positions. And in a period where more clearly needs to be done to address antisemitic violence and aggression, a ban appears to provide a way for authorities to send a clear message that there is no place for antisemitism in Britain today.

Yet there are also problems with such proposals. As policymakers consider their options, it is important that these problems are taken seriously.

Evidence on the relationship between protest activity and targeted violence outside of the protest arena is limited. The available evidence points to a complex and context-dependent relationship.

Some studies have found that when protests increase, extremism and extremist violence can also rise, especially when society is more divided. Such a pattern has been observed, for example, in the US, where the bipartisan thinktank the Center for Strategic and International Studies identified heightened protest activity and rising domestic terrorism during the early 2020s.

However, many studies of nonviolent protest show that it reduces political violence, by providing nonviolent means of pursuing social and political objectives.

Where heightened protest activity coincides with increased extremist violence, it is often unclear whether protests or marches themselves are the cause. Today, people participating in social movements are likely to access and share information through a range of (often unregulated) spaces both offline and online. It is difficult to assess how important protests themselves might be in influencing people to go on to engage in targeted violence.

This is not simply academic nitpicking. It means that it is possible that a ban on marches would have little to no effect on the use of targeted violence against Jewish communities.

In fact, there is a distinct possibility that banning pro-Palestine marches, even if only temporarily, might actually increase violence.

Studies show that violence is less likely to escalate when moderate groups within protest movements are present and have influence. This has been observed, for example, in research into the escalation or inhibition of violence during waves of far-right protest.

Expanded state repression – such as bans on certain forms of previously legal protest – can weaken the position of moderate factions. When this happens, calls for restraint and advocacy of non- or less-violent strategies can lose credibility within the movement, weakening the “internal brakes” on violence.

Practicalities of enforcement

A moratorium on pro-Palestine marches would also raise many practical questions. For one, it would fuel calls for the police to ban other contentious demonstrations that risk stirring hostility towards particular groups.

What particular types of action would be banned? Marches? Demonstrations? Would size be a factor? Would it cover a protest against the ban on the protest? What about other forms of action such as sit-ins, information stands or coordinated online action? And what sanctions would be imposed on those who did not comply?

Attempting to enforce such bans could become a significant drain on already stretched public resources, not least because activists would probably seek to increase pressure on authorities because of those costs. This is one of the most obvious lessons to draw from responses to the government’s attempts to ban the group Palestine Action.




Read more: Labour wants to restrict repeat protests – but that’s what makes campaigns successful


In addition, police have recently been authorised to consider the “cumulative impact” of protests on local areas. They have had to grapple with how and when to use this new power alongside their existing ones.

Before introducing a ban, it’s important to consider the example it would set and how it could shape future decisions about the right to protest. The UK would be less able to criticise authoritarian countries and illiberal democracies that misuse counter-extremism and counter-terrorism powers to limit people’s freedom.

None of this is to deny the urgency of confronting antisemitic violence and aggression in the UK. This requires sustained political commitment, effective policing and community protection. But restricting the right to protest is a blunt and risky instrument.

The available evidence suggests it may do little to reduce harm and could, in some circumstances, make matters worse. Politicians should therefore be cautious before treating bans on marches as a solution to complex and deeply rooted problems.

The Conversation

Joel Busher has received funding from the Centre for Research and Evidence on Security Threats (CREST) for his work on the escalation and inhibition of political violence.

Tufyal Choudhury does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why banning pro-Palestine marches is a risky response to antisemitic violence – https://theconversation.com/why-banning-pro-palestine-marches-is-a-risky-response-to-antisemitic-violence-282168

Additional learning needs present a key challenge for the incoming Senedd

Source: The Conversation – UK – By Emily Roberts-Tyler, Lecturer in Education, Bangor University


The upcoming Senedd elections may shift the balance of power in Wales. Any new government must immediately grapple with the significant ongoing challenges of embedding educational reforms across the additional learning needs system.

Recent policy proposals to change the system of support for children with special educational needs in England have brought a heightened focus on how education systems might best support all learners. In Wales, special educational needs and disabilities are referred to as additional learning needs (ALN).

Wales reached a major milestone in August 2025 when the ALN code came fully into effect, four years after its publication.

Despite the devolution of education and increasing divergence in education policy between Wales and England, the ALN code in Wales shares some ambitions with England’s recent policy plans.

These reforms in Wales sought to increase the rights and autonomy of children and young people. They provide statutory individual development plans for those needing anything additional to universal learning provision. They also extend support for learners aged up to 25. The intention is to improve consistency and strengthen multi-agency collaboration across education, health and social care.

The progress of reform

The additional learning needs reforms in Wales reflect a commendable shift towards rights-based, person-centred planning and autonomy for children and young people and their families.

This is also a key tenet of the Curriculum for Wales. This has been implemented since 2022 in primary schools, and gradually over subsequent years in secondary schools. The curriculum framework has a focus on learner voice and providing a broad, purpose-led and flexible curriculum. It is designed to ensure that even those from disadvantaged backgrounds or with complex needs are supported to access a meaningful education.

However, a number of challenges remain with embedding the ALN reforms across Wales.

A key issue relates to the identification of learners with ALN. Under the new system, there has been a 53% decrease in the number of learners being identified as having ALN. This is despite a reported increase in children presenting with more complex needs, indicating that learning needs may in fact be increasing. Data also suggests that it is those with low to moderate needs who are much less likely to be formally identified.

It has been suggested that this reduction could be due to children who might previously have been identified with ALN being catered for through an improved universal offering.

Some students may be missing out on support.
New Africa/Shutterstock

However, teachers have reported that the proportion of learners in their classes with ALN has increased over the past five years. A majority – 65% – of teachers in Wales reported that there were learners in their classes who still needed additional support, but were no longer identified as having ALN following changes in identification criteria.

Lacking resources

This has hugely increased workloads as teachers attempt to provide adequate learner support. At the same time, the number of in-house specialist staff to advise and support delivery has dramatically reduced. Without the resources to support more learners with additional needs, many teachers have reported that children are often not receiving the education they are entitled to.

There have been significant strides towards developing inclusive schools across Wales. Even in the best cases, though, there is a long way to go. In reality, the reduction in ALN identification points to problems with identification criteria and resources, and to questions over whether current policy encompasses all children in need, rather than to a sudden shift to high-quality inclusive education.

Schools report an increase in local authorities refusing requests for assessments or access to support for struggling learners. They have suggested the bar is being raised for access to support, without clarity or transparency. There’s also a clear indication from specialist staff in Wales that they have insufficient time to fulfil their ALN duties.

This suggests that processes and resources for identifying learners with ALN are playing a significant part in the reduced identification. Many learners could be slipping through the net, rather than experiencing effective inclusive provision.

This tension between policy intent and practice is familiar territory when it comes to inclusion. There are ongoing concerns that legislative reform has outpaced operational readiness and available resources, leading to a crisis point.

This crisis is exacerbated for Welsh-medium learners. The policy intention is for a fully bilingual system. But finding Welsh-medium specialists and honouring language preference is proving challenging. This has led to families struggling to find support in their preferred language. Such battles are at odds with both Welsh language policy and the principles of person-centred planning and autonomy that are central to the reforms.

Whatever the outcome of the Senedd elections, educators and families across Wales will be hoping for an increased sense of momentum and urgency. They’ll also be looking for a commitment to sustained and appropriate levels of funding to ensure learners in Wales can be supported to access their education.

The Conversation

Emily Roberts-Tyler does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Additional learning needs present a key challenge for the incoming Senedd – https://theconversation.com/additional-learning-needs-present-a-key-challenge-for-the-incoming-senedd-281048

‘The farther away, the better’ is the problematic logic behind U.S. third-country deportations

Source: The Conversation – Canada – By Guillermo Candiz, Assistant Professor, Human Plurality, Université de l’Ontario français

Since January 2025, the Donald Trump administration in the United States has signed bilateral agreements with 27 governments to deport migrants to countries where they have no ties.

This process is known as third-country deportation, and it’s created a system that operates as migration deterrence. These agreements also transfer responsibility for managing migrant lives to the Global South.

In February 2025, the Trump administration sent two chartered planes carrying 200 people from countries like Iran, Afghanistan, Russia, Azerbaijan, Uzbekistan and China to San José, Costa Rica.

During our ongoing research that involved fieldwork in Costa Rica in 2025, we interviewed two families who were on board one of the planes. They had been shackled during the flight.

Alerted that the first deportation flight from the U.S. was about to arrive, Costa Rican journalists and human rights activists gathered at the airport. Costa Rican agents closed the planes’ window blinds before removing the migrants’ shackles.

Migrants given no information

As the migrants we interviewed told us — and as documented by Human Rights Watch — none of the people sent to Costa Rica spoke Spanish. They weren’t informed where the flights were headed or where they’d be taken upon arrival. No translators were present when the first plane landed, and only a few were available upon the arrival of the second flight.

U.S. authorities expelled the migrants without giving them the opportunity to apply for asylum. Although some third-country deportations had occurred prior to February 2025, they had typically been carried out on a much smaller scale.

The deportation of non‑citizens to Costa Rica set the stage for 27 bilateral agreements later signed by the Trump administration with governments across Latin America, Africa and Central Asia. What initially appeared to be an exceptional measure had, within a year, become a preferred approach to migration management.

‘Border spectacle’

In 2025, the U.S. deported approximately 675,000 people. Among them, around 15,000, or two per cent of the total, were sent to third countries.

Given the relatively low numbers involved, the objective is not the removal of large populations of unwanted migrants. Instead, these deportations function as what American migration scholar Nicholas De Genova terms “the border spectacle of migrant victimization.”

Such spectacles are designed to generate fear. They encourage some asylum-seekers already in the country to leave and aim to deter others from attempting to cross into the U.S. altogether.

The principal rationale for this border regime was made explicit by U.S. Secretary of State Marco Rubio in July 2025: “The further away, the better, so they can’t come back across the border.”

This statement wasn’t just rhetoric — it reflected U.S. policy.

Creating uncertainty about where migrants might be sent — whether Eswatini, South Sudan, Rwanda, Costa Rica or Cameroon — was central to the strategy. It served both to deter would-be migrants in their home countries and to pressure those already in the U.S. to pursue what the administration called “self-deportations.”

Why do Global South states sign on?

To push countries in the Global South to accept deportation agreements for non-nationals, the U.S. relies on four forms of pressure: direct payments, visa restrictions, tariff threats and conditions on foreign aid.

Ghana, for example, secured the lifting of consular restrictions in August 2025 after agreeing to co-operate on deportations.

When Costa Rica signed its third-country deportation agreement in March 2026, President Rodrigo Chaves stated that he was “helping the economically powerful brother of the North” in order to avoid American tariffs on Costa Rican free-trade zones.

Eswatini agreed to receive deportees in exchange for financial transfers and improvements in its bilateral relationship with the U.S. Rwanda, for its part, capitalized symbolically on the model by incorporating it into its regional diplomatic strategy.

Taking on costs

Yet these negotiations are deeply one-sided. While the countries in question may achieve short‑term gains, they also take on additional, uncompensated costs.

As we learned through our interviews with migrants in Costa Rica, for example, 85 of the approximately 200 migrants received in February 2025 remained in the Central American country. That’s because they could not return to their countries of origin for fear of persecution, imprisonment or forced military recruitment.

Some later resumed their journey to the U.S. and successfully claimed asylum there. Others required access to medical care, work permits and other forms of assistance.

Beyond granting temporary residency permits with limited rights, Costa Rica lacks the resources to ensure the safety and security of these migrants and to adequately address their needs.

The Trump administration’s reduction in international humanitarian aid has further undermined Costa Rica’s refugee protection regime.

For migrants, this state of prolonged waiting marked by legal uncertainty has resulted in psychological distress. Several interviewees reported panic attacks, depression and insomnia.




Read more: How international aid cuts are eroding refugee protections in the Global South


Beyond the United States

This emerging border regime is not uniquely American. In March 2026, the European Parliament endorsed the so-called return hubs mechanism, which opens the door to offshoring asylum processing.

Italy, for example, has had migrant detention hubs in Albania for more than a year.

Canada has reconfirmed its own Safe Third Country Agreement with the U.S.




Read more: Tragedies, not accidents: Tougher Canadian and U.S. border policies will cost more lives


Canada also passed Bill C-2 and Bill C-12 in 2025, legislation that substantially restricts access to asylum. What’s more, it reduced its refugee resettlement targets by 30 per cent for the 2026–28 period.

This doesn’t constitute a replication of the U.S. model, but it does reflect a convergence. Different mechanisms are increasingly aligned in the same direction: the progressive erosion of the right to asylum.

Mobilization

It’s therefore important to ask whether some refugee claimants deported to the U.S. may subsequently face third‑country deportations to other states.

In March 2026, more than 30 human rights organizations issued a joint statement calling for an end to chain deportations to Costa Rica. It explicitly accused the Costa Rican state of being complicit in — and directly responsible for — American violations of its asylum law and its international obligations under the United Nations’ 1951 Refugee Convention and 1967 Protocol.

In the Democratic Republic of the Congo, protests recently erupted against the anticipated deportation of 1,100 Afghans from the U.S. to the country.

UN human rights experts have also expressed alarm about the risk of torture, enforced disappearance or arbitrary deprivation of life in some third countries.

Across the globe, migration scholars, human rights organizations and allies must do more than voice concern — they need to co-ordinate, organize and actively resist this emerging border regime before it becomes entrenched.

The Conversation

Guillermo Candiz receives funding from The Social Sciences and Humanities Research Council (SSHRC)

Tanya Basok receives funding from Social Science and Humanities Research Council

ref. ‘The farther away, the better’ is the problematic logic behind U.S. third-country deportations – https://theconversation.com/the-farther-away-the-better-is-the-problematic-logic-behind-u-s-third-country-deportations-281687

Alaska’s near-record landslide tsunami sent a wave 1,580 feet up the fjord walls – and left clues for building a warning system

Source: The Conversation – USA (2) – By Michael E. West, Director of the Alaska Earthquake Center and State Seismologist, University of Alaska Fairbanks

The Tracy Arm landslide sent a tsunami wave far up the opposite side of the fjord near South Sawyer Glacier. John Lyons/U.S. Geological Survey

On the evening of Aug. 9, 2025, passengers on the Hanse Explorer finished taking selfies and videos of the South Sawyer Glacier, and the ship headed back down the fjord. Twelve hours later, the side of an adjacent mountain unexpectedly collapsed into the fjord, setting off the second-highest tsunami in recorded history.

We conduct research on earthquakes and tsunamis at the Alaska Earthquake Center, and one of us serves as Alaska state seismologist. In a new study with colleagues, we detail how that landslide sent water and debris 1,580 feet (481 meters) up the other side of the fjord – higher than the top floor of the Taipei 101 skyscraper – and then continued down Tracy Arm. The force of the water stripped the fjord’s walls down to bare rock.

The Tracy Arm landslide generated a tsunami that sent a wave so high up the opposite fjord wall that it would have overtopped some of the world’s tallest buildings. Here’s how it compares to other large tsunamis around the world.
Steve Hicks/University College London

It was just after 5 o’clock in the morning on a dreary day, and fortunately, no ships were nearby. In the months after, some cruise lines started avoiding Tracy Arm. However, the conditions that led to this event are not at all unique to this fjord.

Landslides are common in the coastal mountains of Alaska where rapid uplift, caused by tectonic forces and long-term ice loss, converges with the erosive forces of precipitation and moving glaciers. But a curious pattern has emerged in recent years: Multiple major landslides have occurred precisely at the terminus of a retreating glacier.

Though the mechanics are still poorly understood, these mountains appear to become unstable when the ice disappears. When the landslide hits the water, the momentum of millions of tons of rock is transferred into tsunami waves.

Maps show how the glacier has retreated over the years, moving past the section of mountain that collapsed (outlined in white on the right) in the days prior to the slide. The map on the right shows the height the tsunami reached on the fjord walls.
Planet Labs

This same phenomenon is playing out from Alaska to Greenland and Norway, sometimes with deadly consequences. Across the Arctic, countries are trying to come to terms with this growing hazard. The options are not attractive: avoid vast swaths of coastline, or live with a poorly understood risk. We believe there is an obvious role for alert systems, but only if scientists have a better understanding of where and when landslides are likely to occur.

Signs that a landslide might be coming

The Tracy Arm landslide is a powerful example.

The landslide occurred in August, when warm ocean waters and heavier precipitation favor both glacier retreat and slope failure. The glacier below the landslide area had experienced rapid calving – large chunks of ice breaking off and falling into the water – and it had retreated more than a third of a mile in the two months prior. Heavy rain had been falling. Rain enters fractures in the mountain and pushes them closer to failure by increasing the water pressure in cracks.

Most provocative are the thousands of small seismic tremors that emanated from the area of the slide in the days prior to the mountainside collapsing.

We believe that this combination of signs would have been sufficient to issue progressive alerts to any ships in the vicinity and homes and businesses that could have been harmed by a tsunami at least a day prior to the failure – had a monitoring program existed.

Escalating alerts are used for everything from terrorism and nuclear plant safety to avalanches and volcanic unrest. They don’t remove the risk, but they do make it easier for people to safely coexist with hazards.

For example, though people are still killed in avalanches, alert systems have played an essential role in making winter backcountry travel safer for more people. The collapse at Tracy Arm demonstrates what could be possible for landslides.

What an alert system could look like

We believe that the combination of weather and rapid glacier retreat in early August 2025 was likely sufficient to issue an alert notifying people that the hazard may be temporarily elevated in a general area. On a yellow-orange-red scale, this would be a yellow alert.

In the hours prior to the landslide, the exponential increase in seismic events and telltale transition to what is known as seismic tremor – a continuous “hum” of seismic energy – were sufficient to communicate a time-sensitive warning for a specific region.

Seismic data from the closest monitoring station to the landslide, about 60 miles (100 kilometers) away, shows the “hum” of seismic energy increasing just ahead of the landslide, indicated by the tall yellow spike shortly after 5 a.m. Source: Alaska Earthquake Center.

These observations, recorded as a byproduct of regional earthquake monitoring, warranted an “orange” alert noting immediate concern. The signs were arguably sufficient to recommend keeping boats and ships out of the fjord.

Our research over the past few years has demonstrated that once a large landslide has started, it is possible to detect and measure the event within a couple of minutes. In this amount of time, seismic waves in the surrounding area can indicate the rough size of the landslide and whether it occurred near open water.

A monitoring program that could quickly communicate this would be able to issue a red alert, signaling an event in progress.

The National Oceanic and Atmospheric Administration’s tsunami warning program has spent decades fine-tuning rapid message dissemination. A warning system would have offered little help for ships in the immediate vicinity, but it could have provided perhaps 10 minutes of warning for those who rode out the harrowing tsunami farther away.

An animation showing the tsunami’s reach up the fjord walls after the landslide, as well as the large cresting wave as it heads down Tracy Arm. Credit: Shugar et al., 2026.

There is no landslide monitoring system operating yet at this scale in the U.S. Building one will require cooperation across state and federal agencies, and strengthened monitoring and communication networks. Even then, it will not be fail-proof.

Understanding risk, not removing it

Alert systems do not remove the risk entirely, but they are a better option than no warning at all. Over time, they also build awareness as communities and visitors get used to thinking about these hazards.

Many of the most alluring places on Earth come with significant hazards. Arctic fjords are among them. The same processes that create this hazard – glacier retreat, steep terrain, dynamic geology – are also what make these landscapes so compelling. The mix of glaciers, ice-choked waters and steep mountains is exactly what draws people to these places. People will continue to visit and experience them.

The last view of Tracy Arm, taken from the Hanse Explorer motoring away from the South Sawyer glacier, before a landslide from a mountain just out of view on the left crashed into the fjord. The landslide generated a tsunami that sent a wave nearly 1,600 feet (about 490 meters) up the mountain on the right.

The question is not whether these places should be avoided altogether, but how to help people make more informed decisions. We believe that stronger geophysical and meteorological monitoring, coupled with new research and communication channels, is the first step.

On Aug. 9, visitors unknowingly passed through a landscape on the cusp of failure. An alert system might have given tour companies and people in the area the information they needed to make more informed choices and avoid being caught by surprise.

The Conversation

Michael West is part of a cooperative effort between the Alaska Earthquake Center, the Alaska Division of Geological and Geophysical Surveys, and the U.S. Geological Survey’s Landslide Hazards Program to improve the understanding of large deep-seated landslides in Alaska. This effort receives financial support from the USGS.

Ezgi Karasozen is part of a cooperative effort between the Alaska Earthquake Center, the Alaska Division of Geological and Geophysical Surveys, and the U.S. Geological Survey’s Landslide Hazards Program to improve the understanding of large deep-seated landslides in Alaska. This effort receives financial support from the USGS.

ref. Alaska’s near-record landslide tsunami sent a wave 1,580 feet up the fjord walls – and left clues for building a warning system – https://theconversation.com/alaskas-near-record-landslide-tsunami-sent-a-wave-1-580-feet-up-the-fjord-walls-and-left-clues-for-building-a-warning-system-282017

Using diesel generators to power the AI revolution would kill hundreds of Americans a year

Source: The Conversation – USA (2) – By Peter Adams, Professor of Civil and Environmental Engineering, Engineering and Public Policy, Carnegie Mellon University

Diesel generators sit outside a data center in Ashburn, Va. Amanda Andrade-Rhoades for The Washington Post via Getty Images

With U.S. electricity demand starting to rise quickly and expected to continue rising, largely because of the power needed for data centers that process artificial intelligence, people are looking for almost any potential solution.

At the same time, some people are warning that the full projected demand may not actually materialize, which would make massive investments in new power plants unnecessary and could push Americans’ electricity rates even higher.

U.S. Secretary of Energy Chris Wright is among those who have been promoting what might seem to be an attractive idea: “We have 35 gigawatts of backup generators that are sitting there,” he told an audience of natural gas industry leaders in December 2025. He was referring to diesel-fired engines at hospitals, office complexes, corporate campuses and even data centers to provide electricity if the grid goes down.

That amount of power would be a significant step toward meeting the nation’s expected energy needs, without needing new long-term investments in power plants or transmission lines. But it’s also vital to know, as Wright went on to note, that “emissions rules or whatever” mean those generators can’t just be turned on and left running when there’s not a power outage or other emergency.

As an environmental engineer who studies air pollution from the energy system, I believe this proposal is concerning. Those emissions rules are in place because diesel-powered generators are among the dirtiest sources of energy, emitting fine particulate matter and related chemicals. Fine particulate matter from all sources is estimated to cause about 100,000 premature deaths every year in the U.S. And in fact, emissions regulations on backup generators are less stringent than those on other power sources because the generators are intended to run only in emergency situations.

If Wright’s idea took hold, diesel fumes would pour into the nation’s air, often near major metropolitan areas that already have air pollution problems. To see more closely what would happen, John Allen, a research assistant at Carnegie Mellon University, and I projected the effects on public health and air quality of running backup diesel generators at data centers.

Simulating the emissions of diesel generators

Comprehensive data about the locations of data centers, and about how many diesel generators each one has on site, is hard to nail down. Nobody has yet made a detailed proposal for which generators might be switched on, or for how long, so we did an exploratory analysis.

We started with an online database of locations of data centers. We also found documentation suggesting there is at least 35 gigawatts of diesel-powered generating capacity at data centers across the U.S., so we allocated that amount, which Wright had mentioned, proportionally to each data center’s size.

We looked at a scenario where these generators ran continuously throughout the year, generating 310 terawatt-hours of electricity. The generators might be used less – Wright himself talked about running them for only “a few hours per year.” But once they’re allowed to be turned on for regular power generation, people might get used to having that electricity available.
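As a quick check of the scenario’s arithmetic, using only figures stated in the article (35 gigawatts of capacity running continuously for a year), the energy total works out as follows. The small gap from the 310 terawatt-hour figure likely reflects rounding in the study’s assumptions.

```python
# Back-of-envelope check: 35 GW of diesel capacity running all year.
capacity_gw = 35
hours_per_year = 8760  # 24 hours x 365 days

# GW x hours = GWh; divide by 1,000 to convert to TWh.
energy_twh = capacity_gw * hours_per_year / 1000
print(round(energy_twh))  # 307 -- close to the 310 TWh cited
```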

We assumed that all diesel generators are relatively new and comply with the Environmental Protection Agency’s most recent and stringent standards, which took full effect in 2015.

We compared the air pollution created from the diesel generators with a scenario where that same amount of power – 310 terawatt-hours – came from the existing mix of power plants in regional electrical grids. This could happen if utility companies built more generation capacity of the same types that already exist in the region or built new transmission lines to deliver more power from elsewhere.

Because no air quality model is perfect, we used three different computer simulations, each of which has been published in scholarly research journals, to simulate what would happen to the diesel pollution and how people downwind would be affected.

Diesel is dirtier

We found that using diesel generators rather than grid electricity would cause significant amounts of fine particulate matter pollution that would be dangerous to people’s health. The exact results varied with each simulation and with a range of assumptions about emissions from diesel generators. But in general, we found that using backup diesel generators this way would cause about 500 more premature deaths per year in the U.S. compared with getting the same electricity from the central grid.

In a scenario where diesel generators were somewhat dirtier than the most recent standards, one air quality model had more than 800 additional people dying prematurely each year nationwide.

In the counties that would be hardest hit by the diesel generators’ pollution, the concentrations of fine particulate matter would increase by 0.25 to 2 micrograms per cubic meter of air, depending on the location and other assumptions for our calculations. This might not sound like a lot, but most urban areas in the U.S. already have fine particulate air pollution that’s close to the EPA limit of 9 micrograms per cubic meter. Adding that much more pollution risks tipping those communities beyond federal standards.
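To illustrate why a seemingly small increase matters, here is a sketch using the study’s modeled range of added pollution against the EPA annual limit. The baseline concentrations below are hypothetical stand-ins for “close to the EPA limit,” not values from the study.

```python
# Illustrative check: can 0.25-2 ug/m3 of added diesel pollution tip a
# county past the EPA annual fine-particulate limit of 9 ug/m3?
EPA_LIMIT = 9.0  # micrograms per cubic meter

for baseline in (7.5, 8.5):       # hypothetical existing PM2.5 levels
    for added in (0.25, 2.0):     # range modeled in the study
        total = baseline + added
        status = "exceeds" if total > EPA_LIMIT else "meets"
        print(f"{baseline} + {added} = {total:.2f} ug/m3 -> {status} the standard")
```

Even a county comfortably under the limit today (7.5) would exceed it under the high end of the modeled range.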

The Clean Air Act requires states to adjust their emissions to meet standards, so increasing pollution from backup generators would require cuts in emissions elsewhere, such as from power plants and transportation.

Smoke billows from the top of a building labeled 'Sheraton.'
A diesel generator malfunction on a hotel roof in Denver in 2015 sent black smoke into the sky. Firefighters determined there was no fire.
Andy Cross/The Denver Post via Getty Images

Inspection and maintenance challenges

The results concern us, but reality might be even worse. We assumed that diesel generators would meet the most recent Tier 4 EPA emissions standards, but they may be older or exempt for other reasons.

For all our simulations, we assumed inspections and maintenance would keep emissions controls functioning properly at all generators. Modern diesel particulate filters are effective at reducing emissions, though not eliminating them entirely. When those filters fail, emissions skyrocket. Monitoring and maintenance at all of the generators, if they were running continuously, would be a logistical nightmare for regulators and the owners of the generators, and likely expensive as well.

Historically, centralized power plants that have thorough on-site monitoring are the most likely to have emissions control equipment running correctly to reduce emissions verifiably. Shifting to smaller generator units in a wide range of locations creates more potential points of failure and makes it harder to figure out that something has gone wrong, and where.

In our analysis, we compared backup diesel generators with the current electrical grid, where 60% of generation is still from fossil fuels. Increasing generation from renewable energy sources, such as solar panels and wind power, could help meet the rising demand for power without the additional emissions of dangerous air pollution.

The Conversation

Peter Adams has received research funding from various federal organizations, including EPA, NASA, NSF, and DOE as well as private philanthropic organizations.

ref. Using diesel generators to power the AI revolution would kill hundreds of Americans a year – https://theconversation.com/using-diesel-generators-to-power-the-ai-revolution-would-kill-hundreds-of-americans-a-year-280892

Fire is transforming the US West’s public lands – research shows overlooked cost to recreation

Source: The Conversation – USA (2) – By Kyle Manley, Postdoctoral research fellow, University of Colorado Boulder

Large-scale wildfires seem to turn visitors away, while prescribed burning may have the opposite effect. Helen H. Richardson/MediaNews Group/The Denver Post via Getty Images

Colorado’s two largest fires on record, the Cameron Peak and East Troublesome fires, burned hundreds of thousands of acres across some of the state’s most visited landscapes in 2020.

The fires scorched trails, campgrounds and beloved ecosystems in and around Rocky Mountain National Park and the Arapaho and Roosevelt national forests.

More than five years later, the scars remain stark: blackened hillsides, closed trails and bare slopes where forests once stood. According to our recent research, which has not yet been peer reviewed, the fires caused significant and lasting declines in visitation at the burned sites.

A sign says readers are entering a burned area with hazards.
The East Troublesome Fire burned nearly 200,000 acres. Years later, the area is still recovering.
Jim West/UCG/Universal Images Group via Getty Images

Even after the 2020 fires, Rocky Mountain National Park attracted 4.2 million visitors in 2024, generating US$862 million in economic output in local gateway communities such as Estes Park and Grand Lake. Rocky Mountain National Park is a significant contributor to the nearly 1 billion annual visits and $700 billion in spending that public lands generate nationwide as outdoor recreation continues to grow. It also supports a variety of important social values beyond the economy, including mental health and well-being, cultural and spiritual connection, and the sense of place that binds people to landscapes.

But these landscapes are changing fast. Wildfires are affecting our public lands at an accelerating scale and increasing intensity. Yet how fire affects recreation has remained poorly understood.

That’s the question I set out to answer with an interdisciplinary team of researchers. As a scientist who studies the benefits nature provides to people and how those benefits are affected by climate change, I wanted to know whether fire is eroding one of the most recognized and valued benefits of nature: recreation.

Tracking visitation across burned landscapes

Our first challenge was gathering data about visits to these outdoor areas.

A handful of monitored public lands track visitor counts, but those counts can tell us only so much about how fires affect recreation. Wildfires often cross boundaries, for example from a national park into a national forest, and span dispersed remote areas where no one is monitoring visitation.

But there is another source of information: every time someone logs a hike on AllTrails, posts a nature photo to Flickr, reports a bird sighting on eBird or simply carries a phone into the backcountry, they leave a precise digital trace of where and when they spent time outdoors. We trained a visitation model on the on-site counts that do exist at monitored sites, using millions of these digital traces, alongside other recreation drivers such as weather, land cover and site characteristics, as predictors.

Across Colorado and California, this approach let us track visitation in burned areas across hundreds of wildfires and prescribed burns for years before and after each fire, even in the remote, unmonitored landscapes where most fires burn. But changes in visitation can have many causes, including weather, broader recreation trends, even pandemic effects. So we statistically paired each burned site with a very similar unburned site elsewhere on public lands. This let us measure not just what happened after each fire, but also what we could expect would have happened without it. The gap between those two is how fire actually affected recreation.
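The matched-pair logic described above can be sketched in a few lines. The visit counts here are invented for illustration; the point is that the fire’s effect is the burned site’s change minus its matched control site’s change, which strips out weather, recreation trends and pandemic effects common to both.

```python
# Hypothetical annual visit counts for a burned site and its
# statistically matched unburned control site.
burned_before, burned_after = 100_000, 85_000
control_before, control_after = 100_000, 98_000

burned_change = (burned_after - burned_before) / burned_before      # -15%
control_change = (control_after - control_before) / control_before  # -2%

# The gap between the two changes is the estimated effect of the fire.
fire_effect = burned_change - control_change
print(f"Estimated effect of fire on visitation: {fire_effect:.0%}")  # -13%
```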

We found that it’s not simply fire itself that drives people away, but a confluence of the type and severity of a fire, the ecosystem that burned and the social values connected to the fire-impacted landscape.

Wildfires that empty trails – and ones that don’t

In Colorado, the average wildfire reduced visitation to burned sites by 8% in the year of the fire. Those declines never recovered to prefire levels for the five-year postfire period we tracked.

As fires grew larger and burned more intensely, recreational losses sharpened. Visitation dropped 15% to 20% at sites burned at higher severity. These declines lasted years. Take the Cameron Peak Fire, for example. The Arapaho and Roosevelt national forests typically see about 8 million visits a year. Our model estimates that the area burned in the Cameron Peak Fire drew nearly 500,000 visits annually before the fire. Applying our 15% to 20% average declines estimated for moderate- to high-severity wildfires, that translates to roughly 70,000 to 100,000 fewer trips annually, losses our analysis finds persist for years.
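The Cameron Peak figures above follow directly from the numbers in the text: applying the 15% to 20% decline to the nearly 500,000 annual visits gives the stated range of lost trips.

```python
# Checking the article's arithmetic with its own figures.
annual_visits = 500_000   # approximate prefire visits to the burned area
low, high = 0.15, 0.20    # decline range for moderate- to high-severity fires

print(int(annual_visits * low), int(annual_visits * high))
# 75000 100000 -- consistent with "roughly 70,000 to 100,000 fewer trips"
```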

Two adults and two children gather together in front of a selfie stick with mountains behind them.
A family poses for a selfie in front of the Gore Range overlook in Rocky Mountain National Park in Colorado. The park saw 4.2 million visitors in 2024.
Helen H. Richardson/The Denver Post via Getty Images

But these postfire recreational losses were largely concentrated in forested landscapes. Wildfires that occurred in grasslands, such as the southeastern Colorado Cherry Canyon Fire in 2020, by contrast, seemed to barely register with visitors. Visitation at these grassland-dominated burn sites showed essentially no change. This pattern reveals something important. People’s recreational responses to fire are not just about the physical damage and accessibility impacts. They reflect the particular relationships people hold with different landscapes. Grasses recover within a season or two, and the wide-open vistas that draw people to those landscapes remain intact, even after a fire.

Forests are different. The towering canopies, shaded trails and old-growth character that people value may take decades or centuries to return, if they return at all in a changing climate.

In California, our analysis reveals how these human-nature relationships also vary across regions, with much sharper and more persistent losses than in Colorado. California wildfires reduced visitation by 18% in the first year on average, and high-severity forest fires produced losses of 33% that showed no recovery five years after the fire. California’s fires tend to be significantly larger, more severe and more concentrated in forested landscapes.

However, small fires in California actually increased visitation by 8%. This suggests that after years of megafires, a small burn may barely register. Californians have grown accustomed to a fire-shaped landscape, and a modest fire scar may not be enough to keep them off the trails.

Prescribed fire tells a different story

As wildfire intensifies, land managers are responding by expanding prescribed fire programs. They are intentionally setting lower-intensity fires to clear out the dead trees, dry brush and accumulated debris built up from over a century of fire suppression that can feed catastrophic wildfires.

Current prescribed fire planning tends to focus on reducing fire suppression costs and protecting properties, as well as managing ecosystems by reducing fuel loads and improving wildlife habitat. But managers are scaling up these programs without knowing how prescribed fire affects the recreationists who visit these landscapes, a gap our analysis sets out to fill.

A VOX video on how decades of stopping forest fires made them worse.

In Colorado, we found that on average prescribed fire actually increased visitation by about 8% in the year of the fire. This increase may reflect improved trail conditions, enhanced wildlife habitat that attracts birders and hunters, or positive public perceptions of proactive management.

In California, prescribed fire on average decreased visitation by about 3%. Crucially, in stark contrast to wildfire, impacts were short-lived, with visitation returning to prefire levels within three years in both states.

Beyond their direct effects on recreation, prescribed burns also reduce the likelihood of future extreme fires – the very fires that drive the largest and longest-lasting recreation declines.

Why this matters beyond fire

Some of the Colorado communities that are most dependent economically on recreation experienced the steepest visitation declines in the period we studied. These are towns such as Grand Lake, Durango and Gunnison, where shops, hotels, restaurants and seasonal workers rely on a steady flow of visitors, and where sales tax from those visitors funds the infrastructure and daily life of the community. Persistent declines in visitation threaten the long-term viability of these places.

The implications run beyond fire. Calls to incorporate less tangible benefits of nature, such as recreation, into climate impact assessments, extreme events research and conservation planning have grown recently. Turning those calls into action requires evidence that can help land managers make decisions. Our work provides some of that evidence for fire and a framework that can be used for other disturbances, such as floods and droughts. Without accounting for these less tangible values of nature, increasingly extreme climate impacts will keep eroding the experiences, livelihoods and connections that sustain the well-being of millions of Americans.

Read more of our stories about Colorado.

The Conversation

Kyle Manley receives funding from the CIRES Visiting Fellows Program, funded by NOAA cooperative agreement NA22OAR4320151.

ref. Fire is transforming the US West’s public lands – research shows overlooked cost to recreation – https://theconversation.com/fire-is-transforming-the-us-wests-public-lands-research-shows-overlooked-cost-to-recreation-279831

Sleep apnea compromises far more than a good night’s rest – 2 neuroscientists outline the risks and the need for better diagnosis

Source: The Conversation – USA (3) – By Erika Yamazaki, PhD candidate in Neuroscience, Northwestern University

Snoring can be − but isn’t always − a symptom of sleep apnea. PeopleImages/iStock via Getty Images

Annual medical checkups typically cover the basics: diet, exercise and mental state. Surprisingly, many primary care providers fail to ask about one of the fundamental contributors to well-being: sleep.

We are two neuroscientists who study sleep and memory. We have both experienced this omission with our own doctors, even though we represent different ages and genders.

When asked, almost everyone has complaints about their sleep, yet most people fail to prioritize it. But poor sleep shouldn’t be ignored.

One particularly problematic sleep disorder is sleep apnea, and it is not rare. The condition affects nearly 1 billion people worldwide, estimates suggest, and the number continues to grow. In October 2025, former basketball star Shaquille O’Neal was featured in an awareness campaign for sleep apnea. But much greater awareness is needed.

The most common type of sleep apnea, obstructive sleep apnea, is characterized by repeated blockage of breathing during sleep, often resulting in sleepiness during the day, headaches or snoring – or a combination of these – and in the long term, increased risk for cardiovascular diseases.

Patients may not fit the typical profile: The stereotype is that people with sleep apnea are older men trending toward obesity. Others may find that their sleep-related complaints are overlooked at wellness checks. These are missed opportunities for gathering critical health information that is important for diagnosis. Sleep apnea thus remains undiagnosed far too often in women and also in other groups.

Sleep apnea is not just about sleep

Sleep apnea is more than a sleep disorder. While it manifests when you are sleeping, with repeated partial or total pauses of breathing during sleep – termed hypopneas and apneas – its effects extend far beyond the night.

Repeated apneas and hypopneas tend to occur alongside reductions in oxygen levels in the brain and body. These episodes can happen more than 100 times per hour and on average last about 20 seconds. Despite brief awakenings that can occur after a person with sleep apnea stops breathing, by the morning they usually don’t remember ever pausing their breathing.
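Putting the article’s two numbers together shows how severe this can be: at more than 100 episodes per hour, each lasting about 20 seconds, a person can spend a large fraction of every sleeping hour not breathing normally.

```python
# Severe-case illustration using the figures stated in the text.
episodes_per_hour = 100   # "more than 100 times per hour"
seconds_per_episode = 20  # "on average last about 20 seconds"

paused_minutes = episodes_per_hour * seconds_per_episode / 60
print(f"~{paused_minutes:.0f} minutes of each hour with breathing paused")  # ~33
```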

Reduced oxygen then leads to increases in blood pressure and heart rate, which stresses the cardiovascular system. Untreated sleep apnea can lead to a host of cardiovascular diseases, such as hypertension, heart failure and stroke. Sleep apnea is also associated with increased risk of dementia, as in Alzheimer’s disease and other neurodegenerative disorders.

Beyond health effects, the disorder is linked to reduced quality of life, a higher risk for motor vehicle accidents and increased medical costs for individuals, as well as for societies and governments.

Graphic illustration of obstructive sleep apnea with obstructed sleep on the left and an obstructed airway on the right.
Sleep apnea is characterized by breathing blockages during sleep.
Pikovit44/iStock via Getty Images Plus

A growing problem meets new solutions

The growing prevalence of obstructive sleep apnea reflects multiple factors. Greater awareness among medical professionals and accessible screening tools have helped.

At the same time, an increase in obesity rates and an aging global population have also contributed to the rise in cases diagnosed.

The treatment of sleep apnea has also advanced considerably over the past 20 years. The standard treatment for sleep apnea is continuous positive airway pressure, or CPAP, which prevents airway collapse with a stream of air through the mouth or nose.

However, people often report that CPAP is burdensome, and for some the therapy is intolerable. For those who dislike CPAP, implantable nerve stimulation devices can be effective. Other therapies include oral appliances to shift the jaw forward and open the airway, positional therapies to avoid back-sleeping, and myofunctional training to strengthen tongue and throat muscles.

Nevertheless, new treatment approaches are still needed. In late 2024, the U.S. Food and Drug Administration approved tirzepatide – the active ingredient in the GLP-1 drugs Mounjaro and Zepbound – for treating obstructive sleep apnea. The drug helps by lowering body weight, given that excess weight is associated with the disorder.

Both new and long-standing treatments for sleep apnea can be effective in reducing the detrimental health consequences. Yet these advances raise an important question: Who gets diagnosed and ultimately benefits from the treatments – and who doesn’t?

CPAP machine diagram with arrows pointing to air flow on a person wearing a mask in a bed.
CPAP is the most common treatment for sleep apnea, but many people find it intolerable.
VectorMine/iStock via Getty Images

Who gets diagnosed – and who gets missed

Despite the growing prevalence of sleep apnea, diagnosis and treatment do not occur equally across populations. Women with sleep apnea often experience headaches, insomnia and depression – symptoms that common screening tools for sleep apnea do not mention.

Hormonal changes throughout a woman’s life, different anatomy of the airway and differences in sensitivity to higher levels of carbon dioxide in the blood compared to men all suggest that more research and better tools are needed to improve healthcare for women with sleep apnea.

Many of the current diagnostic tools and treatment standards were developed based on studies in white populations.

Pulse oximetry on the finger detects decreases in blood oxygen, a key marker of sleep apnea screening and diagnosis. These finger oximeters are less sensitive in people with darker skin pigment, which likely leads to underestimates of severity.

At the same time, Medicaid beneficiaries in the U.S., who are disproportionately from racial minorities, are more likely to be denied long-term coverage for CPAP treatment, despite the finding that Black men have more severe sleep apnea than their white counterparts.

What you can do

Your probability of getting a referral to a specialist increases ninefold when you ask your primary care provider about sleep apnea. And there’s no need to be overly concerned about undergoing a sleep study in a hospital. Sleep studies can now be conducted at home to diagnose sleep apnea.

If you or your bed partner have any suspicions based on even a small subset of the possible symptoms of sleep apnea, bring it up with your healthcare provider. Mention any daytime symptoms, such as excessive sleepiness or headaches, and any nighttime symptoms, such as frequent urination, waking up short of breath, snoring or insomnia.

Starting the conversation may be the first step toward diagnosis and treatment – and to better health and well-being.

The Conversation

Ken Paller receives research funding from the US National Institutes of Health and the Tiny Blue Dot Foundation. He consults for and owns shares in NextSense, Inc.

Erika Yamazaki does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Sleep apnea compromises far more than a good night’s rest – 2 neuroscientists outline the risks and the need for better diagnosis – https://theconversation.com/sleep-apnea-compromises-far-more-than-a-good-nights-rest-2-neuroscientists-outline-the-risks-and-the-need-for-better-diagnosis-276732

Clinical trials that are actually marketing ploys targeting doctors – how seeding trials put profit over patients

Source: The Conversation – USA (3) – By Sukhun Kang, Assistant Professor of Technology Management, University of California, Santa Barbara

Marketing trials aren’t conducted for scientific knowledge or the benefit of patients. Ekin Kizilkaya/iStock via Getty Images Plus

Some clinical trials aren’t designed to answer scientific questions. They’re designed to market drugs. In our recently published research, my team and I analyzed over 34,000 industry-funded trials and found that hundreds of studies across seven medical fields were likely designed to promote a drug to physicians rather than to generate scientific data. For some fields, nearly 1% of clinical trials were for marketing purposes.

Known as seeding trials, these studies prioritize marketing over science while disguising their commercial purpose as legitimate research. Pharmaceutical companies use them to familiarize physicians with new products under the guise of data collection. Participants sign consent forms, believing they are contributing to medical knowledge.

In reality, patients are absorbing risks that serve corporate interests rather than resolving genuine uncertainty about the therapeutic potential of a drug.

The term seeding trial first entered the medical literature in 1994, when then-commissioner of the Food and Drug Administration David Kessler and his colleagues described such studies as attempts to entice doctors to prescribe new drugs through trials that appear to serve little scientific purpose.

Three decades later, the problem of seeding trials persists.

How seeding trials work

While the structure of a seeding trial looks similar to legitimate clinical trials on the surface, the objectives are different.

In a typical clinical trial, researchers recruit patients across clinics and hospitals to test whether a treatment is safe and effective.

In contrast, the pharmaceutical company behind a seeding trial enrolls large numbers of physicians at many sites, each seeing only a few patients. The goal is exposure: getting doctors to prescribe the drug, not generating robust data. Doctors may be selected based on their prescribing volume rather than their research credentials.

In a legitimate trial, the number of study sites reflects the number of patients needed to answer a scientific question. In a seeding trial, the number of sites reflects the number of doctors the company wants to reach.

Doctor in white coat, stethoscope and tie gesturing to pill bottle, talking to patient
Seeding trials recruit doctors based on their prescribing volume.
Cameravit/iStock via Getty Images Plus

Seeding trials often target drugs already on the market and operate as Phase 4, or postmarketing, studies. These types of studies are typically conducted after a drug has been approved to monitor its long-term safety or effectiveness. This trial stage receives less regulatory scrutiny than trials for initial drug approval, and the aims of the study may have limited relevance to actual patient care. For example, a seeding trial might measure whether patients prefer the taste of a new formulation or how quickly a drug dissolves in the stomach, rather than whether it actually improves health outcomes.

Legitimate trials also have independent oversight, with committees of scientists and ethicists who monitor the study’s progress and can halt it if patients are being harmed.

In a seeding trial, this oversight is often minimal. The sponsor of the study – typically the pharmaceutical company funding the research – maintains heavy control over the trial’s design and conduct.

Cases that exposed seeding trials

Seeding trials had attracted little public attention until litigation in the 1990s forced open the internal files of two major pharmaceutical companies, revealing that studies presented as science had been designed as marketing campaigns.

The most notorious example is Merck’s ADVANTAGE trial for the painkiller Vioxx (rofecoxib), which was first approved in 1999. The company presented the study, which ran from 1999 to 2001, as scientific research, but internal documents revealed that its primary purpose was to encourage physicians to prescribe Vioxx to their patients.

Meanwhile, Merck was accused of downplaying the significant cardiovascular risks associated with the drug. The consequences were severe: Approximately 30,000 lawsuits and nearly $5 billion in compensation followed Vioxx’s withdrawal from the market.

Close-up of bottle of Vioxx, with round pills arranged around it
Merck downplayed Vioxx’s risk of heart attack and stroke.
AP Photo/Daniel Hulshizer

Parke-Davis’ STEPS trial for the painkiller Neurontin (gabapentin) – first approved in 1993 for epilepsy – followed a similar pattern of disguising marketing as research. Internal documents showed that the trial, which ran from 1996 to 1998, aimed to disseminate marketing messages through the medical literature and encourage clinicians to prescribe the drug off-label for conditions it was not approved for, such as neuropathic pain and bipolar disorder.

Unlike Vioxx, gabapentin was never withdrawn. The trial’s commercial legacy outlasted its scientific one.

These cases came to light only because litigation forced the release of internal company documents. Without that exposure, they would have remained indistinguishable from ordinary research.

How common are seeding trials?

My team and I study how pharmaceutical firms innovate and respond to regulations. To estimate the prevalence of seeding trials, we analyzed nearly 34,400 industry-funded Phase 3 and Phase 4 studies that posted results on ClinicalTrials.gov between 1998 and 2024. The trials covered seven therapeutic areas where researchers had previously documented seeding trials, including major depressive disorder, epilepsy, Type 2 diabetes and rheumatoid arthritis.

We screened these trials for criteria that prior research has identified as hallmarks of a seeded trial, such as low patient-to-site ratios and limited independent oversight.

Ultimately, we identified 204 trials – 0.59% – that had characteristics consistent with marketing-driven study design. The prevalence of these probable seeding trials in different disciplines ranged from 0.15% in osteoarthritis to 0.98% in rheumatoid arthritis.
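The overall prevalence figure can be reproduced directly from the counts given in the text.

```python
# Reproducing the study's headline prevalence from its own numbers.
flagged = 204           # trials with marketing-driven characteristics
total_trials = 34_400   # "nearly 34,400" industry-funded trials screened

prevalence = flagged / total_trials
print(f"{prevalence:.2%}")  # 0.59%
```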

These figures might understate the true scope of marketing-driven research. The criteria we used capture only the most identifiable cases of studies driven by marketing purposes. Definitively identifying seeding trials requires access to internal sponsor documents revealing the intent of the study, and those documents surface only through litigation or whistleblowers.

Many trials occupy an ambiguous middle ground, generating useful data while simultaneously serving promotional objectives. Without systematic surveillance, the full extent of marketing-driven studies remains unknown.

Close-up of person holding an orange pill bottle
Pharmaceutical companies have a vested interest in getting their drug products to doctors and patients.
Catherine McQueen/Moment via Getty Images

The criteria to identify seeding trials also require careful interpretation. A low patient-to-site ratio, for instance, can reflect the practical difficulties of enrolling patients in studies of drugs already on the market, such as trials testing new drug combinations or new uses for an existing treatment. These markers are best understood as signals of possible marketing intent warranting closer scrutiny, not proof of marketing intent.

Whether the prevalence of seeding trials has shifted with the expansion of transparency requirements over the past decade cannot be determined from existing registry data.

What can be done

Seeding trials may be uncommon, but they are not accidental. They reflect structural incentives in a system where the companies that fund research also stand to gain from its results. Strengthening transparency in clinical trial registration, funding disclosure and oversight would help ensure that clinical research serves patients first.

Along with other researchers, we’ve proposed reforms that cluster around two areas. The first is standardized reporting that discloses trial funding, investigator payments, enrollment criteria and the rationale for site selection. The second is independent oversight, such as committees funded through pooled industry levies, which are fees collected from pharmaceutical companies to finance independent monitoring. Random audits with publicly available results are one form of such oversight.

Some infrastructure for tracking financial relationships between industry and physicians is already in place. In the U.S., the Open Payments database allows public tracking of industry payments to physicians. But regulatory variability across countries creates openings for companies to conduct marketing-driven trials in jurisdictions with weaker oversight, particularly in low- and middle-income countries.

Clinicians can protect themselves and their patients by screening for a set of red flags before agreeing to participate in a trial or cite one in their research. These include unusually low patient-to-site ratios, investigators selected based on prescribing volume, sponsor-dominated oversight and study endpoints of limited clinical relevance. Consent forms are among the few documents patients see before enrolling, and clearer disclosure of the commercial and scientific purpose of a study is among the reforms we have called for.
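The screening described above can be sketched as a simple checklist. This is a hypothetical illustration only: the `red_flags` function and its ratio threshold are invented for the example and are not clinical guidance.

```python
# Hypothetical sketch of screening a trial for seeding red flags.
# The threshold of 5 patients per site is invented for illustration;
# real scrutiny would compare against norms for the drug and trial phase.

def red_flags(patients: int, sites: int,
              sponsor_dominates_oversight: bool,
              investigators_picked_by_prescribing_volume: bool) -> list[str]:
    """Return the list of red flags a trial raises."""
    flags = []
    if sites > 0 and patients / sites < 5:
        flags.append("low patient-to-site ratio")
    if investigators_picked_by_prescribing_volume:
        flags.append("investigators selected by prescribing volume")
    if sponsor_dominates_oversight:
        flags.append("sponsor-dominated oversight")
    return flags

# A trial with 120 patients spread thinly across 60 sites, overseen
# mainly by its sponsor, raises two of the flags described above.
flags = red_flags(patients=120, sites=60,
                  sponsor_dominates_oversight=True,
                  investigators_picked_by_prescribing_volume=False)
```

As the article notes, any one flag is a signal warranting closer scrutiny, not proof of marketing intent on its own.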

For patients, clinicians and regulators alike, the question to ask of any trial is the same: Whom does it really serve?

The Conversation

Sukhun Kang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Clinical trials that are actually marketing ploys targeting doctors – how seeding trials put profit over patients – https://theconversation.com/clinical-trials-that-are-actually-marketing-ploys-targeting-doctors-how-seeding-trials-put-profit-over-patients-280398

So your new ‘co-worker’ is an AI agent – here’s how to make the best of your human-machine relationship 

Source: The Conversation – USA – By Nigel Melville, Associate Professor of Information Systems, University of Michigan

Meet your new colleague. AndreyPopov/iStock via Getty Images

Judging by a slew of recent corporate announcements, your next “co-worker” might be an artificial intelligence agent – doing the work of an assistant, job scheduler, morning debriefer, learning coach and more.

JPMorgan Chase, the largest U.S. bank, describes a clear vision for a new world of omnipresent AI agents: “Every employee will have their own personalized AI assistant; every process is powered by AI agents, and every client experience has an AI concierge.”

In brick-and-mortar retail, Walmart is already implementing its own vision for agents, which support customers, in-store employees and other parts of the business, with supervisor agents assigning tasks to subagents much like managers oversee employees.

What these and many other large organizations realize is that agents don’t just answer questions, like an AI-powered search engine or simple chatbot. They complete real work by planning tasks, taking actions and checking results to achieve a goal.

But there’s a problem. While companies in industries ranging from finance and tech to logistics and legal are rapidly embracing the promise of AI agents, the flesh-and-blood workers they’re meant to assist – and sometimes replace – are struggling to adapt, hurting morale and productivity in the process.

The result is a growing climate of fear about AI job insecurity. FOBO – fear of becoming obsolete – is now a thing. A recent survey by consultancy KPMG found that 52% of workers report they are concerned that AI could eventually take their jobs. And some are fighting back. In another survey, nearly one-third said they are actively sabotaging their company’s AI strategy.

To make matters worse, some of these AI agents are going rogue, deleting data or executing other unintended actions.

My research on AI and agent capabilities, value and risk, as well as emerging studies of the cognitive implications of AI, the future of work and the role of AI in workplace inequality, suggest two key lessons for anyone navigating this new AI agent reality:

First, learn how the agents you’re working with operate, including what they do well, where they fall short and how to catch mistakes.

Second, lean into your fundamentally human strengths. These are things agents can’t replicate. Doing so can also help you sustain your own health and well-being.

Rise of the AI agents

AI agents began entering the workforce in 2025 – mainly in tech, finance and customer service – as the next stage of the generative AI revolution. But in 2026, the AI-powered automators are increasingly being deployed in other areas, such as legal and compliance, supply chain management, research and development, healthcare services and retail.

One example is global transportation giant FedEx, which is planning an entire AI agent workforce for its logistics network. The company plans to create “manager agents,” “audit agents” and “worker agents” to establish a trail of accountability for their actions, according to The Wall Street Journal.

The theme of agents working together is echoed by North American food service company Gordon Food Service, which is using cross-team agents to reshape its product sourcing strategy.

As companies rapidly adopt the latest AI systems, agents are taking on increasingly diverse roles in the workplace that leverage their ability to do autonomous work.

It’s all driven by economics, with 88% of early corporate adopters reporting a return on investment on at least one use of an AI agent, according to a Google survey of senior business leaders.

Retail giant Amazon says customers who engage with its Rufus AI agent while shopping are 60% more likely to make a purchase compared with those who don’t get help from Rufus. Overall, Rufus is expected to generate over $10 billion in additional annual sales for Amazon, compared with a baseline without Rufus.

Anticipating gains from agents, global consultancy McKinsey already has 25,000 agents doing various tasks. It plans to have as many AI agents as human workers by 2027.

And new model releases, such as Anthropic’s Mythos, will expand what is possible and likely accelerate agent deployment.


Agents still can’t do it alone

A 2023 study I conducted found that AI has the capacity to simulate human capabilities such as cognition, decision-making, creativity and collaboration with people and other agents. The simulation is imperfect, however, so agents need support.

For example, agents are resourceful and relentless. They try repeatedly until they get results, without the loss of motivation humans sometimes experience. But they can also behave unpredictably, even taking harmful actions, such as deleting emails or conducting a smear campaign. And research illustrates that agents can be easily tricked into bad behavior – for example, overcorrecting when told not to do something, or being swayed by appeals to urgency and other simple manipulation tricks.

Agents can be quirky too, using odd emojis in formal business writing or responding cynically when you just want the facts. And unlike humans, agents lack emotion, self-awareness or intent. When they fail, such as by pursuing misaligned goals, it’s no more personal than an espresso machine breaking out of spite.

In short, agents can act in unpredictable ways, and it’s difficult to know when that will happen. Managing this uncertainty is a new task for employees, and it comes at a time when many employers have heightened expectations of productivity gains from adopting AI agents.

The bottom line is that if you understand how your agents behave, you’ll be more productive working with them, better positioned to avoid risks, and more valued yourself.

Get to know your agent

So what do you do if your boss suddenly tells you you’ll be working with an AI agent from now on?

Just like you would with a new human colleague, the first thing you should do is get to know it. In essence, you need to learn how to collaborate with agents effectively, how to evaluate their performance, what makes them tick and associated ethical implications.

And then jump right in. Give the agent a task and observe how it responds. Try different approaches and pay attention to its output quality, behavior and style. Focus on three essentials:

  • Clarity of intent: Define exactly what to do in clear instructions, including what information your agent needs, the role it is performing, limitations on its behavior, and what success looks like for the task.

  • Evaluation: Measure results against clear criteria.

  • Guidance: Answer the agent’s questions as they arise during the task.

Overall, this will help you effectively use and critically evaluate AI agents.
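The three essentials above can be sketched as a simple briefing template. This is a hypothetical illustration: the `AgentBrief` structure and `evaluate` helper are invented for the example and do not correspond to any real agent product or API.

```python
# Hypothetical sketch of briefing an AI agent: clarity of intent,
# evaluation against criteria, and targeted follow-up guidance.

from dataclasses import dataclass, field

@dataclass
class AgentBrief:
    role: str                  # the role the agent is performing
    task: str                  # exactly what to do
    context: list[str] = field(default_factory=list)   # information the agent needs
    limits: list[str] = field(default_factory=list)    # limitations on behavior
    success_criteria: list[str] = field(default_factory=list)  # what success looks like

    def to_prompt(self) -> str:
        """Render the brief as a single, unambiguous instruction."""
        parts = [f"Role: {self.role}", f"Task: {self.task}"]
        if self.context:
            parts.append("Context: " + "; ".join(self.context))
        if self.limits:
            parts.append("Limits: " + "; ".join(self.limits))
        if self.success_criteria:
            parts.append("Success looks like: " + "; ".join(self.success_criteria))
        return "\n".join(parts)

def evaluate(output: str, criteria: list[str]) -> list[str]:
    """Return the criteria the output does not yet mention, so a human
    can guide the agent with targeted follow-up instructions."""
    return [c for c in criteria if c.lower() not in output.lower()]

brief = AgentBrief(
    role="meeting scheduler",
    task="Propose three 30-minute slots for a team retro next week",
    limits=["working hours only", "no Fridays"],
    success_criteria=["three slots", "30 minutes"],
)
prompt = brief.to_prompt()
gaps = evaluate("Here are three slots, each 30 minutes long...", brief.success_criteria)
```

The same loop applies whatever tool you use: state the role, task, limits and success criteria up front, then check the output against those criteria and feed any gaps back as guidance.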

Leaning into your humanness

Your agent deskmate still needs you – and your humanness – to be effective.

There’s a growing understanding that in a world of ubiquitous agents, certain skills, such as analyzing information, are becoming less important, while other skills, such as interpersonal and communication skills, are increasing in importance. Think about it: No AI agent can read a room like a human can and pick up on the vibe.

But making the switch will not be easy. It will require psychological work.

Researcher Tomas Chamorro-Premuzic suggests connecting with others as only humans can do, and unlocking your curiosity, while your agent handles the drudgery. In other words, focus on qualities that AI agents don’t have, such as the ability to pick up on nonverbal cues, deliver a pitch with a human touch, manage conflict and build relationships. These skills are the glue that holds human teams together.

AI agents are likely to become a significant part of the workplace. But how and how fast that will happen is unknown.

To make the best of it, learn how to work effectively with agents and embrace your own humanness. This way, you’ll be in a better position to make informed decisions about how to interact with humans and agents alike.

The Conversation

Nigel Melville consults for organizations in the area of AI, owns shares in several tech companies that provide AI services, and has received grants from IT leadership organizations to conduct research on AI in organizations. He does not own shares in any companies that focus on agentic AI.

ref. So your new ‘co-worker’ is an AI agent – here’s how to make the best of your human-machine relationship  – https://theconversation.com/so-your-new-co-worker-is-an-ai-agent-heres-how-to-make-the-best-of-your-human-machine-relationship-276011

US violent crime is at its lowest in more than a century – but the funding that helped reduce it is disappearing

Source: The Conversation – USA – By Andrea Hagan, Instructor of Criminology & Justice, Loyola University New Orleans

Homicides across 35 major American cities fell 21% in 2025. South_agency/Getty Images

The United States is experiencing one of the steepest declines in violent crime in modern history, including a murder rate at its lowest point in more than a century.

Homicides across 35 major American cities fell 21% in 2025, amounting to 922 fewer people killed. Robberies dropped 23%. Gun assaults declined 22%. Carjackings plummeted 43%.

Yet the Trump administration has yanked hundreds of millions of dollars from the programs that helped make those numbers possible.

As a scholar focused on how policy decisions and structural conditions shape crime in marginalized communities, I see a pattern forming that could put these historic gains at serious risk.

‘Wasteful grants’

In April 2025, the Department of Justice terminated 365 previously awarded grants. About US$500 million in promised funds evaporated, affecting more than 550 organizations across 48 states.

The cuts stretched across the public safety landscape: community violence intervention, victim services, law enforcement training, juvenile justice, offender reentry and criminal justice research.

Then-Attorney General Pam Bondi described the cancellations as eliminating “wasteful grants.” The White House argued that the grant programs had been “funding DEI and cultural Marxism” rather than helping to keep Americans safe.

The DOJ’s fiscal year 2026 budget proposal reduces the pool of funds for public safety and justice programs by an additional $850 million – about a 15% decrease from the prior year.

A prison cell is seen with its door partly open.
A law supporting ex-inmates with temporary housing and healthcare lost $40 million in funding.
Edwin Remsberg/Getty Images

Bipartisan programs

On the ground, the effects of the cancellations were immediate.

Initiatives implementing a federal law to support ex-inmates with temporary housing, job training and healthcare lost $40 million in funding, according to the Brennan Center for Justice at New York University.

Many of the terminated programs had deep bipartisan roots.

Project Safe Neighborhoods, a crime-reduction initiative launched in 2001 under President George W. Bush, lost its training funds, the Council on Criminal Justice found. Also axed was an anti-terrorism program that had trained more than 430,000 state and local law enforcement officers and other partners since 1996.

More modest programs were targeted as well.

In rural Oregon, a DOJ grant had allowed the Union County district attorney to hire an investigator who, after a few years of probing a 43-year-old cold case involving the killing of a 21-year-old woman, finally developed some leads. When the money was cut, the investigation stopped.

Funding cliffs

The funding cuts couldn’t have come at a worse time. States and local jurisdictions were already facing a looming funding cliff, as billions of dollars provided by President Joe Biden’s COVID recovery plan run out on Dec. 31, 2026.

Many local governments had used that money to build violence prevention programs from the ground up: employing community-based mediators, launching youth employment initiatives and expanding behavioral health teams.

And now? A double funding cliff with the sudden cancellation of DOJ grants, paired with the expiration of COVID recovery money.

In Chicago, this cliff has already forced a 43% cut to the city’s domestic violence prevention budget for 2026 – even as its share of domestic-related homicides rose 13% over the previous year.

Larger and more targeted

Criminology research helps explain the particular risks of abrupt disinvestment. Emory sociology professor Robert Agnew’s General Strain Theory identifies a direct relationship between increased strain – economic pressure, blocked opportunities, the withdrawal of institutional support – and higher risks of criminal behavior.

Flashing red and blue lights are seen on a police car at night.
Researchers warn that cuts to violence prevention programs are likely to lead to increases in gun crime.
Jeremy Hogan/Getty Images

Historical precedent reinforces the concern. In 2013, federal across-the-board spending cuts eliminated services for more than 955,000 crime victims in a single year. The capacity of the FBI and related agencies was slashed by the equivalent of more than 1,000 agents.

Between 2014 and 2016, the violent crime rate climbed 7%.

The 2025 cuts are substantially larger and more targeted, and have devastated some groups.

Equal Justice USA, a national organization working to end the death penalty and reduce violence through community-based interventions, shut down in August 2025 after losing more than $3 million in DOJ grants.

Local programs suffered too. LifeBridge Health’s Center for Hope in Baltimore lost $1.2 million in funding to provide therapy for gun violence survivors.

“What shocked me the most … was what feels like the utter cruelty of it,” said Adam Rosenberg, who runs the center, referring to the cancellation of the funds.

As of April 2026, the DOJ has not paid out $200 million in approved grants to assist victims of domestic violence, sexual assault and human trafficking.

This comes after the department last year allowed more than 100 grants for human trafficking survivors to expire, affecting more than 5,000 victims, despite Congress allocating $88 million for these services.

Researchers at the University of Pennsylvania warn that cuts to violence prevention programs are likely to lead to increases in gun crime.

What happens next

The initiatives now losing funding are the ones that helped drive crime down in many American cities.

Community members trained in conflict mediation help defuse tensions before they turn lethal. Youth programs provide alternatives to street economies. Forensic labs process the evidence that solves cases. Reentry programs keep people from cycling back through the system. With each serving a distinct function, together they form the infrastructure of public safety.

As funding for crime prevention from two main sources runs out, whether progress continues depends on what happens next.

The Conversation

Andrea Hagan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. US violent crime is at its lowest in more than a century – but the funding that helped reduce it is disappearing – https://theconversation.com/us-violent-crime-is-at-its-lowest-in-more-than-a-century-but-the-funding-that-helped-reduce-it-is-disappearing-276834