Alaska’s near-record landslide tsunami sent a wave 1,580 feet up the fjord walls – and left clues for building a warning system

Source: The Conversation – USA (2) – By Michael E. West, Director of the Alaska Earthquake Center and State Seismologist, University of Alaska Fairbanks

The Tracy Arm landslide sent a tsunami wave far up the opposite side of the fjord near South Sawyer Glacier. John Lyons/U.S. Geological Survey

On the evening of Aug. 9, 2025, passengers on the Hanse Explorer finished taking selfies and videos of the South Sawyer Glacier, and the ship headed back down the fjord. Twelve hours later, a section of the adjacent mountain unexpectedly collapsed into the fjord, triggering the second-highest tsunami runup in recorded history.

We conduct research on earthquakes and tsunamis at the Alaska Earthquake Center, and one of us serves as Alaska state seismologist. In a new study with colleagues, we detail how that landslide sent water and debris 1,580 feet (481 meters) up the other side of the fjord – higher than the top floor of the Taipei 101 skyscraper – and then continued down Tracy Arm. The force of the water stripped the fjord’s walls down to bare rock.

An illustration compares the height of the tsunami's reach to some of the world's tallest buildings
The Tracy Arm landslide generated a tsunami that sent a wave so high up the opposite fjord wall that it would have overtopped some of the world’s tallest buildings. Here’s how it compares to other large tsunamis around the world.
Steve Hicks/University College London

It was just after 5 o’clock in the morning on a dreary day, and fortunately, no ships were nearby. In the months after, some cruise lines started avoiding Tracy Arm. However, the conditions that led to this event are not at all unique to this fjord.

Landslides are common in the coastal mountains of Alaska where rapid uplift, caused by tectonic forces and long-term ice loss, converges with the erosive forces of precipitation and moving glaciers. But a curious pattern has emerged in recent years: Multiple major landslides have occurred precisely at the terminus of a retreating glacier.

Though the mechanics are still poorly understood, these mountains appear to become unstable when the ice disappears. When the landslide hits the water, the momentum of millions of tons of rock is transferred into tsunami waves.

Two illustrations of Tracy Arm and the glacier's extent over time.
Maps show how the glacier has retreated over the years, moving past the section of mountain that collapsed (outlined in white on the right) in the days prior to the slide. The map on the right shows the height the tsunami reached on the fjord walls.
Planet Labs

This same phenomenon is playing out from Alaska to Greenland and Norway, sometimes with deadly consequences. Across the Arctic, countries are trying to come to terms with this growing hazard. The options are not attractive: avoid vast swaths of coastline, or live with a poorly understood risk. We believe there is an obvious role for alert systems, but only if scientists have a better understanding of where and when landslides are likely to occur.

Signs that a landslide might be coming

The Tracy Arm landslide is a powerful example.

The landslide occurred in August, when warm ocean waters and heavier precipitation favor both glacier retreat and slope failure. The glacier below the landslide area had experienced rapid calving – large chunks of ice breaking off and falling into the water – and it had retreated more than a third of a mile in the two months prior. Heavy rain had been falling. Rain enters fractures in the mountain and pushes them closer to failure by increasing the water pressure in cracks.

Most provocative are the thousands of small seismic tremors that emanated from the area of the slide in the days prior to the mountainside collapsing.

We believe that this combination of signs would have been sufficient to issue progressive alerts to any ships in the vicinity and homes and businesses that could have been harmed by a tsunami at least a day prior to the failure – had a monitoring program existed.

Escalating alerts are used for everything from terrorism and nuclear plant safety to avalanches and volcanic unrest. They don’t remove the risk, but they do make it easier for people to safely coexist with hazards.

For example, though people are still killed in avalanches, alert systems have played an essential role in making winter backcountry travel safer for more people. The collapse at Tracy Arm demonstrates what could be possible for landslides.

What an alert system could look like

We believe that the combination of weather and rapid glacier retreat in early August 2025 was likely sufficient to issue an alert notifying people that the hazard may be temporarily elevated in a general area. On a yellow-orange-red scale, this would be a yellow alert.

In the hours prior to the landslide, the exponential increase in seismic events and telltale transition to what is known as seismic tremor – a continuous “hum” of seismic energy – were sufficient to communicate a time-sensitive warning for a specific region.

Seismic data from the closest monitoring station to the landslide, about 60 miles (100 kilometers) away, shows the “hum” of seismic energy increasing just ahead of the landslide, indicated by the tall yellow spike shortly after 5 a.m. Source: Alaska Earthquake Center.

These observations, recorded as a byproduct of regional earthquake monitoring, warranted an “orange” alert noting immediate concern. The signs were arguably sufficient to recommend keeping boats and ships out of the fjord.

Our research over the past few years has demonstrated that once a large landslide has started, it is possible to detect and measure the event within a couple of minutes. In this amount of time, seismic waves in the surrounding area can indicate the rough size of the landslide and whether it occurred near open water.

A monitoring program that could quickly communicate this would be able to issue a red alert, signaling an event in progress.

The National Oceanic and Atmospheric Administration’s tsunami warning program has spent decades fine-tuning rapid message dissemination. A warning system would have offered little help for ships in the immediate vicinity, but it could have provided perhaps 10 minutes of warning for those who rode out the harrowing tsunami farther away.

An animation showing the tsunami’s reach up the fjord walls after the landslide, as well as the large cresting wave as it heads down Tracy Arm. Credit: Shugar et al., 2026.

There is no landslide monitoring system operating yet at this scale in the U.S. Building one will require cooperation across state and federal agencies, and strengthened monitoring and communication networks. Even then, it will not be fail-proof.

Understanding risk, not removing it

Alert systems do not remove the risk entirely, but they are a better option than no warning at all. Over time, they also build awareness as communities and visitors get used to thinking about these hazards.

Many of the most alluring places on Earth come with significant hazards. Arctic fjords are among them. The same processes that create this hazard – glacier retreat, steep terrain, dynamic geology – are also what make these landscapes so compelling. The mix of glaciers, ice-choked waters and steep mountains is exactly what draws people to these places. People will continue to visit and experience them.

The last view of Tracy Arm, taken from the Hanse Explorer motoring away from the South Sawyer glacier, before a landslide from a mountain just out of view on the left crashed into the fjord. The landslide generated a tsunami that sent a wave nearly 1,600 feet (about 490 meters) up the mountain on the right.

The question is not whether these places should be avoided altogether, but how to help people make more informed decisions. We believe that stronger geophysical and meteorological monitoring, coupled with new research and communication channels, is the first step.

On Aug. 9, visitors unknowingly passed through a landscape on the cusp of failure. An alert system might have given tour companies and people in the area the information they needed to make more informed choices and avoid being caught by surprise.

The Conversation

Michael West is part of a cooperative effort between the Alaska Earthquake Center, the Alaska Division of Geological and Geophysical Surveys, and the U.S. Geological Survey’s Landslide Hazards Program to improve the understanding of large deep-seated landslides in Alaska. This effort receives financial support from the USGS.

Ezgi Karasozen is part of a cooperative effort between the Alaska Earthquake Center, the Alaska Division of Geological and Geophysical Surveys, and the U.S. Geological Survey’s Landslide Hazards Program to improve the understanding of large deep-seated landslides in Alaska. This effort receives financial support from the USGS.

ref. Alaska’s near-record landslide tsunami sent a wave 1,580 feet up the fjord walls – and left clues for building a warning system – https://theconversation.com/alaskas-near-record-landslide-tsunami-sent-a-wave-1-580-feet-up-the-fjord-walls-and-left-clues-for-building-a-warning-system-282017

Using diesel generators to power the AI revolution would kill hundreds of Americans a year

Source: The Conversation – USA (2) – By Peter Adams, Professor of Civil and Environmental Engineering, Engineering and Public Policy, Carnegie Mellon University

Diesel generators sit outside a data center in Ashburn, Va. Amanda Andrade-Rhoades for The Washington Post via Getty Images

With U.S. electricity demand starting to rise quickly and expected to continue rising, largely because of the power needed for data centers that process artificial intelligence, people are looking for almost any potential solution.

And people are warning that the full projected demand may never develop, which would leave massive investments in power plants stranded, raising Americans’ electricity rates even more.

U.S. Secretary of Energy Chris Wright is among those who have been promoting what might seem to be an attractive idea: “We have 35 gigawatts of backup generators that are sitting there,” he told an audience of natural gas industry leaders in December 2025. He was referring to diesel-fired engines at hospitals, office complexes, corporate campuses and even data centers to provide electricity if the grid goes down.

That amount of power would be a significant step toward meeting the nation’s expected energy needs, without needing new long-term investments in power plants or transmission lines. But it’s also vital to know, as Wright went on to note, that “emissions rules or whatever” mean those generators can’t just be turned on and left running when there’s not a power outage or other emergency.

As an environmental engineer who studies air pollution from the energy system, I believe this proposal is concerning. Those emissions rules are in place because diesel-powered generators are among the dirtiest sources of energy, emitting fine particulate matter and related chemicals. Fine particulate matter is a pollutant whose total emissions from all sources are estimated to cause about 100,000 premature deaths every year in the U.S. In fact, emissions regulations on backup generators are less stringent than those for other power sources precisely because the generators are intended to run only in emergency situations.

If Wright’s idea took hold, diesel fumes would pour into the nation’s air, often near major metropolitan areas that already have air pollution problems. To see more closely what would happen, John Allen, a research assistant at Carnegie Mellon University, and I projected the effects on public health and air quality of running backup diesel generators at data centers.

Simulating the emissions of diesel generators

Comprehensive data about where data centers are located, and how many diesel generators each has on site, is hard to nail down. Nobody has yet made a detailed proposal for which generators might be switched on, or for how long, so we did an exploratory analysis.

We started with an online database of locations of data centers. We also found documentation suggesting there is at least 35 gigawatts of diesel-powered generating capacity at data centers across the U.S., so we allocated that amount, which Wright had mentioned, proportionally to each data center’s size.

We looked at a scenario where these generators ran continuously throughout the year, generating 310 terawatt-hours of electricity. The generators might be used less – Wright himself talked about running them for only “a few hours per year.” But once they’re allowed to be turned on for regular power generation, people might get used to having that electricity available.
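The scenario above can be sketched with back-of-envelope arithmetic. This is illustrative only: the site names and capacities below are made up, whereas the study allocated the 35 gigawatts across real data centers by size.

```python
# Back-of-envelope sketch of the scenario described above (illustrative only;
# the hypothetical site names and megawatt sizes are not from the study).
data_centers_mw = {"site_a": 500, "site_b": 300, "site_c": 200}

# Allocate the 35 GW (35,000 MW) of backup diesel capacity proportionally
# to each data center's size, as the study describes.
total_mw = sum(data_centers_mw.values())
diesel_mw = {name: 35_000 * mw / total_mw for name, mw in data_centers_mw.items()}

# If all 35 GW ran continuously for a year:
energy_twh = 35 * 365 * 24 / 1_000   # GW x hours -> GWh -> TWh
print(f"{energy_twh:.0f} TWh")       # about 307 TWh, close to the article's 310 TWh
```

The 310 terawatt-hour figure in the study is thus simply the full 35 gigawatts running around the clock for a year.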

We assumed that all diesel generators are relatively new and comply with the Environmental Protection Agency’s most recent and stringent standards, which took full effect in 2015.

We compared the air pollution created from the diesel generators with a scenario where that same amount of power – 310 terawatt-hours – came from the existing mix of power plants in regional electrical grids. This could happen if utility companies built more generation capacity of the same types that already exist in the region or built new transmission lines to deliver more power from elsewhere.

Because no air quality model is perfect, we used three different computer simulations, each of which has been published in scholarly research journals, to simulate what would happen to the diesel pollution and how people downwind would be affected.

Diesel is dirtier

We found that using diesel generators rather than grid electricity would cause significant amounts of fine particulate matter pollution that would be dangerous to people’s health. The exact results varied with each simulation and with a range of assumptions about emissions from diesel generators. But in general, we found that using backup diesel generators this way would cause about 500 more premature deaths per year in the U.S. compared with getting the same electricity from the central grid.

In a scenario where diesel generators were somewhat dirtier than the most recent standards, one air quality model had more than 800 additional people dying prematurely each year nationwide.

In the counties that would be hardest hit by the diesel generators’ pollution, the concentrations of fine particulate matter would increase by 0.25 to 2 micrograms per cubic meter of air, depending on the location and other assumptions for our calculations. This might not sound like a lot, but most urban areas in the U.S. already have fine particulate air pollution that’s close to the EPA limit of 9 micrograms per cubic meter. Adding that much more pollution risks tipping those communities beyond federal standards.

The Clean Air Act requires states to keep emissions within federal standards, so adding pollution from backup generators would require offsetting cuts elsewhere, such as from power plants and transportation.

Smoke billows from the top of a building labeled 'Sheraton.'
A diesel generator malfunction on a hotel roof in Denver in 2015 sent black smoke into the sky. Firefighters determined there was no fire.
Andy Cross/The Denver Post via Getty Images

Inspection and maintenance challenges

The results concern us, but reality might be even worse. We assumed that diesel generators would meet the most recent Tier 4 EPA emissions standards, but real-world generators may be older or exempt for other reasons.

For all our simulations, we assumed inspections and maintenance would keep emissions controls functioning properly at all generators. Modern diesel particulate filters are effective at reducing emissions, though not eliminating them entirely. When those filters fail, emissions skyrocket. Monitoring and maintenance at all of the generators, if they were running continuously, would be a logistical nightmare for regulators and the owners of the generators, and likely expensive as well.

Historically, centralized power plants that have thorough on-site monitoring are the most likely to have emissions control equipment running correctly to reduce emissions verifiably. Shifting to smaller generator units in a wide range of locations creates more potential points of failure and makes it harder to figure out that something has gone wrong, and where.

In our analysis, we compared backup diesel generators with the current electrical grid, where 60% of generation is still from fossil fuels. Increasing generation from renewable energy sources, such as solar panels and wind power, could help meet the rising demand for power without the additional emissions of dangerous air pollution.

The Conversation

Peter Adams has received research funding from various federal organizations, including EPA, NASA, NSF, and DOE as well as private philanthropic organizations.

ref. Using diesel generators to power the AI revolution would kill hundreds of Americans a year – https://theconversation.com/using-diesel-generators-to-power-the-ai-revolution-would-kill-hundreds-of-americans-a-year-280892

Fire is transforming the US West’s public lands – research shows overlooked cost to recreation

Source: The Conversation – USA (2) – By Kyle Manley, Postdoctoral research fellow, University of Colorado Boulder

Large-scale wildfires seem to turn visitors away, while prescribed burning may have the opposite effect. Helen H. Richardson/MediaNews Group/The Denver Post via Getty Images

Colorado’s two largest fires on record, the Cameron Peak and East Troublesome fires, burned hundreds of thousands of acres across some of the state’s most visited landscapes in 2020.

The fires scorched trails, campgrounds and beloved ecosystems in and around Rocky Mountain National Park and the Arapaho and Roosevelt national forests.

More than five years later, the scars remain stark: blackened hillsides, closed trails and bare slopes where forests once stood. According to our recent research, which has not yet been peer reviewed, the fires caused significant and lasting declines in visitation at the burned sites.

A sign says readers are entering a burned area with hazards.
The East Troublesome Fire burned nearly 200,000 acres. Years later, the area is still recovering.
Jim West/UCG/Universal Images Group via Getty Images

Even after the 2020 fires, Rocky Mountain National Park attracted 4.2 million visitors in 2024, generating US$862 million in economic output in local gateway communities such as Estes Park and Grand Lake. Rocky Mountain National Park is a significant contributor to the nearly 1 billion annual visits and $700 billion in spending that public lands generate nationwide as outdoor recreation continues to grow. It also supports a variety of important social values beyond the economy, including mental health and well-being, cultural and spiritual connection, and the sense of place that binds people to landscapes.

But these landscapes are changing fast. Wildfires are affecting our public lands at an accelerating scale and increasing intensity. Yet how fire affects recreation has remained poorly understood.

That’s the question I set out to answer with an interdisciplinary team of researchers. As a scientist who studies the benefits nature provides to people and how those benefits are affected by climate change, I wanted to know whether fire is eroding one of the most recognized and valued benefits of nature: recreation.

Tracking visitation across burned landscapes

Our first challenge was gathering data about visits to these outdoor areas.

A handful of monitored public lands track visitor counts, but those counts can tell us only so much about how fires affect recreation. Wildfires often cross boundaries, for example from a national park into a national forest, and span dispersed remote areas where no one is monitoring visitation.

There is another source of data, however: Every time someone logs a hike on AllTrails, posts a nature photo to Flickr, reports a bird sighting on eBird or simply carries a phone into the backcountry, they leave a precise digital trace of where and when they spent time outdoors. We trained a visitation model on the on-site counts that do exist at monitored sites, using millions of these digital traces, alongside other recreation drivers such as weather, land cover and site characteristics, as predictors.

Across Colorado and California, this approach let us track visitation in burned areas across hundreds of wildfires and prescribed burns for years before and after each fire, even in the remote, unmonitored landscapes where most fires burn. But changes in visitation can have many causes, including weather, broader recreation trends, even pandemic effects. So we statistically paired each burned site with a very similar unburned site elsewhere on public lands. This let us measure not just what happened after each fire, but also what we could expect would have happened without it. The gap between those two is how fire actually affected recreation.
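The matched-control logic described above can be sketched as a simple difference calculation. This is a toy illustration with invented visit counts, not the study's actual statistical model:

```python
# Toy illustration of the matched-control comparison (invented numbers).
# "burned": observed annual visits at a burned site, before and after the fire.
# "control": a statistically similar unburned site, standing in for what would
# have happened at the burned site without the fire (weather, trends, pandemic).
burned  = {"before": 100_000, "after": 80_000}
control = {"before": 100_000, "after": 95_000}

observed_change = burned["after"] - burned["before"]     # -20,000 visits
expected_change = control["after"] - control["before"]   # -5,000 visits

# The gap between the two is the change attributable to the fire itself.
fire_effect = observed_change - expected_change          # -15,000 visits
pct_effect  = 100 * fire_effect / burned["before"]       # -15%
print(f"{pct_effect:.0f}%")
```

Subtracting the control site's change filters out declines that would have happened anyway, leaving only the fire's effect.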

We found that it’s not simply fire itself that drives people away, but a confluence of the type and severity of a fire, the ecosystem that burned and the social values connected to the fire-impacted landscape.

Wildfires that empty trails – and ones that don’t

In Colorado, the average wildfire reduced visitation to burned sites by 8% in the year of the fire. Those declines never recovered to prefire levels for the five-year postfire period we tracked.

As fires grew larger and burned more intensely, recreational losses sharpened. Visitation dropped 15% to 20% at sites burned at higher severity. These declines lasted years. Take the Cameron Peak Fire, for example. The Arapaho and Roosevelt national forests typically see about 8 million visits a year. Our model estimates that the area burned in the Cameron Peak Fire drew nearly 500,000 visits annually before the fire. Applying our 15% to 20% average declines estimated for moderate- to high-severity wildfires, that translates to roughly 70,000 to 100,000 fewer trips annually, losses our analysis finds persist for years.
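The Cameron Peak arithmetic works out as follows, using a round 500,000 as a stand-in for the article's "nearly 500,000" prefire visits:

```python
# The Cameron Peak trip-loss arithmetic, spelled out (500,000 is a rounded
# stand-in for the "nearly 500,000" annual visits estimated above).
prefire_visits = 500_000
low, high = 0.15, 0.20               # average declines at moderate to high severity

lost_low  = prefire_visits * low     # 75,000 trips
lost_high = prefire_visits * high    # 100,000 trips
# The article rounds this range to roughly 70,000 to 100,000 fewer trips per year.
```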

Two adults and two children gather together in front a selfie-stick with mountains behind them.
A family poses for a selfie in front of the Gore Range overlook in Rocky Mountain National Park in Colorado. The park saw 4.2 million visitors in 2024.
Helen H. Richardson/The Denver Post via Getty Images

But these postfire recreational losses were largely concentrated in forested landscapes. Wildfires that occurred in grasslands, such as the southeastern Colorado Cherry Canyon Fire in 2020, by contrast, seemed to barely register with visitors. Visitation at these grassland-dominated burn sites showed essentially no change. This pattern reveals something important. People’s recreational responses to fire are not just about the physical damage and accessibility impacts. They reflect the particular relationships people hold with different landscapes. Grasses recover within a season or two, and the wide-open vistas that draw people to those landscapes remain intact, even after a fire.

Forests are different. The towering canopies, shaded trails and old-growth character that people value may take decades or centuries to return, if they return at all in a changing climate.

In California, our analysis reveals how these human-nature relationships also vary across regions, with much sharper and more persistent losses than in Colorado. Californian wildfires reduced visitation by 18% in the first year on average, and high-severity forest fires produced losses of 33% that showed no recovery five years after the fire. California’s fires tend to be significantly larger, more severe and more concentrated in forested landscapes.

However, small fires in California actually increased visitation by 8%. This suggests that after years of megafires, a small burn may barely register. Californians have grown accustomed to a fire-shaped landscape, and a modest fire scar may not be enough to keep them off the trails.

Prescribed fire tells a different story

As wildfire intensifies, land managers are responding by expanding prescribed fire programs. They are intentionally setting lower-intensity fires to clear out the dead trees, dry brush and accumulated debris built up from over a century of fire suppression that can feed catastrophic wildfires.

Current prescribed fire planning tends to focus on reducing fire suppression costs and protecting properties, as well as managing ecosystems by reducing fuel loads and improving wildlife habitat. But managers are scaling up these programs without knowing how prescribed fire affects the recreationists who visit these landscapes, a gap our analysis sets out to fill.

A VOX video on how decades of stopping forest fires made them worse.

In Colorado, we found that on average prescribed fire actually increased visitation by about 8% in the year of the fire. This increase may reflect improved trail conditions, enhanced wildlife habitat that attracts birders and hunters, or positive public perceptions of proactive management.

In California, prescribed fire on average decreased visitation by about 3%. Crucially, in stark contrast to wildfire, impacts were short-lived, with visitation returning to prefire levels within three years in both states.

Beyond their direct effects on recreation, prescribed burns also reduce the likelihood of future extreme fires – the very fires that drive the largest and longest-lasting recreation declines.

Why this matters beyond fire

Some of the Colorado communities that are most dependent economically on recreation experienced the steepest visitation declines in the period we studied. These are towns such as Grand Lake, Durango and Gunnison, where shops, hotels, restaurants and seasonal workers rely on a steady flow of visitors, and where sales tax from those visitors funds the infrastructure and daily life of the community. Persistent declines in visitation threaten the long-term viability of these places.

The implications run beyond fire. Calls to incorporate less tangible benefits of nature, such as recreation, into climate impact assessments, extreme events research and conservation planning have grown recently. Turning those calls into action requires evidence that can help land managers make decisions. Our work provides some of that evidence for fire and a framework that can be used for other disturbances, such as floods and droughts. Without accounting for these less tangible values of nature, increasingly extreme climate impacts will keep eroding the experiences, livelihoods and connections that sustain the well-being of millions of Americans.

Read more of our stories about Colorado.

The Conversation

Kyle Manley receives funding from the CIRES Visiting Fellows Program, funded by NOAA cooperative agreement NA22OAR4320151.

ref. Fire is transforming the US West’s public lands – research shows overlooked cost to recreation – https://theconversation.com/fire-is-transforming-the-us-wests-public-lands-research-shows-overlooked-cost-to-recreation-279831

Sleep apnea compromises far more than a good night’s rest – 2 neuroscientists outline the risks and the need for better diagnosis

Source: The Conversation – USA (3) – By Erika Yamazaki, PhD candidate in Neuroscience, Northwestern University

Snoring can be − but isn’t always − a symptom of sleep apnea. PeopleImages/iStock via Getty Images

Annual medical checkups typically cover the basics: diet, exercise and mental state. Surprisingly, many primary care providers fail to ask about one of the fundamental contributors to well-being: sleep.

We are two neuroscientists who study sleep and memory. We have both experienced this omission with our own doctors, even though we represent different ages and genders.

When asked, almost everyone has complaints about their sleep, yet most people fail to prioritize sleep. But poor sleep shouldn’t be ignored.

One particularly problematic sleep disorder is sleep apnea, and it is not rare. The condition affects nearly 1 billion people worldwide, estimates suggest, and the number continues to grow. In October 2025, former basketball star Shaquille O’Neal was featured in an awareness campaign for sleep apnea. But much greater awareness is needed.

The most common type of sleep apnea, obstructive sleep apnea, is characterized by repeated blockage of breathing during sleep, often resulting in sleepiness during the day, headaches or snoring – or a combination of these – and in the long term, increased risk for cardiovascular diseases.

Patients may not fit the typical profile: The stereotypical person with sleep apnea is an older male trending toward obesity. Others may find that their sleep-related complaints are overlooked at wellness checks. These are missed opportunities for gathering critical health information that is important for diagnosis. Sleep apnea thus remains undiagnosed far too often in women and also in other groups.

Sleep apnea is not just about sleep

Sleep apnea is more than a sleep disorder. While it manifests when you are sleeping, with repeated partial or total pauses of breathing during sleep – termed hypopneas and apneas – its effects extend far beyond the night.

Repeated apneas and hypopneas tend to occur alongside reductions in oxygen levels in the brain and body. These episodes can happen more than 100 times per hour and on average last about 20 seconds. Despite brief awakenings that can occur after a person with sleep apnea stops breathing, by the morning they usually don’t remember ever pausing their breathing.

Reduced oxygen then leads to increases in blood pressure and heart rate, which stresses the cardiovascular system. Untreated sleep apnea can lead to a host of cardiovascular diseases, such as hypertension, heart failure and stroke. Sleep apnea is also associated with increased risk of dementia, as in Alzheimer’s disease and other neurodegenerative disorders.

Beyond health effects, the disorder is linked to reduced quality of life, a higher risk for motor vehicle accidents and increased medical costs for individuals, as well as for societies and governments.

Graphic illustration of obstructive sleep apnea with obstructed sleep on the left and an obstructed airway on the right.
Sleep apnea is characterized by breathing blockages during sleep.
Pikovit44/iStock via Getty Images Plus

A growing problem meets new solutions

The growing prevalence of obstructive sleep apnea reflects multiple factors. Greater awareness among medical professionals and accessible screening tools have helped.

At the same time, an increase in obesity rates and an aging global population have also contributed to the rise in cases diagnosed.

The treatment of sleep apnea has also advanced considerably over the past 20 years. The standard treatment for sleep apnea is continuous positive airway pressure, or CPAP, which prevents airway collapse with a stream of air through the mouth or nose.

However, people often report that CPAP is burdensome, and for some the therapy is intolerable. For those who dislike CPAP, implantable nerve stimulation devices can be effective. Other therapies include oral appliances to shift the jaw forward and open the airway, positional therapies to avoid back-sleeping, and myofunctional training to strengthen tongue and throat muscles.

Nevertheless, new treatment approaches are still needed. In late 2024, the U.S. Food and Drug Administration approved tirzepatide – the active ingredient in the GLP-1 drugs Mounjaro and Zepbound – for treating obstructive sleep apnea. The drug helps by lowering body weight, given that excess weight is associated with the disorder.

Both new and long-standing treatments for sleep apnea can be effective in reducing the detrimental health consequences. Yet these advances raise an important question: Who gets diagnosed and ultimately benefits from the treatments – and who doesn’t?

CPAP machine diagram with arrows pointing to air flow on a person wearing a mask in a bed.
CPAP is the most common treatment for sleep apnea, but many people find it intolerable.
VectorMine/iStock via Getty Images

Who gets diagnosed – and who gets missed

Despite the growing prevalence of sleep apnea, diagnosis and treatment do not occur equally across populations. Women with sleep apnea often experience headaches, insomnia and depression – symptoms that common screening tools for sleep apnea do not mention.

Hormonal changes throughout a woman’s life, different anatomy of the airway and differences in sensitivity to higher levels of carbon dioxide in the blood compared to men all suggest that more research and better tools are needed to improve healthcare for women with sleep apnea.

Many of the current diagnostic tools and treatment standards were developed based on studies in white populations.

Pulse oximetry on the finger detects decreases in blood oxygen, a key marker in sleep apnea screening and diagnosis. These finger oximeters are less sensitive in people with darker skin pigment, which likely leads to underestimates of severity.

At the same time, Medicaid beneficiaries in the U.S., who are disproportionately from racial minorities, are more likely to be denied long-term coverage for CPAP treatment, despite the finding that Black men have more severe sleep apnea than their white counterparts.

What you can do

Your probability of getting a referral to a specialist increases ninefold when you ask your primary care provider about sleep apnea. And there’s no need to be overly concerned about undergoing a sleep study in a hospital. Sleep studies can now be conducted at home to diagnose sleep apnea.

If you or your bed partner have any suspicions based on even a small subset of the possible symptoms of sleep apnea, bring it up with your healthcare provider. Mention any daytime symptoms, such as excessive sleepiness or headaches, and any nighttime symptoms, such as frequent urination, waking up short of breath, snoring or insomnia.

Starting the conversation may be the first step toward diagnosis and treatment – and to better health and well-being.

The Conversation

Ken Paller receives research funding from the US National Institutes of Health and the Tiny Blue Dot Foundation. He consults for and owns shares in NextSense, Inc.

Erika Yamazaki does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Sleep apnea compromises far more than a good night’s rest – 2 neuroscientists outline the risks and the need for better diagnosis – https://theconversation.com/sleep-apnea-compromises-far-more-than-a-good-nights-rest-2-neuroscientists-outline-the-risks-and-the-need-for-better-diagnosis-276732

Clinical trials that are actually marketing ploys targeting doctors – how seeding trials put profit over patients

Source: The Conversation – USA (3) – By Sukhun Kang, Assistant Professor of Technology Management, University of California, Santa Barbara

Marketing trials aren’t conducted for scientific knowledge or the benefit of patients. Ekin Kizilkaya/iStock via Getty Images Plus

Some clinical trials aren’t designed to answer scientific questions. They’re designed to market drugs. In our recently published research, my team and I analyzed over 34,000 industry-funded trials and found that hundreds of studies across seven medical fields were likely designed to promote a drug to physicians rather than to generate scientific data. For some fields, nearly 1% of clinical trials were for marketing purposes.

Known as seeding trials, these studies prioritize marketing over science while disguising their commercial purpose as legitimate research. Pharmaceutical companies use them to familiarize physicians with new products under the guise of data collection. Participants sign consent forms, believing they are contributing to medical knowledge.

In reality, patients are absorbing risks that serve corporate interests rather than resolving genuine uncertainty about the therapeutic potential of a drug.

The term seeding trial first entered the medical literature in 1994, when then-commissioner of the Food and Drug Administration David Kessler and his colleagues described such studies as attempts to entice doctors to prescribe new drugs through trials that appear to serve little scientific purpose.

Three decades later, the problem of seeding trials persists.

How seeding trials work

While the structure of a seeding trial looks similar to that of a legitimate clinical trial on the surface, the objectives are different.

In a typical clinical trial, researchers recruit patients across clinics and hospitals to test whether a treatment is safe and effective.

In contrast, the pharmaceutical company behind a seeding trial enrolls large numbers of physicians at many sites, each seeing only a few patients. The goal is exposure: getting doctors to prescribe the drug, not generating robust data. Doctors may be selected based on their prescribing volume rather than their research credentials.

In a legitimate trial, the number of study sites reflects the number of patients needed to answer a scientific question. In a seeding trial, the number of sites reflects the number of doctors the company wants to reach.

Doctor in white coat, stethoscope and tie gesturing to pill bottle, talking to patient
Seeding trials recruit doctors based on their prescribing volume.
Cameravit/iStock via Getty Images Plus

Seeding trials often target drugs already on the market and operate as Phase 4, or postmarketing, studies. These types of studies are typically conducted after a drug has been approved to monitor its long-term safety or effectiveness. This trial stage receives less regulatory scrutiny than trials for initial drug approval, and the aims of the study may have limited relevance to actual patient care. For example, a seeding trial might measure whether patients prefer the taste of a new formulation or how quickly a drug dissolves in the stomach, rather than whether it actually improves health outcomes.

Legitimate trials also have independent oversight, with committees of scientists and ethicists who monitor the study’s progress and can halt it if patients are being harmed.

In a seeding trial, this oversight is often minimal. The sponsor of the study – typically the pharmaceutical company funding the research – maintains heavy control over the trial’s design and conduct.

Cases that exposed seeding trials

Seeding trials had attracted little public attention until litigation in the 1990s forced open the internal files of two major pharmaceutical companies, revealing that studies presented as science had been designed as marketing campaigns.

The most notorious example is Merck’s ADVANTAGE trial for the painkiller Vioxx (rofecoxib), which was first approved in 1999. The company presented the study, which ran from 1999 to 2001, as scientific research, but internal documents revealed that its primary purpose was to encourage physicians to prescribe Vioxx to their patients.

Meanwhile, Merck was accused of downplaying the significant cardiovascular risks associated with the drug. The consequences were severe: Approximately 30,000 lawsuits and nearly $5 billion in compensation followed Vioxx’s withdrawal from the market.

Close-up of bottle of Vioxx, with round pills arranged around it
Merck downplayed Vioxx’s risk of heart attack and stroke.
AP Photo/Daniel Hulshizer

Parke-Davis’ STEPS trial for the painkiller Neurontin (gabapentin) – first approved in 1993 for epilepsy – followed a similar pattern of disguising marketing as research. Internal documents showed that the trial, which ran from 1996 to 1998, aimed to disseminate marketing messages through the medical literature and encourage clinicians to prescribe the drug off-label for conditions it was not approved for, such as neuropathic pain and bipolar disorder.

Unlike Vioxx, gabapentin was never withdrawn. The trial’s commercial legacy outlasted its scientific one.

These cases came to light only because litigation forced the release of internal company documents. Without that exposure, they would have remained indistinguishable from ordinary research.

How common are seeding trials?

My team and I study how pharmaceutical firms innovate and respond to regulations. To estimate the prevalence of seeding trials, we analyzed nearly 34,400 industry-funded Phase 3 and Phase 4 studies that posted results on ClinicalTrials.gov between 1998 and 2024. The trials covered seven therapeutic areas where researchers had previously documented seeding trials, including major depressive disorder, epilepsy, Type 2 diabetes and rheumatoid arthritis.

We screened these trials for criteria that prior research has identified as hallmarks of a seeded trial, such as low patient-to-site ratios and limited independent oversight.
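As an illustration only, a screen like the one described could be sketched as a simple filter. The record fields and the patients-per-site threshold below are hypothetical assumptions for the sketch, not the study's actual criteria:

```python
# Hypothetical sketch of a seeding-trial screen. The threshold and
# record fields are illustrative assumptions, not the study's criteria.
def flag_possible_seeding(trial, max_patients_per_site=5):
    """Return True when a trial record matches common seeding hallmarks:
    a low patient-to-site ratio, no independent oversight and a
    postmarketing (Phase 4) design."""
    ratio = trial["enrollment"] / max(trial["num_sites"], 1)
    low_ratio = ratio <= max_patients_per_site
    no_oversight = not trial.get("independent_monitoring", False)
    postmarketing = trial.get("phase") == 4
    return low_ratio and no_oversight and postmarketing

# Example: 300 patients spread across 150 sites averages 2 per site.
suspect = {"enrollment": 300, "num_sites": 150,
           "independent_monitoring": False, "phase": 4}
```

As the researchers caution, such markers are signals warranting closer scrutiny, not proof of marketing intent.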

Ultimately, we identified 204 trials – 0.59% – that had characteristics consistent with marketing-driven study design. The prevalence of these probable seeding trials in different disciplines ranged from 0.15% in osteoarthritis to 0.98% in rheumatoid arthritis.

These figures might understate the true scope of marketing-driven research. The criteria we used capture only the most identifiable cases of studies driven by marketing purposes. Definitively identifying seeding trials requires access to internal sponsor documents revealing the intent of the study, and those documents surface only through litigation or whistleblowers.

Many trials occupy an ambiguous middle ground, generating useful data while simultaneously serving promotional objectives. Without systematic surveillance, the full extent of marketing-driven studies remains unknown.

Close-up of person holding an orange pill bottle
Pharmaceutical companies have a vested interest in getting their drug products to doctors and patients.
Catherine McQueen/Moment via Getty Images

The criteria to identify seeding trials also require careful interpretation. A low patient-to-site ratio, for instance, can reflect the practical difficulties of enrolling patients in studies of drugs already on the market, such as trials testing new drug combinations or new uses for an existing treatment. These markers are best understood as signals of possible marketing intent warranting closer scrutiny, not proof of marketing intent.

Whether the prevalence of seeding trials has shifted with the expansion of transparency requirements over the past decade cannot be determined from existing registry data.

What can be done

Seeding trials may be uncommon, but they are not accidental. They reflect structural incentives in a system where the companies that fund research also stand to gain from its results. Strengthening transparency in clinical trial registration, funding disclosure and oversight would help ensure that clinical research serves patients first.

Along with other researchers, we’ve proposed reforms that cluster around two areas. The first is standardized reporting that discloses trial funding, investigator payments, enrollment criteria and the rationale for site selection. The second is independent oversight, such as committees funded through pooled industry levies, which are fees collected from pharmaceutical companies to finance independent monitoring. Random audits with publicly available results are one form of such oversight.

Some infrastructure for tracking financial relationships between industry and physicians is already in place. In the U.S., the Open Payments database allows public tracking of industry payments to physicians. But regulatory variability across countries creates openings for companies to conduct marketing-driven trials in jurisdictions with weaker oversight, particularly in low- and middle-income countries.

Clinicians can protect themselves and their patients by screening for a set of red flags before agreeing to participate in or cite a trial in their research. These include unusually low patient-to-site ratios, selecting investigators based on prescribing volume, sponsor-dominated oversight and study endpoints of limited clinical relevance. Consent forms are among the few documents patients see before enrolling, and clearer disclosure of the commercial and scientific purpose of a study is among the reforms we have called for.

For patients, clinicians and regulators alike, the question to ask of any trial is the same: Whom does it really serve?

The Conversation

Sukhun Kang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Clinical trials that are actually marketing ploys targeting doctors – how seeding trials put profit over patients – https://theconversation.com/clinical-trials-that-are-actually-marketing-ploys-targeting-doctors-how-seeding-trials-put-profit-over-patients-280398

So your new ‘co-worker’ is an AI agent – here’s how to make the best of your human-machine relationship 

Source: The Conversation – USA – By Nigel Melville, Associate Professor of Information Systems, University of Michigan

Meet your new colleague. AndreyPopov/iStock via Getty Images

Judging by a slew of recent corporate announcements, your next “co-worker” might be an artificial intelligence agent – doing the work of an assistant, job scheduler, morning debriefer, learning coach and more.

JPMorgan Chase, the largest U.S. bank, describes a clear vision for a new world of omnipresent AI agents: “Every employee will have their own personalized AI assistant; every process is powered by AI agents, and every client experience has an AI concierge.”

In brick-and-mortar retail, Walmart is already implementing its vision for agents that support customers, in-store employees and other business areas, with supervisor agents assigning tasks to subagents much as managers oversee employees.

What these and many other large organizations realize is that agents don’t just answer questions, like an AI-powered search engine or simple chatbot. They complete real work by planning tasks, taking actions and checking results to achieve a goal.

But there’s a problem. While companies in industries ranging from finance and tech to logistics and legal are rapidly embracing the promise of AI agents, the flesh-and-blood workers the agents are meant to assist – and sometimes replace – are struggling to adapt, hurting morale and productivity in the process.

The result is a growing climate of fear about AI job insecurity. FOBO – fear of becoming obsolete – is now a thing. A recent survey by consultancy KPMG found that 52% of workers report they are concerned that AI could eventually take their jobs. And some are fighting back. In another survey, nearly one-third said they are actively sabotaging their company’s AI strategy.

To make matters worse, some of these AI agents are going rogue, deleting data or executing other unintended actions.

My research on AI and agent capabilities, value and risk, as well as emerging studies of the cognitive implications of AI, the future of work and the role of AI in workplace inequality, suggest two key lessons for anyone navigating this new AI agent reality:

First, learn how the agents you’re working with operate, including what they do well, where they fall short and how to catch mistakes.

Second, lean into your fundamentally human strengths. These are things agents can’t replicate. Doing so can also help you sustain your own health and well-being.

Rise of the AI agents

AI agents began entering the workforce in 2025 – mainly in tech, finance and customer service – as the next stage of the generative AI revolution. But in 2026, the AI-powered automators are increasingly being deployed in other areas, such as legal and compliance, supply chain management, research and development, healthcare services and retail.

One example is global transportation giant FedEx, which is planning an entire AI agent workforce for its logistics network. The company plans to create “manager agents,” “audit agents” and “worker agents” to create a trail of accountability for their actions, according to The Wall Street Journal.

The theme of agents working together is echoed by North American food service company Gordon Food Service, which is using cross-team agents to reshape its product sourcing strategy.

As companies rapidly adopt the latest AI systems, agents are taking on increasingly diverse roles in the workplace that leverage their ability to do autonomous work.

It’s all driven by economics, with 88% of early corporate adopters reporting a return on investment on at least one use of an AI agent, according to a Google survey of senior business leaders.

Retail giant Amazon says customers who engage with its Rufus AI agent while shopping are 60% more likely to make a purchase compared with those who don’t get help from Rufus. Overall, Rufus is expected to generate over $10 billion in additional annual sales for Amazon, compared with a baseline without Rufus.

Anticipating gains from agents, global consultancy McKinsey already has 25,000 agents doing various tasks. It plans to have as many AI agents as human workers by 2027.

And new model releases, such as Anthropic’s Mythos, will expand what is possible and likely accelerate agent deployment.


Agents still can’t do it alone

A 2023 study I conducted found that AI has the capacity to simulate human capabilities such as cognition, decision-making, creativity and collaboration with people and other agents. The simulation is imperfect, however, so agents need support.

For example, agents are resourceful and relentless. They try repeatedly until they get results, without the loss of motivation humans sometimes experience. But they can also behave unpredictably, even taking harmful actions, such as deleting emails or conducting a smear campaign. And research illustrates that agents can be easily tricked into bad behavior, such as overcorrecting when told not to do something, being swayed by appeals to urgency, or falling for other simple manipulation tricks.

Agents can be quirky too, using odd emojis in formal business writing or responding cynically when you just want the facts. And unlike humans, agents lack emotion, self-awareness or intent. When they fail, such as by pursuing misaligned goals, it’s not personal any more than an espresso machine can break out of spite.

In short, agents can act in unpredictable ways, and it’s difficult to know when that will happen. Managing this uncertainty is a new task for employees, and it comes at a time when many employers have heightened expectations of productivity gains from adopting AI agents.

The bottom line: If you understand how your agent behaves, you’ll be more productive working with it, better positioned to avoid risks and more valued yourself.

Get to know your agent

So what do you do if your boss suddenly tells you you’ll be working with an AI agent from now on?

Just like you would with a new human colleague, the first thing you should do is get to know it. In essence, you need to learn how to collaborate with agents effectively, how to evaluate their performance, what makes them tick and associated ethical implications.

And then jump right in. Give the agent a task and observe how it responds. Try different approaches and pay attention to its output quality, behavior and style. Focus on three essentials:

  • Clarity of intent: Define exactly what to do in clear instructions, what information your agent needs, the role your agent is performing, limitations on behavior, and what success looks like for the task.

  • Evaluation of results: Judge the agent’s output against clear, pre-agreed criteria.

  • Guidance during the task: Answer the agent’s questions as they arise.

Overall, this will help you effectively use and critically evaluate AI agents.
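To make the first two essentials concrete, here is a hypothetical sketch that packages a task brief (clarity of intent) and checks output against explicit criteria; it assumes no real agent product or API, and all names are illustrative:

```python
# Hypothetical sketch of the essentials; no real agent API is assumed.
def make_brief(role, instructions, needed_inputs, limits, success_criteria):
    """Clarity of intent: spell out the role, the task, required inputs,
    behavioral limits and what success looks like."""
    return {
        "role": role,
        "instructions": instructions,
        "inputs": needed_inputs,
        "limits": limits,
        "success_criteria": success_criteria,
    }

def meets_criteria(agent_output, success_criteria):
    """Evaluation of results: judge output against explicit criteria."""
    return all(check(agent_output) for check in success_criteria)

brief = make_brief(
    role="meeting scheduler",
    instructions="Find a 30-minute slot next week for the design review.",
    needed_inputs=["team calendars"],
    limits=["no meetings before 9 a.m."],
    success_criteria=[
        lambda out: "30-minute" in out,   # booked the right length
        lambda out: "8 a.m." not in out,  # respected the limit
    ],
)
```

The third essential, guiding the agent mid-task, stays interactive: answering its questions as they arise can’t be scripted in advance.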

Leaning into your humanness

Your agent deskmate still needs you – and your humanness – to be effective.

There’s a growing understanding that in a world of ubiquitous agents, certain skills, such as analyzing information, are becoming less important, while other skills, such as interpersonal and communication skills, are increasing in importance. Think about it: No AI agent can read a room like a human can and pick up on the vibe.

But making the switch will not be easy. It will require psychological work.

Researcher Tomas Chamorro-Premuzic suggests connecting with others as only humans can do, and unlocking your curiosity, while your agent handles the drudgery. In other words, focus on qualities that AI agents don’t have, such as the ability to pick up on nonverbal cues, deliver a pitch with a human touch, manage conflict and build relationships. These skills are the glue that holds human teams together.

AI agents are likely to become a significant part of the workplace. But how and how fast that will happen is unknown.

To make the best of it, learn how to work effectively with agents and embrace your own humanness. This way, you’ll be in a better position to make informed decisions about how to interact with humans and agents alike.

The Conversation

Nigel Melville consults for organizations in the area of AI, owns shares in several tech companies that provide AI services, and has received grants from IT leadership organizations to conduct research on AI in organizations. He does not own shares in any companies that focus on agentic AI.

ref. So your new ‘co-worker’ is an AI agent – here’s how to make the best of your human-machine relationship  – https://theconversation.com/so-your-new-co-worker-is-an-ai-agent-heres-how-to-make-the-best-of-your-human-machine-relationship-276011

US violent crime is at its lowest in more than a century – but the funding that helped reduce it is disappearing

Source: The Conversation – USA – By Andrea Hagan, Instructor of Criminology & Justice, Loyola University New Orleans

Homicides across 35 major American cities fell 21% in 2025. South_agency/Getty Images

The United States is experiencing one of the steepest declines in violent crime in modern history, including a murder rate at its lowest point in more than a century.

Homicides across 35 major American cities fell 21% in 2025, amounting to 922 fewer people killed. Robberies dropped 23%. Gun assaults declined 22%. Carjackings plummeted 43%.

Yet the Trump administration has yanked hundreds of millions of dollars from the programs that helped make those numbers possible.

As a scholar focused on how policy decisions and structural conditions shape crime in marginalized communities, I see a pattern forming that could put these historic gains at serious risk.

‘Wasteful grants’

In April 2025, the Department of Justice terminated 365 previously awarded grants. About US$500 million in promised funds evaporated, affecting more than 550 organizations across 48 states.

The cuts stretched across the public safety landscape: community violence intervention, victim services, law enforcement training, juvenile justice, offender reentry and criminal justice research.

Then-Attorney General Pam Bondi described the cancellations as eliminating “wasteful grants.” The White House argued that the grant programs had been “funding DEI and cultural Marxism” rather than helping to keep Americans safe.

The DOJ’s fiscal year 2026 budget proposal reduces the pool of funds for public safety and justice programs by an additional $850 million – about a 15% decrease from the prior year.

A prison cell is seen with it door partly open.
A law supporting ex-inmates with temporary housing and healthcare lost $40 million in funding.
Edwin Remsberg/Getty Images

Bipartisan programs

On the ground, the effects of the cancellations were immediate.

Initiatives implementing a federal law to support ex-inmates with temporary housing, job training and healthcare lost $40 million in funding, according to the Brennan Center for Justice at New York University.

Many of the terminated programs had deep bipartisan roots.

Project Safe Neighborhoods, a crime-reduction initiative launched in 2001 under President George W. Bush, lost its training funds, the Council on Criminal Justice found. Also axed was an anti-terrorism program that had trained more than 430,000 state and local law enforcement officers and other partners since 1996.

More modest programs were targeted as well.

In rural Oregon, a DOJ grant had allowed the Union County district attorney to hire an investigator who, after a few years of probing a 43-year-old cold case involving the killing of a 21-year-old woman, finally developed some leads. When the money was cut, the investigation stopped.

Funding cliffs

The funding cuts couldn’t have come at a worse time. States and local jurisdictions were already facing looming cuts, as billions of dollars provided by President Joe Biden’s COVID recovery plan run out on Dec. 31, 2026.

Many local governments had used that money to build violence prevention programs from the ground up: employing community-based mediators, launching youth employment initiatives and expanding behavioral health teams.

And now? A double funding cliff with the sudden cancellation of DOJ grants, paired with the expiration of COVID recovery money.

In Chicago, this cliff has already forced a 43% cut to the city’s domestic violence prevention budget for 2026 – even as its share of domestic-related homicides rose 13% over the previous year.

Larger and more targeted

Criminology research helps explain the particular risks of abrupt disinvestment. Emory sociology professor Robert Agnew’s General Strain Theory identifies a direct relationship between increased strain – economic pressure, blocked opportunities, the withdrawal of institutional support – and higher risks of criminal behavior.

Flashing red and blue lights are seen on a police car at night.
Researchers warn that cuts to violence prevention programs are likely to lead to increases in gun crime.
Jeremy Hogan/Getty Images

Historical precedent reinforces the concern. In 2013, federal across-the-board spending cuts eliminated services for more than 955,000 crime victims in a single year. The capacity of the FBI and related agencies was slashed by the equivalent of more than 1,000 agents.

Between 2014 and 2016, the violent crime rate climbed 7%.

The 2025 cuts are substantially larger and more targeted, and have devastated some groups.

Equal Justice USA, a national organization working to end the death penalty and reduce violence through community-based interventions, shut down in August 2025 after losing more than $3 million in DOJ grants.

Local programs were hit as well: Baltimore’s LifeBridge Health Center for Hope lost $1.2 million in funding for therapy for gun violence survivors.

“What shocked me the most … was what feels like the utter cruelty of it,” said Adam Rosenberg, who runs the center, referring to the cancellation of the funds.

As of April 2026, the DOJ has not paid out $200 million in approved grants to assist victims of domestic violence, sexual assault and human trafficking.

This comes after the department last year allowed more than 100 grants for human trafficking survivors to expire, affecting more than 5,000 victims, despite Congress allocating $88 million for these services.

Researchers at the University of Pennsylvania warn that cuts to violence prevention programs are likely to lead to increases in gun crime.

What happens next

The initiatives now losing funding are the ones that helped drive crime down in many American cities.

Community members trained in conflict mediation help extinguish tensions before they turn lethal. Youth programs provide alternatives to street economies. Forensic labs process the evidence that solves cases. Reentry programs keep people from cycling back through the system. With each serving a distinct function, together they form the infrastructure of public safety.

As funding for crime prevention from two main sources runs out, whether progress continues depends on what happens next.

The Conversation

Andrea Hagan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. US violent crime is at its lowest in more than a century – but the funding that helped reduce it is disappearing – https://theconversation.com/us-violent-crime-is-at-its-lowest-in-more-than-a-century-but-the-funding-that-helped-reduce-it-is-disappearing-276834

After the execution of James G. Broadnax in Texas, questions persist over use of rap lyrics as evidence

Source: The Conversation – USA (2) – By A.D. Carson, Associate Professor of Hip-Hop, University of Virginia

Despite a flurry of last-minute appeals and amicus briefs, James G. Broadnax was executed on April 30, 2026. Partisan Defense League/X

After languishing on death row in Texas for nearly two decades, James G. Broadnax was executed on April 30, 2026.

In 2009, a nearly all-white jury convicted him of robbery and double murder. Broadnax’s lawyers believed the initial rejection of all Black candidates from the jury pool was unconstitutional. They also believed prosecutors’ use of 40 pages of Broadnax’s handwritten lyrics was unconstitutional; the prosecution characterized the lyrics as “gangsta rap” that doubled as a “self-admission” of Broadnax’s criminal “mentality.”

The lyrics shown to the jury were not introduced during the phase of the trial that determined Broadnax’s guilt for robbery and murder. They were presented only during the sentencing phase, when the jury considered whether he should receive the death penalty.

In 2025, I published “Being Dope: Hip Hop and Theory through Mixtape Memoir.” The book uses prose and lyrics to explore common misconceptions about rap and rappers. Along with the way lyrics continue to be used to demonize people inside and outside the courtroom – in ways that no other art form has to contend with – I highlight how “rap” is often used as shorthand for violence, drugs and criminality.

When rap lyrics become death sentences

In 2019, Erik Nielson, a scholar whose work focuses on the use of rap music as evidence in criminal trials, co-wrote “Rap on Trial” with legal scholar Andrea Dennis. In the book, Nielson and Dennis highlight a pattern of prosecutors treating rap lyrics as confessions or evidence of motive, even though they’re typically fictional or exaggerated. Meanwhile, even though other art forms routinely involve characters, lyrics or imagery depicting violence, they’re rarely used as evidence of guilt in the courtroom.

Dennis and Nielson, who’s a signatory on one of the amicus briefs that was filed in support of Broadnax, maintain a database of over 800 cases in which lyrics have been used as evidence.

It includes some well-known cases, but most of the entries in the database involve people who are not well known as rappers.

For instance, during the closing arguments in the trial of Dominique Green, Texas prosecutors read aloud graphic lyrics from a Geto Boys song. Green hadn’t written the lyrics, and there was no clear connection between him and the song. Critics like Nielson say the move was intended to shape how the jury perceived Green, who was sentenced to death in 1994 and executed in 2004.

Broadnax met a similar fate. While high on PCP and marijuana, he’d initially confessed to the killings of Stephen Swan and Matthew Butler in the Dallas suburbs in 2008. He later retracted his confession. In March 2026, Broadnax’s cousin and co-defendant, Demarius Cummings, signed a sworn statement admitting to pulling the trigger to kill Swan and Butler. Cummings had been tried separately and had already received life without parole.

Cummings said that he initially went along with Broadnax’s confession, but after 17 years – and learning in February 2026 that his cousin would be executed – he felt compelled to correct his previous statements.

Evidence corroborates Cummings’ admission. His DNA was found on the grip of the murder weapon and on the clothing of one of the victims. Broadnax’s DNA was not found on either.

Despite a flurry of last-minute appeals and amicus briefs, the state executed Broadnax.

From ‘Jim Crow’ to ‘authentically’ Black

In my view, the willingness of courts to accept rap lyrics as evidence emerges from popular entertainment’s long-standing deployment of negative stereotypes about Black people.

In the U.S., minstrel shows were among the first widely popular forms of mass entertainment. Performers were often white people who donned blackface to mock Black people through song, dance and slapstick comedy. Characters like Thomas Rice’s “Jim Crow” employed tropes of Black people as buffoonish, lazy and lascivious – stereotypes that underpinned the racist legal code of segregation that came to be known as Jim Crow laws.

Alongside legal segregation, separate and unequal categories emerged for Black music. In 1920, Mamie Smith released “Crazy Blues,” the first commercial blues recording by a Black artist. Recordings like Smith’s were cordoned off into their own separate category, called “race records.” In 1942, Billboard began charting another invented category that it dubbed the “Harlem Hit Parade.” Black music would go on to be called, at various points, “rhythm and blues,” “soul” and “urban contemporary” into the 1970s.

These genres helped market this music as “authentically” Black. I use quotes because I argue that these genres have always reflected the audience’s listening practices and expectations, as much as anything real or unique about Black people.

A boogeyman for America’s ills

By the 1980s and 1990s, rap music was likewise pigeonholed as a “Black” genre. And “gangsta rap” soon emerged as a subgenre that became, for some listeners, an effective stand-in for all the purported ills that plagued Black communities.

N.W.A. rapped about police brutality, violence and poverty, among other topics. Tracks like “F— Tha Police” were lyrically provocative and confrontational.

MC Ren and Eazy-E of N.W.A. perform during a show in Milwaukee in June 1989.
Raymond Boyd/Getty Images

When Milt Ahlerich, an assistant director at the FBI, sent a letter to N.W.A.’s record label warning that the track could lead to disrespect and violence against law enforcement, the troupe saw a marketing opportunity, going on to brag that they were “the world’s most dangerous group.” And many audiences went on to interpret their tracks as documentary evidence of everyday Black life. In fact, I argue that this broader interpretation of rap music led, at least in part, to the eagerness with which the public initially supported the so-called “war on drugs,” which ended up disproportionately targeting Black communities in places like Decatur, Illinois, where I grew up.

Even when artists go to great pains to distinguish their art, many audiences simply want to believe all rap music and rap artists are doing and saying the same things. Their unwillingness to engage beyond the surface amounts to a refusal to examine rap’s layered explorations of life, pride and pain, expressed through lyrical humor, social commentary and witty wordplay.

As Rolling Stone journalist Ed Kiersh wrote in 1986, “To much of white America, rap means mayhem and bloodletting.”

‘Being Dope’ is personal

For me, this is personal. I have been a rapper all of my professional life. In “Being Dope,” I write about teaching high school in Springfield, Illinois, where a local radio host used my music to try to paint me as unprofessional or worse, and called for me to be fired over it.

I decided to pursue a Ph.D. and study the rhetorical appeal of rap music. I wrote a rap album as my dissertation, and after becoming a professor of hip-hop, I published the first-ever peer-reviewed album with an academic press.

Rap has many functions. It’s a daily practice undertaken by ordinary people, not just the ones who aim to be famous. When I discuss the public perception of rappers, I highlight how I continue to grapple with the uneasiness my identity as a rapper can elicit in other people. I also focus on the stories of friends and family members, as well as people like Willie McCoy, Eric Reason and Jordan Davis – Black Americans whose deaths were blamed on rap music, a form of scapegoating I call “rapwashing.”

So when I see “rap” or “rapper” in a headline to imply guilt or provoke negative associations, I’m reminded of the truth in Kiersh’s statement. It’s even more troubling when rap lyrics are used to help deliver a death sentence.

Gangsta rap’s effectiveness as a prosecutorial tool, like the minstrel shows before it, depends on audiences mistaking caricature for authenticity, and hinges on hearing artistic expression as documentary evidence of criminal actions. Using gangsta rap to justify state-sanctioned executions only extends the dark legacy of Jim Crow into the present.

The Conversation

A.D. Carson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. After the execution of James G. Broadnax in Texas, questions persist over use of rap lyrics as evidence – https://theconversation.com/after-the-execution-of-james-g-broadnax-in-texas-questions-persist-over-use-of-rap-lyrics-as-evidence-280901

Why we need to treat Earth like a spaceship

Source: The Conversation – UK – By Chris Rapley, Professor of Climate Science, UCL

ixpert/Shutterstock

Four humans recently looped around the Moon. Their vessel, an Artemis capsule, was a thin metal shell whose life-support system kept them alive: it provided a carefully balanced atmosphere, a closed water loop, a finite supply of food and a means for disposing human waste. The life support was not optional. It was a necessity.

Consider this: not once in the history of human spaceflight has an astronaut been known to tamper with their life support system. No one has ever decided to vent some oxygen for fun. No one has argued for a personal right to increase their CO₂ output. Sabotage is unthinkable – socially intolerable. Their fellow crew members and mission control would intervene immediately.

Now consider Earth.

We are doing to our planetary life support what no astronaut has done to theirs. We are damaging it – venting carbon, acidifying the oceans, stripping topsoil and collapsing biodiversity – not maliciously, but with a shrug. It is legal. It is profitable. And in most circles, it is entirely socially acceptable.

The Victorian novelist George Eliot would have understood why. In Middlemarch, she showed us a town that preferred a satisfying, simple myth (that a charismatic quack can cure ills) over difficult, complex truths (the role of germs, statistics, slow systematic change). Humans, she argued, do not naturally reach for what is true. We reach for what is near, simple and emotionally rewarding.

Climate science is the anti-myth. It is delayed, diffuse, impersonal and global. It asks us to change behaviour today for a benefit that will arrive decades away, elsewhere on the planet, for people we will never meet.

This psychological distance is a severe challenge for a brain evolved to flinch at a rustle in the grass, not a graph showing rising parts per million of atmospheric carbon dioxide.




Read more:
Earthrise to Earthset: how the planet’s climate has changed since the photo that inspired the environmental movement


The myths that let us ignore the truth are familiar.

If I recycle, I’m doing my part. (This is insufficient but feels good.)

Technology will save us before it’s too late. (Comforting but improbable, and it delays action.)

It’s already too late, so nothing matters. (This is fatalism as absolution.)

We will adapt. (The laws of nature set hard limits.)

These stories are false, but they are functional. Psychologists call them the “dragons of inaction” – the mental barriers that let us know the truth without feeling its weight. Along with disavowal (knowing something but ignoring it), they allow us to keep flying, driving, consuming and investing, without the discomfort of cognitive dissonance (the stress of simultaneously holding conflicting beliefs).

The Artemis crew members live by a different narrative. They are guided by a simple, undeniable truth. That they are in a small, fragile vessel. The life support is essential. Damaging it is not an option.

Often people don’t treat planet Earth as a precious life support system.
Gorodenkoff/Shutterstock

Earth is a vessel too. It is just larger, its support systems less visible, and the consequences of damage slower to arrive. As the economist Kenneth Boulding argued 60 years ago, we must learn to see our planet as a closed system – not an open frontier.

What narrative could protect Earth like it protects astronauts?

Not a policy paper. Not a carbon tax (though we need those). A story.

We have candidate myths already. None is perfect, but each is more powerful than the cold scientific facts.

The one pane of glass narrative holds that Earth is not a planet we live on but a pressurised cabin with a single irreplaceable window. Every tonne of CO₂ etches another crack in that glass. You wouldn’t hammer the Artemis capsule window. Why do it here?

The blood of the body myth portrays the biosphere not as nature but as the collective and extended organ system of humanity. Deforesting the Amazon and burning oil are not business as usual; they are acts of self-harm.

The crew of the damned narrative hinges on the concept that you are not a consumer. You are a temporary tenant on a multi-generational voyage. Nature and the previous shift built the vessel. The next shift will inherit it. To degrade Earth’s systems is to defile the ancestors and curse the children. That is not a crime. It is a sin that will outlast your name.




Read more:
To address the environmental polycrisis, the first step is to demand more honesty


None of these stories will work if they remain metaphors. They become common sense only when they are visibly, socially and economically enforced – when a CEO who opens a new coal mine is treated with the same universal horror as an astronaut reaching for the oxygen valve.

Imagine every human decision – personal, professional, political – tested against one simple question: “If we were in a capsule looping around the Moon, would this be a safe use of our shared life support?”

Repeated sufficiently, the right conclusion would become habitual. For those resisting, the rest of the crew would intervene. On Earth, there is no mission control – only us.

The Conversation

Chris Rapley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why we need to treat Earth like a spaceship – https://theconversation.com/why-we-need-to-treat-earth-like-a-spaceship-281606

In the Middle Ages, chess created a space where skin colour didn’t matter

Source: The Conversation – in French – By Krisztina Ilko, Junior Research Fellow, Queens’ College and Affiliated Lecturer at the Faculty of History, University of Cambridge

The presence of a Black player in this illustration from the _Libro del axedrez, dados e tablas_ shows how chess escaped the representational norms of its era.
« Libro del axedrez, dados e tablas »

In some medieval manuscripts, Black and white players face each other across the chessboard on equal footing. This surprising iconography shows how chess could embody a space where logic prevailed over racial hierarchies.


In the medieval European imagination, depictions of racial difference were often starkly drawn. Black people appeared either as exotic, prestigious figures – saints or wealthy sovereigns such as the Queen of Sheba – or as subordinate characters deemed inferior to white Christians. Yet, as my research shows, the game of chess offered another view: a space where players could compete as equals, whatever their skin colour.

Elements drawn from the Libro de los juegos, subtitled Libro de Axedrez, Dados e Tablas (Book of Chess, Dice and Tables), a games manual produced for King Alfonso X the Wise in Seville in 1283, reinforced this idea. The manuscript contains 103 chess problems, each accompanied by a text naming the winner and an illustration. These images depict a wide range of characters, from Jewish men to Muslim women. They also show Asian, white and Black players.

One of the most striking illustrations shows a Black player and a pale-skinned player facing each other across a chessboard. The latter has a shaved head, a sign that he is a learned cleric. Yet despite this marker of intelligence, the text states that the Black player will win. In this “game of logic”, victory goes to whoever demonstrates the better strategic skill. What matters above all is the player’s intellectual power. As the Libro de los juegos explains, chess embodies wisdom, and those who study it acquire the ability to defeat others.

Another image from the manuscript shows five Black figures around the chessboard. In western medieval visual culture, scenes depicting only Black figures are rare and generally carry negative connotations. Here, by contrast, they appear in a highly intellectual setting and in what seems a convivial atmosphere.

Several Black figures around the chessboard

« Libro de los Juegos »

While chess did not erase the dominant social norms of racial prejudice, it nonetheless offered players a space to challenge them within its own ludic world.

The depiction of chess as an encounter between people of different skin colours was not limited to Europe. The Book of Kings, or Shâhnâmeh, an epic poem recounting the history of the Iranians from the creation of the world to the Islamic conquest, tells of the game’s introduction to Iran.

According to the Shâhnâmeh, an Indian king – whose name is not given – sent an embassy to the Sasanian king with a chessboard and a challenge: work out its rules or pay tribute. Fortunately for the sovereign, his adviser Būzurjmihr managed to solve the puzzle. A 14th-century copy of the poem sets this scene in a late-medieval Mongol setting. It shows Būzurjmihr, with lighter skin, facing the darker-skinned Indian envoy.

Some scholars have argued that the envoy’s dark skin and “loose garments” were meant to underscore his defeat. But several clues suggest another reading. His “loose” tunic is richly adorned with gilding, unlike the plain blue robe of Būzurjmihr, the court’s highest-ranking diplomat. His darker skin certainly signals his foreign origins, but hardly makes him a negative figure. On the contrary, he appears as the Indian rajah’s champion: the one who transmits the game of logic and presents himself as the keeper of highly coveted Indian knowledge.

The chess pieces themselves

Beyond depictions of chess games, medieval perceptions of “race” can also be studied through the game’s pieces themselves.

Chess spread across Afro-Eurasia from sixth-century India to the rest of the known world. As a war game, chess relies on pieces meant to represent soldiers. But as the game spread, the shape of these pieces evolved, reflecting the societies that produced them.

For example, a long-haired chess king, made in Mansura or Multan (in present-day Pakistan) in the ninth or tenth century, reflects the ideals of Indian kingship. The famous Lewis chessmen, discovered in the Outer Hebrides in Scotland but probably carved in Norway, are often regarded as the most iconic depictions of a medieval chess set. Seen in this light, however, they are only a relatively late and geographically peripheral witness to a much older tradition.

Medieval chess was not as black and white as the modern game. Some boards were white and red, or blue and gold. Nevertheless, the alternating squares, like the pieces themselves, were distinguished by contrasting colours. This made it possible to project onto the game ideas about skin colour and racial perception.

A 13th-century poem explains that chess pieces “are the people of this world, drawn from one same bag, as from a mother’s womb, then placed in various parts of this world”. The pieces could thus represent the different peoples of the globe. But the outcome of their confrontations on the board remained determined by the rules of logic, not by the colour of their skin. Chess thus embodied a “just world”, where intellect, rather than religion or race, prevailed.

The Conversation

Krisztina Ilko has received funding from The British Academy.

ref. Au Moyen Âge, les échecs ont créé un espace où la couleur de peau ne comptait pas – https://theconversation.com/au-moyen-age-les-echecs-ont-cree-un-espace-ou-la-couleur-de-peau-ne-comptait-pas-279972