A wildfire’s legacy can haunt rivers for years, putting drinking water at risk

Source: The Conversation – USA (2) – By Ben Livneh, Associate Professor of Hydrology, University of Colorado Boulder

Burned ground can become hydrophobic and almost waxlike, allowing rainfall to quickly wash contaminants downslope. Carli Brucker

A wildfire rages across a forested mountainside. The smoke billows and the flames rise. An aircraft drops vibrant red flame retardant. It’s a dramatic, often dangerous scene. But the threat is only just beginning for downstream communities and the water they rely on.

After the smoke clears, the soil, which was once nestled beneath a canopy of trees and a spongy layer of leaves, is now exposed. Often, that soil is charred and sterile, with the heat making the ground almost water-repellent, like a freshly waxed car.

When the first rain arrives, the water rushes downhill. It carries with it a slurry of ash, soil and contaminants from the burned landscape. This torrent flows directly into streams and then rivers that provide drinking water for communities downstream.

As a new research paper my colleagues and I just published shows, this isn’t a short-term problem. The ghost of the fire can haunt these waterways for years.

Scientists explain how wildfires can contaminate water supplies and how they measure the effects, as summarized in their 2024 publication. University of Colorado Boulder.

This matters because forested watersheds are the primary water source for nearly two-thirds of municipalities in the United States. As wildfires in the western U.S. become larger and more frequent, the long-term security and safety of water supplies for downstream communities is increasingly at risk.

Charting the long tail of wildfire pollution

Scientists have long known that wildfires can affect water quality, but two key questions remained: Exactly how bad is the impact? And how long does it last?

To find out, my colleagues and I led a study, coordinated by engineer Carli Brucker. We undertook one of the most extensive analyses of post-wildfire water quality to date. The results were published June 23, 2025, in the journal Nature Communications Earth & Environment.

We gathered decades of water quality data from 245 burned watersheds across the western U.S. and compared them to nearly 300 similar, unburned watersheds.

A map of watersheds in the western U.S.
A map of the basins studied shows the outlines of fires in red and burned basins in black. The blue basins did not burn and were used for comparisons.
Carli Brucker, et al., 2025, Nature Communications Earth & Environment

By creating a computer model for each basin that accounted for its normal water quality variability, based on factors such as rainfall and temperature, we were able to isolate the impact of the wildfire. This allowed us to see how much the water quality deviated after the fire, year after year.
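
The study’s statistical models are more involved than this, but the underlying idea, comparing observed post-fire values with what prefire conditions would predict, can be sketched in a few lines of Python. This is only an illustration with hypothetical column names and a simple linear baseline, not the study’s actual method:

    # Minimal sketch of the baseline-deviation idea, not the study's actual model.
    # Assumes a pandas DataFrame with hypothetical columns "date" (ISO strings),
    # "rainfall_mm", "temp_c" and "turbidity", plus the date the fire started.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def post_fire_deviation(obs: pd.DataFrame, fire_date: str) -> pd.Series:
        pre = obs[obs["date"] < fire_date]    # prefire years define "normal" variability
        post = obs[obs["date"] >= fire_date]

        predictors = ["rainfall_mm", "temp_c"]
        baseline = LinearRegression().fit(pre[predictors], pre["turbidity"])

        expected = baseline.predict(post[predictors])  # what an unburned year would look like
        # Values above 1 mean turbidity exceeds what prefire conditions predict.
        return post["turbidity"] / expected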

The results were stark. In the first year after a fire, the concentrations of some contaminants skyrocketed. We found that levels of sediment and turbidity – the cloudiness of the water – were 19 to 286 times higher than prefire levels. That much sediment can clog filters at water treatment plants and require expensive treatment and maintenance. Think of trying to use a coffee filter with muddy water – the water just won’t flow through.

Concentrations of organic carbon, nitrogen and phosphorus were three to 103 times greater in the burned basins. These dissolved remnants of burned plants and soil are particularly problematic. When they mix with the chlorine used to disinfect drinking water, they can form harmful chemicals called disinfection byproducts, some of which are linked to cancer.

More surprisingly, we found the impacts to be remarkably persistent. While the most dramatic spikes in phosphorus, nitrate, organic carbon and sediment generally occurred in the first one to three years, some contaminants lingered for much longer.

Charts show how contaminants lingered in water supplies for years after wildfires.
Contaminants including phosphorus, organic carbon and nitrates lingered in water supplies for years after wildfires. The charts show the average among all burned basins eight years before fires (light blue) and all burned basins after fires (orange). The gray bars show levels in the year immediately after the fire. The horizontal purple line shows levels that would be expected without a fire, based on the prefire years.
Carli Brucker, et al., 2025, Nature Communications Earth & Environment

We saw significantly elevated levels of nitrogen and sediment for up to eight years following a fire. Nitrogen and phosphorus act like fertilizer for algae. A surge of these nutrients can trigger algal blooms in reservoirs, which can produce toxins and create foul odors.

This extended timeline suggests that wildfires are fundamentally altering the landscape in ways that take a long time to heal. In our previous laboratory-based research, including a 2024 study, we simulated this process by burning soil and vegetation and then running water over them.

A blackened mountain slope where all of the trees have burned.
After mountain slopes burn, the rain that falls on them washes ash, charred soil and debris downstream.
Carli Brucker

The material that leaches out is a cocktail of carbon, nutrients and other compounds that can exacerbate flood risks and degrade water quality in ways that demand more expensive processing at water treatment facilities. In extreme cases, the water quality may be so poor that communities can’t withdraw river water at all, which can create water shortages.

After the Buffalo Creek Fire in 1996 and then the Hayman Fire in 2002, Denver’s water utility spent more than US$27 million over several years to treat the water, remove more than 1 million cubic yards of sediment and debris from a reservoir, and fix infrastructure. State Forest Service crews planted thousands of trees to help restore the surrounding forest’s water filtering capabilities.

A growing challenge for water treatment

This long-lasting impact poses a major challenge for water treatment plants that make river water safe to drink. Our study highlights that utilities can’t just plan for a few bad months after a fire. They need to be prepared for potentially eight or more years of degraded water quality.

We also found that where a fire burns matters. Burned watersheds with denser forests or more urban development tended to have even worse water quality after a fire.

Since many municipalities draw water from more than one source, understanding which watersheds are likely to have the largest water quality problems after fires can help communities locate the most vulnerable parts of their water supply systems.

As temperatures rise and more people move into wildland areas in the American West, the risk of wildfires increases, and it is becoming clear that preparing for longer-term consequences is crucial. The health of forests and our communities’ drinking water are inseparably linked, with wildfires casting a shadow that lasts long after the smoke clears.

The Conversation

Ben Livneh receives funding from the Western Water Assessment NOAA grant #NA21OAR4310309, ‘Western Water Assessment: Building Resilience to Compound Hazards in the Inter-Mountain West’.

ref. A wildfire’s legacy can haunt rivers for years, putting drinking water at risk – https://theconversation.com/a-wildfires-legacy-can-haunt-rivers-for-years-putting-drinking-water-at-risk-259118

FEMA’s flood maps often miss dangerous flash flood risks, leaving homeowners unprepared

Source: The Conversation – USA (2) – By Jeremy Porter, Professor of Quantitative Methods in the Social Sciences, City University of New York

A deadly flash flood on July 4, 2025, swept through Nancy Callery’s childhood home in Hunt, Texas. Brandon Bell/Getty Images

Destructive flash flooding in Texas and other states is raising questions about the nation’s flood maps and their ability to ensure that communities and homeowners can prepare for rising risks.

The U.S. Federal Emergency Management Agency’s flood maps are intended to be the nation’s primary tool for identifying flood risks.

Originally developed in the 1970s to support the National Flood Insurance Program, these maps, known as Flood Insurance Rate Maps, or FIRMs, are used to determine where flood insurance is required for federally backed mortgages, to inform local building codes and land-use decisions, and to guide flood plain management strategies.

A flood risk map.
A federal flood map of Kerrville, Texas, with the Guadalupe River winding through the middle in purple, shows areas considered to have a 1% annual chance of flooding in blue and a 0.2% annual chance of flooding in tan. During a flash flood on July 4, 2025, the river rose more than 30 feet at Kerrville.
FEMA

In theory, the maps enable homeowners, businesses and local officials to understand their flood risk and take appropriate steps to prepare and mitigate potential losses.

But while FEMA has improved the accuracy and accessibility of the maps over time with better data, digital tools and community input, the maps still don’t capture everything – including the changing climate. There are areas of the country that flood, some regularly, that don’t show up on the maps as at risk.

I study flood-risk mapping as a university-based researcher and at First Street, an organization created to quantify and communicate climate risk. In a 2023 assessment using newly modeled flood zones with climate-adjusted precipitation records, we found that more than twice as many properties across the country were at risk of a 100-year flood as the FEMA maps identified.

Even in places where the FEMA maps identified a flood risk, we found that the federal mapping process, its overreliance on historical data, and political influence over the updating of maps can lead to maps that don’t fully represent an area’s risk.

What FEMA flood maps miss

FEMA’s maps are essential tools for identifying flood risks, but they have significant gaps that limit their effectiveness.

One major limitation is that they don’t consider flooding driven by intense bursts of rain. The maps primarily focus on river channels and coastal flooding, largely excluding the risk of flash flooding, particularly along smaller waterways such as streams, creeks and tributaries.

This limitation has become more important in recent years due to climate change. Rising global temperatures can result in more frequent extreme downpours, leaving more areas vulnerable to flooding, yet unmapped by FEMA.

A map overlay shows how two 100-year flood maps compare. First Street shows many more streams.
A map of a section of Kerr County, Texas, where a deadly flood struck on July 4, 2025, compares the FEMA flood map’s 100-year flood zone (red) to First Street’s more detailed 100-year flood zone (blue). The more detailed map includes flash flood risks along smaller creeks and streams.
Jeremy Porter

For example, when flooding from Hurricane Helene hit unmapped areas around Asheville, North Carolina, in 2024, it caused a huge amount of uninsured damage to properties.

Even in areas that are mapped, like the Camp Mystic site in Kerr County, Texas, which was hit by a deadly flash flood on July 4, 2025, the maps may underestimate the risk because they rely on historic data and outdated risk assessments.

Political influence can fuel long delays

Additionally, FEMA’s mapping process is often shaped by political pressures.

Local governments and developers sometimes fight to avoid high-risk designations to avoid insurance mandates or restrictions on development, leading to maps that may understate actual risks and leave residents unaware of their true exposure.

An example is New York City’s appeal of a 2015 FEMA Flood Insurance Rate Maps update. The delay in resolving the city’s concerns has left it with maps that are roughly 20 years old, and the current mapping project is tied up in legal red tape.

On average, it takes five to seven years to develop and implement a new FEMA Flood Insurance Rate Map. As a result, many maps across the U.S. are significantly out of date, often failing to reflect current land use, urban development or evolving flood risks from extreme weather.

This delay directly affects building codes and infrastructure planning, as local governments rely on these maps to guide construction standards, development approvals and flood mitigation projects. Ultimately, outdated maps can lead to underestimating flood risks and allowing vulnerable structures to be built in areas that face growing flood threats.

How technology advances can help

New advances in satellite imaging, rainfall modeling and high-resolution lidar, which is similar to radar but uses light, make it possible to create faster, more accurate flood maps that capture risks from extreme rainfall and flash flooding.

However, fully integrating these tools requires significant federal investment. Congress controls FEMA’s mapping budget and sets the legal framework for how maps are created. For years, updating the flood maps has been an unpopular topic among many publicly elected officials, because new flood designations can trigger stricter building codes, higher insurance costs and development restrictions.

A map of Houston showing flooding extending much farther inland.
A map of Houston, produced for a 2022 study by researchers at universities and First Street, shows flood risk changing over the next 30 years as climate change worsens. Blue areas are today’s 100-year flood-risk zones. The red areas reflect the same zones in 2050.
Oliver Wing et al., 2022

In recent years, the rise of climate risk analytics models and private flood risk data have allowed the real estate, finance and insurance industries to rely less on FEMA’s maps. These new models incorporate forward-looking climate data, including projections of extreme rainfall, sea-level rise and changing storm patterns – factors FEMA’s maps generally exclude.

Real estate portals like Zillow, Redfin, Realtor.com and Homes.com now provide property-level flood risk scores that consider both historical flooding and future climate projections. The models they use identify risks for many properties that FEMA maps don’t, highlighting hidden vulnerabilities in communities across the United States.

Research shows that the availability and accessibility of climate data on these sites have started driving property-buying decisions that increasingly take climate change into account.

Implications for the future

As homebuyers understand more about a property’s flood risks, that may shift the desirability of some locations over time. Those shifts will have implications for property valuations, community tax-revenue assessments, population migration patterns and a slew of other considerations.

However, while these may feel like changes being brought on by new data, the risk was already there. What is changing is people’s awareness.

The federal government has an important role to play in ensuring that accurate risk assessments are available to communities and Americans everywhere. As better tools and models for assessing risk evolve, FEMA’s risk maps need to evolve, too.

The Conversation

Jeremy Porter has nothing to disclose.

ref. FEMA’s flood maps often miss dangerous flash flood risks, leaving homeowners unprepared – https://theconversation.com/femas-flood-maps-often-miss-dangerous-flash-flood-risks-leaving-homeowners-unprepared-260990

How citizenship chaos was averted, for now, by a class action injunction against Trump’s birthright citizenship order

Source: The Conversation – USA – By Julie Novkov, Professor of Political Science and Women’s, Gender and Sexuality Studies, University at Albany, State University of New York

Protesters support birthright citizenship on May 15, 2025, outside of the Supreme Court in Washington. AP Photo/Jacquelyn Martin

Legal battles over President Donald Trump’s executive order to end birthright citizenship continued on July 10, 2025, after a New Hampshire federal district judge issued a preliminary injunction that will, if it’s not reversed, prevent federal officials from enforcing the order nationally.

The ruling by U.S. District Judge Joseph Laplante, a George W. Bush appointee, asserts that this policy of “highly questionable constitutionality … constitutes irreparable harm.”

In its ruling in late June, the Supreme Court allowed the Trump administration to deny citizenship to infants born to undocumented parents in many parts of the nation where individuals or states had not successfully sued to prevent implementation – including a number of mid-Atlantic, Midwest and Southern states.

Trump’s executive order limits U.S. citizenship by birth to those who have at least one parent who is a U.S. citizen or legal permanent resident. It denies citizenship to those born to undocumented people within the U.S. and to the children of those on student, work, tourist and certain other types of visas.

The preliminary injunction is on hold for seven days to allow the Trump administration to appeal.

The June 27 Supreme Court decision on birthright citizenship limited the ability of lower-court judges to issue universal injunctions to block such executive orders nationwide.

Laplante was able to avoid that limit on issuing a nationwide injunction by certifying the case as a class action lawsuit encompassing all children affected by the birthright order, following a pathway suggested by the Supreme Court’s ruling.

Pathways beyond universal injunctions

In its recent birthright citizenship ruling, Trump v. CASA, the Supreme Court noted that plaintiffs could still seek broad relief by filing such class action lawsuits that would join together large groups of individuals facing the same injury from the law they were challenging.

And that’s what happened.

Litigants filed suit in New Hampshire’s district court the same day that the Supreme Court decided CASA. They asked the court to certify a class consisting of infants born on or after Feb. 20, 2025, who would be covered by the order, and their parents or prospective parents. The court allowed the suit to proceed as a class action for these infants.

Several people raise their hands as a man at a podium answers questions.
President Donald Trump takes questions on June 27, 2025, in Washington, D.C., after the Supreme Court ruled on the birthright citizenship case.
Joe Raedle/Getty Images

What if this injunction doesn’t stick?

If the U.S. Court of Appeals for the 1st Circuit or the Supreme Court invalidates the New Hampshire court’s newest national injunction and another injunction is not issued in a different venue, the order will then go into effect anywhere it is not currently barred from doing so. Implementation could begin in as many as 28 states where state attorneys general have not challenged the Trump birthright citizenship policy if no other individuals or groups secure relief.

As political science scholars who study race and immigration policy, we believe that, if implemented piecemeal, Trump’s birthright citizenship order would create administrative chaos for states determining the citizenship status of infants born in the United States. It could also lead to the first instances since the 1860s of infants born in the U.S. being categorically denied citizenship.

States’ role in establishing citizenship

Almost all U.S.-born children are issued birth certificates by the state in which they are born.

The federal government’s standardized form, the U.S. standard certificate of live birth, collects data on parents’ birthplaces and their Social Security numbers, if available, and provides the information states need to issue birth certificates.

But it does not ask questions about their citizenship or immigration status. And no national standard exists for the format for state birth certificates, which traditionally have been the simplest way for people born in the U.S. to establish citizenship.

If Trump’s executive order goes into effect, birth certificates issued by local hospitals would be insufficient evidence of eligibility for federal government documents acknowledging citizenship. The order would require new efforts, including identification of parents’ citizenship status, before authorizing the issuance of any federal document acknowledging citizenship.

Since states control the process of issuing birth certificates, they will respond differently to implementation efforts. Several states filed a lawsuit on Jan. 21 to block the birthright citizenship order. And they will likely pursue an arsenal of strategies to resist, delay and complicate implementation.

While the Supreme Court has not yet confirmed that these states have standing to challenge the order, successful litigation could bar implementation in up to 18 states and the District of Columbia if injunctions are narrowly framed, or nationally if lawyers can persuade judges that disentangling the effects on a state-by-state basis will be too difficult.

Other states will likely collaborate with the administration to deny citizenship to some infants. Some, like Texas, had earlier attempted to make it particularly hard for undocumented parents to obtain birth certificates for their children.

Protesters hold signs in front of a federal building.
People demonstrate outside the Supreme Court of the United States on May 15, 2025, in Washington, D.C.
Matt McClain/The Washington Post via Getty Images

Potential for chaos

If the Supreme Court rejects attempts to block the executive order nationally again, implementation will be complicated.

That’s because it would operate in some places and toward some individuals while being legally blocked in other places and toward others, as Justice Sonia Sotomayor warned in her Trump v. CASA dissent.

Children born to plaintiffs anywhere in the nation who have successfully sued would have access to citizenship, while other children possibly born in the same hospitals – but not among the groups named in the suits – would not.

Babies born in the days before implementation would have substantially different rights than those born the day after. Parents’ ethnicity and countries of origin would likely influence which infants are ultimately granted or denied citizenship.

That’s because some infants and parents would be more likely to generate scrutiny from hospital employees and officials than others, including Hispanics, women giving birth near the border, and women giving birth in states such as Florida where officials are likely to collaborate enthusiastically with enforcement.

The consequences could be profound.

Some infants would become stateless, having no right to citizenship in another nation. Many people born in the U.S. would be denied government benefits, Social Security numbers and the ability to work legally in the U.S.

With the constitutionality of the executive order still unresolved, it’s unclear when, if ever, some infants born in the U.S. will be the first in the modern era to be denied citizenship.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. How citizenship chaos was averted, for now, by a class action injunction against Trump’s birthright citizenship order – https://theconversation.com/how-citizenship-chaos-was-averted-for-now-by-a-class-action-injunction-against-trumps-birthright-citizenship-order-260175

Looking to Srebrenica to understand Gaza: how do you prove the intent to destroy a group?

Source: The Conversation – (in Spanish) – By Pilar Eirene de Prada, Professor/Researcher in International Law and International Relations, Universidad Francisco de Vitoria

Graves of murdered civilians in Potocari, Srebrenica. dotshock/Shutterstock

This July marks thirty years since the massacre at Srebrenica, a mountainous enclave in eastern Bosnia and Herzegovina, close to the border with Serbia. Between July 6 and 11, 1995, more than 8,000 Bosnian Muslim men and boys were murdered by Bosnian Serb forces in this place, which the United Nations had declared a “safe area” and which was under the direct protection of UN peacekeepers. The scenes broadcast by war correspondents traveled around the world and marked a turning point in the collective conscience of the West.

Today, as the images coming out of Gaza reignite the debate over what constitutes genocide, it is essential to look back at the case of Srebrenica to understand how international courts interpret this international crime.

Why was genocide recognized only at Srebrenica?

The International Criminal Tribunal for the former Yugoslavia (ICTY) recognized the Srebrenica massacre as genocide. It was an important legal milestone, but it also left a bitter taste, especially among the victims. The tribunal classified only the crimes committed in Srebrenica as genocide, leaving out other episodes of equally systematic violence against the Bosnian Muslim population in other municipalities. Why?

The answer lies in an extremely restrictive legal interpretation of the crime of genocide. Under the Convention on the Prevention and Punishment of the Crime of Genocide (1948), it must be proved that the acts (killings, torture, destruction of the conditions of life…) were committed with the “intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such.”

The problem is how that “intent” is interpreted. The dominant case law has adopted what is known as the purpose-based approach, a requirement of a conscious, deliberate purpose of destruction. Under this logic, it is not enough to show that the acts had devastating effects on the group; it must be proved that they were committed with the specific intent to eliminate it.

Genocidal intent goes beyond an order

In modern conflict settings, the intent to destroy a group shows itself through the accumulation of public policies, military decisions, legal frameworks and narratives about the other. Direct proof, whether an explicit order, speeches that openly call for annihilation or evidence tied to a single act of extermination, becomes almost impossible to establish.

Moreover, limiting genocide to a concentrated, media-visible episode leaves many other forms of collective destruction, more complex and more insidious, off the radar.

Context matters

In this sense, it is increasingly necessary to rethink the way we identify genocide, moving away from the all-encompassing model of the Holocaust (for some, the only possible genocide and therefore unrepeatable).

An alternative to this traditional approach is what critical legal scholarship has called the knowledge-based approach. This perspective does not require proof of an explicit will to exterminate; instead, it asks whether the perpetrators knew, or could not have failed to know, that their acts contributed to a systematic pattern of destruction of the group.

This approach rests on a more structural and less individualistic view of genocide. The crime is not committed solely through a subjective inner will, but through broader patterns that can be reconstructed from important contextual elements: public policies, emergency legal frameworks, dehumanizing discourse and strategic decisions sustained over time. Under this logic, criminal responsibility is not diluted; it adapts to the contemporary realities of collective violence.

From theory to practice: Gaza

This debate goes beyond theory and has great practical relevance. Since October 2023, Gaza has been subjected to a campaign of progressive destruction: massive bombardment, supply cutoffs, forced displacement, induced starvation and the collapse of the health system. More than 55,000 people have died and hundreds of thousands have been wounded. But beyond the figures, what is at stake is a sustained form of annihilation of the Palestinian way of life.

That is how South Africa framed it in the case it filed before the International Court of Justice in December 2023, and the Court endorsed this view in its orders on provisional measures.

In those orders, the principal judicial organ of the United Nations determined that there is a real and imminent risk of irreparable harm to the right of the Palestinian people in Gaza to be protected from acts of genocide and other conduct prohibited by the Convention. And yet the legal recognition of this situation as genocide remains fiercely disputed.

Rafael Lemkin, the jurist who coined the term “genocide” in 1944, understood this crime not only as the physical destruction of people, but as the elimination of a group’s collective life, its culture, its symbols and its conditions of existence.

Cultural genocide: beyond direct violence

Yet the legal definition in force, shaped by colonial interests and centered on the Holocaust model, deliberately excluded cultural genocide. This narrow vision ignores the fact that human groups can also be destroyed through policies of displacement and forced assimilation, strategies that erase memory, language or the bond with the land. Human groups are not destroyed by direct violence alone.

Thirty years after Srebrenica, a critical rereading of the concept of genocide is urgently needed, not to empty it of content, but to restore its protective capacity in the face of new forms of collective destruction. A perpetrator’s knowledge of the impact of their acts should be enough to give rise to genocidal responsibility, especially when those acts contribute to a systematic plan to eliminate the group.

In a world where extermination is administered bureaucratically, international justice must learn to recognize and name the violence of the present, even when it does not fit the categories of the past, or it will risk leaving contemporary forms of collective destruction unpunished.

The Conversation

Pilar Eirene de Prada does not receive a salary, consult for, own shares in or receive funding from any company or organization that would benefit from this article, and has declared no relevant affiliations beyond her academic appointment.

ref. Looking to Srebrenica to understand Gaza: how do you prove the intent to destroy a group? – https://theconversation.com/mirando-a-srebrenica-para-entender-gaza-como-se-prueba-la-intencion-de-destruir-a-un-grupo-260461

Why it can be hard to warn people about dangers like floods – communication researchers explain the role of human behavior

Source: The Conversation – USA – By Keri K. Stephens, Professor & Co-Director, Technology & Information Policy Institute, The University of Texas at Austin

How emergency alerts convey risks matters. AP Photo/Eric Gay

Flash floods like the one that swept down the Guadalupe River in Texas on July 4, 2025, can be highly unpredictable. While there are sophisticated flood prediction models and different types of warning systems in some places, effective flood protection requires extensive preparedness and awareness.

It also requires an understanding of how people receive, interpret and act on risk information and warnings. Technology can be part of the solution, but ultimately people are the critical element in any response.

As researchers who study emergency communications, we have found that simply providing people with technical information and data is often not enough to effectively communicate the danger and prompt them to act.

The human element

One of us, Keri Stephens, has led teams studying flood risk communication. They found that people who have experienced a flood are more aware of the risks. Conversely, groups that have not lived through floods typically don’t understand various flood risks such as storm surges and flash floods. And while first responders often engage in tabletop exercises and drills – very important for their readiness to respond – there are only a few examples of entire communities actively participating in warning drills.

Messages used to communicate flood risk also matter, but people need to receive them. To that end, Keri’s teams have worked with the Texas Water Development Board to develop resources that help local flood officials sort through and prioritize information about a flood hazard so they can share what is most valuable with their local communities.

The commonly used “Turn Around Don’t Drown” message, while valuable, may not resonate equally with all groups. Newly developed and tested messages such as “Keep Your Car High and Dry” appeal specifically to young adults who typically feel invincible but don’t want their prized vehicles damaged. While more research is needed, this is an example of progress in understanding an important aspect of flood communication: how recipients of the information make decisions.

Interviews conducted by researchers often include responses along these lines: “Another flash flood warning. We get these all the time. It’s never about flooding where I am.” This common refrain reveals a fundamental challenge in flood communication. When people hear “flood warning,” they often think of different things, and interpretations can vary depending on a person’s proximity to the flooding event.

Some people equate flood warnings with streamflow gauges and sensors that monitor water levels – the technical infrastructure that triggers alerts when rivers exceed certain thresholds. Others think of mobile phone alerts, county- or geographic-specific notification systems, or even sirens.

a small yellow triangle enclosing a black exclamation point sits above a block of text
A typical alert from the National Weather Service.
AP Photo/Lisa Rathke

Beyond technologies and digital communication, warnings still come through informal networks in many communities. Emergency managers directly coordinate with and share information with major businesses and organizations, saying, “Hey, John, be sure you have somebody up tonight watching the National Weather Service alerts and rivers.”

This human-centered approach, similar to neighborhood-level systems we have studied in Japan, can provide direct confirmation that warnings have been received. This is something mass media and mobile systems cannot guarantee, especially during infrastructure failures such as power and cell tower outages.

Effective messages

Research shows that effective warning messages need to include five critical components: a clear hazard description, location-specific information, actionable guidance, timing cues and a credible source. The Federal Emergency Management Agency’s integrated public alert and warning system message design dashboard assists authorities in rapidly drafting effective messages.

This warning system, known as IPAWS, provides nationwide infrastructure for wireless emergency alerts and Emergency Alert System messages. While powerful, IPAWS has limitations − not all emergency managers are trained to use it, and messages may extend beyond intended geographic areas. Also, many older mobile devices lack the latest capabilities, so they may not receive the most complete messages when they are sent.
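
As a rough illustration, a drafting tool could treat the five message components listed above as required fields and flag anything left blank. The sketch below is a toy example in Python, not FEMA’s IPAWS dashboard or any official software:

    # Toy checklist for the five warning-message components described above;
    # illustrative only, not FEMA's IPAWS tooling.
    from dataclasses import dataclass, fields

    @dataclass
    class WarningMessage:
        hazard: str    # what is happening, e.g. "Flash flooding along the Guadalupe River"
        location: str  # who is affected, e.g. "low-lying areas of Kerr County"
        guidance: str  # what to do, e.g. "move to higher ground immediately"
        timing: str    # when, e.g. "expected within the next 30 minutes"
        source: str    # who says so, e.g. "National Weather Service"

    def missing_components(msg: WarningMessage) -> list[str]:
        """Return the names of any components left blank."""
        return [f.name for f in fields(msg) if not getattr(msg, f.name).strip()]

    draft = WarningMessage(
        hazard="Flash flooding along the Guadalupe River",
        location="",  # the drafter forgot to say who is affected
        guidance="Move to higher ground immediately",
        timing="Expected within the next 30 minutes",
        source="National Weather Service",
    )
    print(missing_components(draft))  # ['location']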

Hyperlocal community opt-in systems can complement IPAWS by allowing residents to register for targeted notifications. These systems, which can be run by communities or local agencies, face their own challenges. People must know they exist, be willing to share phone numbers, and remember to update their information. Social media platforms add another communication channel, with emergency managers increasingly using social media to share updates, though these primarily reach only certain demographics, and not everyone checks social media regularly.

The key is redundancy through multiple communication channels. Research has found that multiple warnings are needed for people to develop a sense of urgency, and the most effective strategy is simple: Tell another person what’s going on. Interpersonal networks help ensure the message is delivered and can prompt actions. As former Natural Hazards Center Director Dennis Mileti observed: The wireless emergency alerts system “is fast. Mama is faster.”

A Colorado news report explains why emergency alerts have to be tailored for local needs and conditions and use multiple communication channels.

Warning fatigue

Professionals from the National Weather Service, FEMA and the Federal Communications Commission, along with researchers, are increasingly concerned about warning fatigue – when people tune out warnings because they receive too many of them.

However, there is limited empirical data about how and when people experience warning fatigue – or about its impact.

This creates a double bind: Officials have an obligation to warn people at risk, yet frequent warnings can desensitize recipients. More research is needed to determine the behavioral implications of and differences between warnings that people perceive as irrelevant to their immediate geographic area versus those that genuinely don’t apply to them. This distinction becomes especially critical when people might drive into flooded areas outside their immediate vicinity.

The key to effective emergency communication is to develop messages that resonate with specific audiences and build community networks that complement technological systems. We are now studying how to do this effectively in the United States and internationally. It’s also important to apply behavioral insights to the design of warning communication systems at every level. And it’s important to remember to test not just the technology but the entire end-to-end system, from threat identification to community response.

Finally, maintaining true redundancy across multiple communication channels is an important strategy when trying to reach as many people as possible. Technology supports human decision-making, but it doesn’t replace it.

The Conversation

Keri K. Stephens’ research reported here has been externally funded by the Texas Water Development Board, Texas General Land Office, and the National Science Foundation. Results published are peer-reviewed, and opinions reflect those of the author, not the funder.

Hamilton Bean has earned research funding from U.S. Department of Homeland Security and the National Oceanic and Atmospheric Administration. Results published are peer-reviewed, and opinions reflect those of the author, not the funder.

ref. Why it can be hard to warn people about dangers like floods – communication researchers explain the role of human behavior – https://theconversation.com/why-it-can-be-hard-to-warn-people-about-dangers-like-floods-communication-researchers-explain-the-role-of-human-behavior-260780

Berg winds in South Africa: the winter weather pattern that increases wildfire risks

Source: The Conversation – Africa – By Sheldon Strydom, Senior Lecturer & Head of Department, Department of Geography, Rhodes University

After a fire. Hendrik van den Berg, via Wikimedia Commons., CC BY

Winter in some parts of South Africa is a time of low (or no) rainfall and high fire danger. Sheldon Strydom studies the relationship between weather and fire, in particular how Berg winds, also known as mountain flow events, are linked to periods of enhanced fire danger. Mid-July is typically a high risk period. He shares what he has learnt during his research in the midlands of KwaZulu-Natal province in South Africa, close to the country’s largest mountain range, the Drakensberg.

What are Berg winds and how do they form?

It’s long been known that mountain winds (“föhn winds”, “chinook winds” and the like) increase fire danger. There’s case study evidence from around the globe.

In South Africa, these mountain winds are known as Berg winds. They are generally experienced as warm and dry.

A mountain wind starts when a mass of air is forced to rise along a windward slope (the side of the mountain that wind is blowing towards). As the mass of air rises it cools. When it reaches the peak of the slope or mountain it descends on the leeward (sheltered) side. As it gets lower, the air gets warmer.

Berg winds commonly occur in South African winters when high atmospheric pressure systems are situated over the interior of the country and low pressure systems are situated off the coast. (Atmospheric pressure is the pressure of air over the land, and affects the movement of air.)

Usually, a coastal low pressure system happens a day or two before a cold front. The pressure gradient (difference in pressure that drives wind) between the interior high pressure cell and coastal low pressure cell results in air flowing towards the coast from the interior of the country, down the mountain escarpment. The air reaches coastal areas as a warm, dry wind.

Why study the relationship between Berg winds and fires?

Winds can spread fires in the landscape.

Our study, using data from four sites in the midlands of KwaZulu-Natal, quantified the effect of Berg winds on the microclimate (local weather conditions) and emphasised how these changes influence fire danger.

The sources of fires in South Africa, as elsewhere, vary. For example, wildfires can be started when prescribed burning, or the planned use of fire, becomes uncontrolled due to changes in weather conditions. Accidental fires and arson are the most common causes of wildfires. Research shows that wildfires and fire disasters are common in areas where prescribed burning is used.

Prescribed burning is an important aspect of agricultural management. It promotes the dispersal and germination of seeds from a number of species and also removes ground litter. Prescribed burning is used to manage grasslands and has been linked to reducing the numbers of disease-carrying vectors such as ticks.

But if they get out of control, fires pose a threat to farmland and plantations.

It’s therefore vital to have weather forecasts and monitoring systems that warn of conditions conducive to the development and spread of fires.

Internationally, fire danger indices or meters are used to monitor conditions. In South Africa, the South African Weather Service and other interested and affected parties currently use the Lowveld fire danger index. The index is calculated using records of air temperature, relative humidity, wind speed and rainfall. These are measured once a day. Daily forecasts are available from the Weather Service and disseminated to local fire protection associations.

Much research in South Africa has focused on pyrogeography (understanding when and where fires occur) and fire ecology. Little research has been done to quantify the effects of Berg winds on fire danger using available historical hourly meteorological data.

The midlands of KwaZulu-Natal province serve as a perfect environment to study the effects of Berg winds on the microclimate and fire danger. The area is close to the Drakensberg mountains and experiences frequent fires. It’s also a largely agricultural area.

What did you discover?

The study developed a fuzzy logic system (a mathematical method for handling uncertainty) to identify periods of Berg wind conditions using historical hourly meteorological data from four sites.

We analysed variables such as air temperature, relative humidity, wind speed and fire danger at different times of the day and night, before and during Berg winds.

The analysis revealed how significantly local weather conditions (within 2km) changed during Berg wind periods, and how these changes influence fire danger.

It found that:

  • Berg winds were more common during daytime hours and affected the microclimate most during the day

  • during daytime Berg wind events, air temperatures rose by an average of 5.5°C; humidity fell by an average of 16%; and wind speed increased by an average of 5.2 metres per second

  • daytime Berg wind events significantly elevated fire danger

  • night-time Berg winds, while less common, did still result in significant change in the microclimate

  • at night, fire danger increases when a combination of variables change significantly.

The fuzzy logic system can be useful in two ways: to quantify the effects of Berg winds on the microclimate and to complement any fire danger monitoring system. It can measure conditions at a higher temporal resolution, such as every 10 minutes or every hour, making it more useful for monitoring changes in fire danger in near real time.
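
A stripped-down version of this kind of fuzzy classification is sketched below in Python. The membership thresholds are invented for illustration, not the ones used in the study; the example values echo the average daytime changes reported above:

    # Rough sketch of a fuzzy-logic score for Berg wind conditions.
    # Thresholds are illustrative only, not those used in the study.
    def ramp(x: float, low: float, high: float) -> float:
        """Membership rising linearly from 0 at `low` to 1 at `high`."""
        if x <= low:
            return 0.0
        if x >= high:
            return 1.0
        return (x - low) / (high - low)

    def berg_wind_score(temp_rise_c: float, humidity_drop_pct: float,
                        wind_speed_ms: float) -> float:
        """Combine hourly changes into a 0-1 'Berg wind conditions' score."""
        warm = ramp(temp_rise_c, 1.0, 5.0)        # rapid warming
        dry = ramp(humidity_drop_pct, 5.0, 15.0)  # sharp drop in relative humidity
        windy = ramp(wind_speed_ms, 2.0, 6.0)     # strengthening wind
        # Fuzzy AND: the score is only as strong as the weakest signal.
        return min(warm, dry, windy)

    # Average daytime changes reported in the findings above:
    print(berg_wind_score(temp_rise_c=5.5, humidity_drop_pct=16, wind_speed_ms=5.2))  # 0.8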

The system could be valuable for operational use by agencies like the KwaZulu-Natal Provincial Disaster Management Centre, and could be applied in other regions vulnerable to fire risk.

The Conversation

Sheldon Strydom receives funding from Rhodes University, and the National Research Foundation.

Michael John Savage has received funding from the NRF.

ref. Berg winds in South Africa: the winter weather pattern that increases wildfire risks – https://theconversation.com/berg-winds-in-south-africa-the-winter-weather-pattern-that-increases-wildfire-risks-260612

AI in health care could save lives and money − but change won’t happen overnight

Source: The Conversation – USA (3) – By Turgay Ayer, Professor of Industrial and Systems Engineering, Georgia Institute of Technology

AI will help human physicians by analyzing patient data prior to surgery. Boy_Anupong/Moment via Getty Images

Imagine walking into your doctor’s office feeling sick – and rather than flipping through pages of your medical history or running tests that take days, your doctor instantly pulls together data from your health records, genetic profile and wearable devices to help decipher what’s wrong.

This kind of rapid diagnosis is one of the big promises of artificial intelligence for use in health care. Proponents of the technology say that over the coming decades, AI has the potential to save hundreds of thousands, even millions of lives.

What’s more, a 2023 study found that if the health care industry significantly increased its use of AI, up to US$360 billion annually could be saved.

But though artificial intelligence has become nearly ubiquitous, from smartphones to chatbots to self-driving cars, its impact on health care so far has been relatively low.

A 2024 American Medical Association survey found that 66% of U.S. physicians had used AI tools in some capacity, up from 38% in 2023. But most of it was for administrative or low-risk support. And although 43% of U.S. health care organizations had added or expanded AI use in 2024, many implementations are still exploratory, particularly when it comes to medical decisions and diagnoses.

I’m a professor and researcher who studies AI and health care analytics. I’ll try to explain why AI’s growth will be gradual, and how technical limitations and ethical concerns stand in the way of AI’s widespread adoption by the medical industry.

Inaccurate diagnoses, racial bias

Artificial intelligence excels at finding patterns in large sets of data. In medicine, these patterns could signal early signs of disease that a human physician might overlook – or indicate the best treatment option, based on how other patients with similar symptoms and backgrounds responded. Ultimately, this will lead to faster, more accurate diagnoses and more personalized care.

AI can also help hospitals run more efficiently by analyzing workflows, predicting staffing needs and scheduling surgeries so that precious resources, such as operating rooms, are used most effectively. By streamlining tasks that take hours of human effort, AI can let health care professionals focus more on direct patient care.

But for all its power, AI can make mistakes. Although these systems are trained on data from real patients, they can struggle when encountering something unusual, or when data doesn’t perfectly match the patient in front of them.

As a result, AI doesn’t always give an accurate diagnosis. This problem is called algorithmic drift – when AI systems perform well in controlled settings but lose accuracy in real-world situations.

Racial and ethnic bias is another issue. If data includes bias because it doesn’t include enough patients of certain racial or ethnic groups, then AI might give inaccurate recommendations for them, leading to misdiagnoses. Some evidence suggests this has already happened.

Humans and AI are beginning to work together at this Florida hospital.

Data-sharing concerns, unrealistic expectations

Health care systems are labyrinthine in their complexity. The prospect of integrating artificial intelligence into existing workflows is daunting; introducing a new technology like AI disrupts daily routines. Staff will need extra training to use AI tools effectively. Many hospitals, clinics and doctor’s offices simply don’t have the time, personnel, money or will to implement AI.

Also, many cutting-edge AI systems operate as opaque “black boxes.” They churn out recommendations, but even their developers might struggle to fully explain how. This opacity clashes with the needs of medicine, where decisions demand justification.

But developers are often reluctant to disclose their proprietary algorithms or data sources, both to protect intellectual property and because the complexity can be hard to distill. The lack of transparency feeds skepticism among practitioners, which then slows regulatory approval and erodes trust in AI outputs. Many experts argue that transparency is not just an ethical nicety but a practical necessity for adoption in health care settings.

There are also privacy concerns; data sharing could threaten patient confidentiality. To train algorithms or make predictions, medical AI systems often require huge amounts of patient data. If not handled properly, AI could expose sensitive health information, whether through data breaches or unintended use of patient records.

For instance, a clinician using a cloud-based AI assistant to draft a note must ensure no unauthorized party can access that patient’s data. U.S. regulations such as the HIPAA law impose strict rules on health data sharing, which means AI developers need robust safeguards.

Privacy concerns also extend to patients’ trust: If people fear their medical data might be misused by an algorithm, they may be less forthcoming or even refuse AI-guided care.

The grand promise of AI is a formidable barrier in itself. Expectations are tremendous. AI is often portrayed as a magical solution that can diagnose any disease and revolutionize the health care industry overnight. Unrealistic assumptions like that often lead to disappointment. AI may not immediately deliver on its promises.

Finally, developing an AI system that works well involves a lot of trial and error. AI systems must go through rigorous testing to make certain they’re safe and effective. This takes years, and even after a system is approved, adjustments may be needed as it encounters new types of data and real-world situations.

AI could rapidly accelerate the discovery of new medications.

Incremental change

Today, hospitals are rapidly adopting AI scribes that listen during patient visits and automatically draft clinical notes, reducing paperwork and letting physicians spend more time with patients. Surveys show over 20% of physicians now use AI for writing progress notes or discharge summaries. AI is also becoming a quiet force in administrative work. Hospitals deploy AI chatbots to handle appointment scheduling, triage common patient questions and translate languages in real time.

Clinical uses of AI exist but are more limited. At some hospitals, AI serves as a second set of eyes for radiologists looking for early signs of disease. But physicians are still reluctant to hand decisions over to machines; only about 12% of them currently rely on AI for diagnostic help.

Suffice to say that health care’s transition to AI will be incremental. Emerging technologies need time to mature, and the short-term needs of health care still outweigh long-term gains. In the meantime, AI’s potential to treat millions and save trillions awaits.

The Conversation

Turgay Ayer owns shares in Value Analytics Labs, a healthcare technology company. He received funding from government agencies, including NSF, NIH, and CDC.

ref. AI in health care could save lives and money − but change won’t happen overnight – https://theconversation.com/ai-in-health-care-could-save-lives-and-money-but-change-wont-happen-overnight-241551

Muscle weakness in cancer survivors may be caused by treatable weakness in blood vessels – new research

Source: The Conversation – USA (3) – By Jalees Rehman, Department Chair and Professor of Biochemistry and Molecular Genetics, University of Illinois Chicago

Poorly functioning blood vessels lead to the characteristic muscle weakness that so many cancer patients experience. Artur Plawgo/Science Photo Library via Getty Images

Tumors can destroy the blood vessels of muscles even when the muscles are nowhere close to the tumor. That is the key finding of a new study that my colleagues and I recently published in the journal Nature Cancer.

Muscle loss in cancer patients is a major health problem, but exactly how tumors affect muscles remains an active area of research.

Scientists in my lab were curious whether one explanation for the muscle loss in cancer patients could be that the cancer impairs the blood vessels that are necessary to supply nutrients and oxygen to muscles. Healthy blood vessels ensure that blood containing oxygen and nutrients is transported from the heart to all tissues and organs in the body, and then circulates back to the heart. Unhealthy blood vessels lose the ability to circulate sufficient blood and develop leaks, with nutrients seeping into the tissue prematurely and thereby cutting off the supply of nutrients to tissues that are further downstream.

To tackle this question, my colleagues and I worked with several other scientific research teams with expertise in advanced microscopy, cancer research and metabolism. We used animal models to study several kinds of tumors – lung cancer, skin cancer, colon cancer and pancreatic cancer. We consistently observed that the blood vessels in the muscles became fewer and leakier even before the muscle weakness set in.

We also found that tumors release a protein called Activin-A, which acts on blood vessels to cause the leakiness and, ultimately, loss of blood vessels in the muscle. When we used a gene therapy to restore blood vessel health by counteracting the effects of Activin-A, we were able to prevent the muscle loss.

To see whether the same holds true in people, we examined the muscles of patients who had died of cancer and found that their muscles contained fewer blood vessels than expected.

Why Activin-A matters

Millions of cancer survivors struggle with muscle weakness, which can be so profound that they may have difficulties walking up a couple of flights of stairs or going shopping for groceries on their own.

Severe muscle weakness and muscle loss during cancer is called cancer cachexia, which occurs in up to 80% of patients with advanced cancer.

Recent research indicates that cachexia is far more common among cancer patients than previously suspected, with approximately half the patients who see their cancer doctor for the first time already showing signs of muscle weakness.

Importantly, cachexia can persist even after the cancer is successfully treated and cured. This can have a devastating impact on the quality of life for cancer survivors.

Our discovery that the loss of blood vessel function in the muscles occurs early on during the progression of the cancer suggests that fixing blood vessels in cancer patients and cancer survivors could be a new way to prevent or reverse cachexia.

The reasons for the muscle loss in cancer are complicated and involve poor nutrition due to loss of appetite and inflammation, which are initially caused by the tumor but persist even when the tumor is removed.

Older man leaning forward over his kitchen sink, suggesting he is not feeling well.
New research shows that lack of sufficient blood vessels could explain why many cancer survivors still experience muscle weakness even after the tumor is removed.
FG Trade/E+ via Getty Images

What other research is being done

There are currently no treatments approved by the Food and Drug Administration for cachexia, but new therapies are on the horizon.

One such therapy is an antibody drug that targets the molecule GDF-15, a protein that is thought to suppress appetite.

Other studies are using a combination of targeted nutrition and exercise programs to help patients with cancer cachexia regain muscle mass and muscle strength.

All these studies suggest that we will need a combination of approaches to enhance exercise, nutrition, appetite, muscle regeneration and – as we propose – blood vessel health.

What’s next

We are now evaluating drugs and exercise programs that are known to improve blood vessel health. Repurposing these treatments that are traditionally designed for cardiovascular patients could be a rapid way to help cancer patients regain muscle strength.

We hope that our work highlights how important it is for cancer patients to receive comprehensive medical care, which includes improving cardiovascular health and overall quality of life.

The Research Brief is a short take on interesting academic work.

The Conversation

Jalees Rehman receives funding from the National Institutes of Health.

ref. Muscle weakness in cancer survivors may be caused by treatable weakness in blood vessels – new research – https://theconversation.com/muscle-weakness-in-cancer-survivors-may-be-caused-by-treatable-weakness-in-blood-vessels-new-research-259765

IRS says churches may endorse political candidates despite a decades-old federal statute barring them from doing that

Source: The Conversation – USA (3) – By Lloyd Hitoshi Mayer, Professor of Law, University of Notre Dame

Former New York Gov. Andrew Cuomo speaks at a church in Harlem during his failed campaign to become the Democratic nominee in the 2025 New York City mayoral race. Mostafa Bassim/Anadolu via Getty Images

Churches and other houses of worship can endorse political candidates without risking the loss of their tax-exempt status, the Internal Revenue Service said in a legal document the tax-collection agency filed on July 7, 2025. This guidance is at odds with a law Congress passed more than 70 years ago that’s known as the Johnson Amendment and applies to all charitable nonprofits, whether they are secular or religious.

The Conversation U.S. asked Lloyd Hitoshi Mayer, a law professor who has studied the regulation of churches’ political activities, to explain what this statute is, how the IRS seeks to change its purview and why this matters.

What’s the Johnson Amendment?

The Johnson Amendment is a provision that Lyndon B. Johnson added to a tax bill passed by Congress in 1954, when he was a senator. It says that any charity that wants to be tax-exempt under section 501(c)(3) of the Internal Revenue Code cannot “participate in, or intervene in … any political campaign on behalf of … any candidate for public office.” In the U.S., all houses of worship are designated as charities by the IRS.

The IRS has interpreted the Johnson Amendment for more than 70 years to mean that charities cannot speak in favor of political candidates or take any other action that supports or opposes them.

The IRS is prohibited from publicly disclosing audits of specific tax-exempt nonprofits under taxpayer privacy laws, so there’s no way to know the extent to which the law has been enforced. The public learns about audits tied to possible Johnson Amendment violations only if the nonprofit discloses that information or the IRS revokes its tax-exempt status.

However, the IRS did conduct a broad enforcement campaign in the 2000s known as the Political Activity Compliance Initiative. The reports it issued for 2004 and 2006 stated that it had audited hundreds of charities, including churches, for possible Johnson Amendment violations. The IRS generally found that most violations were minor and often inadvertent – warranting no more than a warning letter.

It’s unknown whether any nonprofits lost their tax-exempt status as a result of this initiative, which the IRS appears to have ended in 2008.

There’s only one known instance of a church losing its tax-exempt status because it violated the Johnson Amendment. In that case, a church in Binghamton, New York, published full-page newspaper ads criticizing Bill Clinton during his 1992 presidential campaign.

Why does the Trump administration want to change its enforcement?

The National Religious Broadcasters, two churches and another religious nonprofit sued the IRS in 2024, challenging the constitutionality of the Johnson Amendment on First Amendment free speech and free exercise of religion grounds and on Fifth Amendment due process grounds. The plaintiffs also argued that applying the Johnson Amendment to religious nonprofits violated the federal Religious Freedom Restoration Act.

The plaintiffs and the IRS filed a joint motion on July 7 to settle the case. They asked the U.S. District Court for the Eastern District of Texas to order the IRS not to enforce the Johnson Amendment against the two church plaintiffs. They also asked the court to incorporate in its order a statement that the Johnson Amendment does not apply to “speech by a house of worship to its congregation, in connection with religious services through its customary channels of communication on matters of faith, concerning electoral politics viewed through the lens of religious faith.”

This represents the first time the IRS has said there’s an exception to the Johnson Amendment for houses of worship. While lawmakers have periodically sought to repeal or modify the statute, neither chamber of Congress has ever passed such legislation.

President Donald Trump asserted during his first term that he had “gotten rid of” the Johnson Amendment. But that referred to his 2017 executive order, which directed the Treasury Department – to which the IRS belongs – to respect the freedom of religious organizations to speak about political issues, “consistent with law.”

Under the IRS interpretation of the Johnson Amendment at the time, it would not have been consistent with law for churches or other religious nonprofits to support or oppose candidates for elected public office.

How might the IRS treat religious political activity differently?

If the court approves this new joint motion, that order will only apply to the two churches that are plaintiffs in the case – not other religious nonprofits or the National Religious Broadcasters that joined them in suing the IRS. But the filing tells other houses of worship that the IRS will not enforce the Johnson Amendment against them for speech to their congregations, at least not during the Trump administration.

I think that the government may have a hard time applying this exception for several reasons.

The IRS will have to determine when a charity is a “church,” the term the IRS uses for a house of worship of any faith. That has become increasingly difficult in recent years, as some organizations that stretch the conventional definition of a church have won IRS recognition as such.

The IRS will also have to clarify what constitutes speech made “in connection with religious services” and what are “customary channels of communication.” For example, it’s unclear whether inviting a political candidate to address the congregation about how their religious faith relates to their candidacy falls within the exception.

Donald Trump shakes a woman's hand in a sanctuary with a large cross and several American flags.
Donald Trump participates in a community roundtable at a church in Detroit during his successful 2024 presidential campaign.
Jim Watson/AFP via Getty Images

Will only conservative politicians benefit?

Establishing this exception does not necessarily give conservative politicians any advantages.

It is true that recent attempts to repeal or modify the Johnson Amendment are associated with conservative Christian groups such as the Alliance Defending Freedom, which represented the plaintiffs in this lawsuit.

But historically, many progressive houses of worship have also pushed against the Johnson Amendment, including Black churches that often serve as political as well as religious centers for their communities.

A Texas Tribune and ProPublica investigation documented apparent violations of the Johnson Amendment in the 2022 midterm elections by almost 20 churches in Texas from across the political spectrum. Interestingly, most of the church leaders involved were aware of the amendment.

Many said they were not violating it because they avoided explicitly endorsing candidates, while at the same time clearly expressing their support for specific candidates by, for example, praying for an individual who was identified to the congregation as a candidate.

How could this new guidance change political campaigning?

Americans generally don’t want to see churches get involved in politics; that includes majorities in most denominations. Nonetheless, church leaders of all stripes who were already inclined to support particular candidates will probably feel emboldened to explicitly endorse candidates when preaching to their congregations.

There are two ways that this new exception could do more than that.

First, it isn’t limited to sermons by pastors, priests, rabbis, imams and other religious leaders. It extends to any speech to a house of worship’s congregation “in connection with religious services through its customary channels of communication on matters of faith.” It therefore almost certainly includes church bulletins and other written materials distributed as part of a religious service.

What’s less clear is whether “customary channels of communication” includes people who watch religious services streamed over the internet or on TV, rather than just those who attend services in person.

Second, the change will increase pressure on church leaders to support candidates.

For example, George W. Bush’s 2004 campaign reportedly sought to recruit thousands of congregations to distribute campaign information. It’s natural to expect such efforts to multiply and become more direct for both Democratic and Republican candidates from now on.

And church leaders will also likely face pressure from politically active congregants to endorse candidates, and have a harder time resisting it.

The Conversation

Lloyd Hitoshi Mayer previously worked at the law firm of Caplin & Drysdale, Chartered, including when the firm represented All Saints Episcopal Church of Pasadena, California, with respect to an IRS audit of the church for allegedly violating the Johnson Amendment. He was not personally involved in this representation.

ref. IRS says churches may endorse political candidates despite a decades-old federal statute barring them from doing that – https://theconversation.com/irs-says-churches-may-endorse-political-candidates-despite-a-decades-old-federal-statute-barring-them-from-doing-that-260854

Spacecraft equipped with a solar sail could deliver earlier warnings of space weather threats to Earth’s technologies

Source: The Conversation – USA – By Mojtaba Akhavan-Tafti, Associate Research Scientist, University of Michigan

The SWIFT constellation, shown not to scale in this illustration, will fly farther than its predecessors to improve space weather warning time. Steve Alvey

The burgeoning space industry and the technologies society increasingly relies on – electric grids, aviation and telecommunications – are all vulnerable to the same threat: space weather.

Space weather encompasses any variations in the space environment between the Sun and Earth. One common type of space weather event is called an interplanetary coronal mass ejection.

These ejections are bundles of magnetic fields and particles that originate from the Sun. They can travel at speeds up to 1,242 miles per second (2,000 kilometers per second) and may cause geomagnetic storms.

They create beautiful aurora displays – like the northern lights you can sometimes see in the skies – but can also disrupt satellite operations, shut down the electric grid and expose astronauts aboard future crewed missions to the Moon and Mars to lethal doses of radiation.

An animation shows a coronal mass ejection erupting from the Sun.

I’m a heliophysicist and space weather expert, and my team is leading the development of a next-generation satellite constellation called SWIFT, which is designed to predict potentially dangerous space weather events in advance. Our goal is to forecast extreme space weather more accurately and earlier.

The dangers of space weather

Commercial interests now make up a big part of space exploration, focusing on space tourism, building satellite networks, and working toward extracting resources from the Moon and nearby asteroids.

Space is also a critical domain for military operations. Satellites provide essential capabilities for military communication, surveillance, navigation and intelligence.

As countries such as the U.S. grow to depend on infrastructure in space, extreme space weather events pose a greater threat. Today, space weather threatens up to US$2.7 trillion in assets globally.

In September 1859, the most powerful space weather event on record, known as the Carrington event, supercharged telegraph lines and caused fires in North America and Europe. In August 1972, another extreme solar storm erupted between two crewed Apollo missions to the Moon; had astronauts been in space at the time, the radiation dose could have been fatal. More recently, in February 2022, SpaceX lost 39 of its 49 newly launched Starlink satellites because of a moderate space weather event.

Today’s space weather monitors

Space weather services heavily rely on satellites that monitor the solar wind, which is made up of magnetic field lines and particles coming from the Sun, and communicate their observations back to Earth. Scientists can then compare those observations with historical records to predict space weather and explore how the Earth may respond to the observed changes in the solar wind.

A drawing showing the Earth surrounded by a magnetic field with solar energy compressing one side.
The Earth’s magnetic field acts as a shield that deflects most solar wind.
NASA via Wikimedia Commons

Earth’s magnetic field naturally protects living things and Earth-orbiting satellites from most adverse effects of space weather. However, extreme space weather events may compress – or in some cases, peel back – the Earth’s magnetic shield.

This process allows solar wind particles to make it into our protected environment – the magnetosphere – exposing satellites and astronauts onboard space stations to harsh conditions.

Most satellites that continuously monitor Earth-bound space weather orbit relatively close to the planet. Some satellites are positioned in low Earth orbit, about 100 miles (161 kilometers) above Earth’s surface, while others are in geosynchronous orbit, approximately 25,000 miles (40,000 km) away.

At these distances, the satellites remain within Earth’s protective magnetic shield and can reliably measure the planet’s response to space weather conditions. However, to more directly study incoming solar wind, researchers use additional satellites located farther upstream – hundreds of thousands of miles from Earth.

The U.S., the European Space Agency and India all operate space weather monitoring satellites positioned around the L1 Lagrange point – nearly 900,000 miles (1,450,000 km) from Earth – where the gravitational forces of the Sun and Earth balance. From this vantage point, space weather monitors can provide up to 40 minutes of advance warning for incoming solar events.

A diagram showing the Earth, the Sun and the Moon, with the five Lagrange points labeled. L1 is beyond the Moon's orbital path around Earth, closer to the Sun.
The Lagrange points are positions where a small object, such as a spacecraft, can hold its place relative to two larger bodies – here, the Earth orbiting the Sun. The L1 point lies between the Earth and the Sun, where the gravitational pulls of the two bodies on the spacecraft effectively balance out. Since the Sun’s pull is so much stronger than the Earth’s, the point sits much closer to Earth.
Xander89/Wikimedia Commons, CC BY-SA

Advance warning for space weather

Increasing the warning time beyond the current 40 minutes would help satellite operators, electric grid planners, flight directors, astronauts and Space Force officers better prepare for extreme space weather events.

For instance, during geomagnetic storms, the atmosphere heats up and expands, increasing drag on satellites in low Earth orbit. With enough advance warning, operators can update their drag calculations and use the satellites’ propulsion systems to boost them to higher orbits before they descend and burn up.

Airlines could change their routes to avoid exposing passengers and staff to high radiation doses during geomagnetic storms. And future astronauts on the way to or working on the Moon or Mars, which lack protection from these particles, could be alerted in advance to take cover.

Aurora lovers would also appreciate having more time to get to their favorite viewing destinations.

The Space Weather Investigation Frontier

My team and I have been developing a new space weather satellite constellation, named the Space Weather Investigation Frontier. SWIFT will, for the first time, place a space weather monitor beyond the L1 point, at 1.3 million miles (2.1 million kilometers) from Earth. This distance would allow scientists to inform decision-makers of any Earth-bound space weather events up to nearly 60 minutes before arrival.
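
As a rough, back-of-envelope illustration – not the SWIFT team’s forecasting model – the warning time is essentially the monitor’s upstream distance divided by the solar wind speed. The short Python sketch below assumes a typical fast-disturbance speed of about 600 kilometers per second; the distances are the figures quoted above, and a faster ejection would shrink the lead time proportionally.

```python
# Back-of-envelope warning-time estimate; assumed wind speed, not SWIFT's model.

L1_DISTANCE_KM = 1.45e6     # classical L1 monitors, roughly 900,000 miles from Earth
SWIFT_DISTANCE_KM = 2.1e6   # proposed sub-L1 SWIFT monitor, roughly 1.3 million miles

def warning_minutes(distance_km: float, wind_speed_km_s: float = 600.0) -> float:
    """Minutes between a disturbance passing the monitor and reaching Earth."""
    return distance_km / wind_speed_km_s / 60.0

print(f"L1 monitor:    ~{warning_minutes(L1_DISTANCE_KM):.0f} minutes of warning")
print(f"SWIFT monitor: ~{warning_minutes(SWIFT_DISTANCE_KM):.0f} minutes of warning")
# Prints roughly 40 and 58 minutes, consistent with the figures quoted above.
```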

Satellites with traditional chemical and electric propulsion systems cannot maintain an orbit at that location – farther from Earth and closer to the Sun – for long. This is because they would need to continuously burn fuel to counteract the Sun’s gravitational pull.

To address this issue, our team has spent decades designing and developing a new propulsion system. Our solution is designed to affordably reach a distance that is closer to the Sun than the traditional L1 point, and to operate there reliably for more than a decade by harnessing an abundant and reliable resource – sunlight.

SWIFT would use a fuelless propulsion system called a solar sail to reach its orbit. A solar sail is a hair-thin reflective surface – essentially a very thin mirror – that spans about a third of a football field. It balances the push from sunlight reflecting off its surface, which drives it away from the Sun, against the Sun’s gravity, which pulls it inward.

While a sailboat harnesses the lift created by wind flowing over its curved sails to move across water, a solar sail uses the momentum of photons from sunlight, reflected off its large, shiny sail, to propel a spacecraft through space. Both the sailboat and solar sail exploit the transfer of energy from their respective environments to drive motion without relying on traditional propellants.

A solar sail could enable SWIFT to enter an otherwise unstable sub-L1 orbit without the risk of running out of fuel.
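
To put rough numbers on that balance, here is a minimal sketch assuming a Solar Cruiser-sized sail and an illustrative 100-kilogram spacecraft; the mass is an assumption for illustration, not a SWIFT specification. It estimates the sunlight push on an ideal, fully reflective sail at Earth’s distance from the Sun and compares it with the Sun’s gravitational pull.

```python
# Rough force-balance illustration; assumed spacecraft mass, idealized sail.

SOLAR_CONSTANT_W_M2 = 1361.0   # sunlight power per square meter at Earth's distance
SPEED_OF_LIGHT_M_S = 2.998e8
GM_SUN = 1.327e20              # Sun's gravitational parameter, m^3/s^2
AU_M = 1.496e11                # Earth-Sun distance in meters

SAIL_AREA_M2 = 1653.0          # Solar Cruiser-class sail area from the article
SPACECRAFT_MASS_KG = 100.0     # assumed, illustrative mass

# A fully reflective sail reverses each photon's momentum, so the light
# pressure is 2 * solar constant / speed of light.
photon_force_n = 2.0 * SOLAR_CONSTANT_W_M2 / SPEED_OF_LIGHT_M_S * SAIL_AREA_M2

sail_accel = photon_force_n / SPACECRAFT_MASS_KG   # m/s^2, directed away from the Sun
solar_gravity = GM_SUN / AU_M**2                   # m/s^2, directed toward the Sun

print(f"Sunlight push on the sail:        {photon_force_n * 1000:.1f} millinewtons")
print(f"Sail acceleration:                {sail_accel:.1e} m/s^2")
print(f"Solar gravity at Earth's orbit:   {solar_gravity:.1e} m/s^2")
print(f"Fraction of solar gravity offset: {sail_accel / solar_gravity:.1%}")
```

Because both sunlight pressure and solar gravity fall off with the square of the distance from the Sun, offsetting even this small, constant fraction of solar gravity is enough to shift the spacecraft’s equilibrium point slightly sunward of the classical L1 point – without burning any propellant.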

NASA successfully launched its first solar sail in 2010. This in-space demonstration, named NanoSail-D2, featured a 107-square-foot (10-square-meter) sail and was placed in low Earth orbit. That same year, Japan’s space agency, JAXA, launched a larger solar sail mission, IKAROS, which deployed a 2,110-square-foot (196-square-meter) sail in the solar wind and successfully flew past Venus.

An illustration of a solar sail, which looks like a large, thin square of foil, flying through space.
An illustration of the solar sail used on the IKAROS space probe. These sails use light particles as propulsion.
Andrzej Mirecki, CC BY-SA

The Planetary Society and NASA followed up by launching two sails into low Earth orbit: LightSail, with an area of 344 square feet (32 square meters), and the Advanced Composite Solar Sail System, with an area of 860 square feet (80 square meters).

The SWIFT team’s solar sail demonstration mission, Solar Cruiser, will be equipped with a much larger sail – it will have an area of 17,793 square feet (1,653 square meters) – and could launch as early as 2029. We successfully deployed a quadrant of the sail on Earth early last year.

If successful, the Solar Cruiser mission will pave the way for a small satellite constellation that will monitor the solar wind.

To transport it to space, the team will meticulously fold and tightly pack the sail inside a small canister. The biggest challenge to overcome will be deploying the sail once in space and using it to guide the satellite along its orbital path.

Solar Cruiser’s success would clear the way for SWIFT’s constellation of four satellites: one equipped with sail propulsion, set to be placed in an orbit beyond L1, and three smaller satellites with chemical propulsion in orbit at the L1 Lagrange point.

The satellites will be indefinitely parked at and beyond L1, collecting data in the solar wind without interruption. Each of the four satellites can observe the solar wind from different locations, helping scientists better predict how it may evolve before reaching Earth.

As modern life depends more on space infrastructure, continuing to invest in space weather prediction can protect both space- and ground-based technologies.

The Conversation

Mojtaba Akhavan-Tafti receives funding from NASA. He is the Principal Investigator of Space Weather Investigation Frontier (SWIFT).

ref. Spacecraft equipped with a solar sail could deliver earlier warnings of space weather threats to Earth’s technologies – https://theconversation.com/spacecraft-equipped-with-a-solar-sail-could-deliver-earlier-warnings-of-space-weather-threats-to-earths-technologies-259877