Certain brain injuries may be linked to violent crime – identifying them could help reveal how people make moral choices

Source: The Conversation – USA – By Christopher M. Filley, Professor Emeritus of Neurology, University of Colorado Anschutz Medical Campus

Neurological evidence is widely used in murder trials, but it’s often unclear how to interpret it. gorodenkoff/iStock via Getty Images Plus

On Oct. 25, 2023, a 40-year-old man named Robert Card opened fire with a semi-automatic rifle at a bowling alley and nearby bar in Lewiston, Maine, killing 18 people and wounding 13 others. Card was found dead by suicide two days later. His autopsy revealed extensive damage to the white matter of his brain thought to be related to a traumatic brain injury, which some neurologists proposed may have played a role in his murderous actions.

Neurological evidence such as magnetic resonance imaging, or MRI, is widely used in court to show whether and to what extent brain damage induced a person to commit a violent act. That type of evidence was introduced in 12% of all murder trials and 25% of death penalty trials between 2014 and 2024. But it’s often unclear how such evidence should be interpreted because there’s no agreement on what specific brain injuries could trigger behavioral shifts that might make someone more likely to commit crimes.

We are two behavioral neurologists and a philosopher of neuroscience who have been collaborating over the past six years to investigate whether damage to specific regions of the brain might somehow contribute to people’s decisions to commit seemingly random acts of violence – as Card did.

With new technologies that go beyond simply visualizing the brain to analyze how different brain regions are connected, neuroscientists can now examine specific brain regions involved in decision-making and how brain damage may predispose a person to criminal conduct. This work may in turn shed light on how exactly the brain plays a role in people’s capacity to make moral choices.

Linking brain and behavior

The observation that brain damage can cause changes to behavior stretches back hundreds of years. In the 1860s, the French physician Paul Broca was one of the first in the history of modern neurology to link a mental capacity to a specific brain region. Examining the autopsied brain of a man who had lost the ability to speak after a stroke, Broca found damage to an area roughly beneath the left temple.

Broca could study his patients’ brains only at autopsy. So he concluded that damage to this single area caused the patient’s speech loss – and therefore that this area governs people’s ability to produce speech. The idea that cognitive functions were localized to specific brain areas persisted for well over a century, but researchers today know the picture is more complicated.

Researchers use powerful brain imaging technologies to identify how specific brain areas are involved in a variety of behaviors.

As brain imaging tools such as MRI have improved since the early 2000s, it’s become increasingly possible to safely visualize people’s brains in stunning detail while they are alive. Meanwhile, other techniques for mapping connections between brain regions have helped reveal coordinated patterns of activity across a network of brain areas related to certain mental tasks.

With these tools, investigators can detect areas that have been damaged by brain disorders, such as strokes, and test whether that damage can be linked to specific changes in behavior. Then they can explore how that brain region interacts with others in the same network to get a more nuanced view of how the brain regulates those behaviors.

This approach can be applied to any behavior, including crime and immorality.

White matter and criminality

Complex human behaviors emerge from interacting networks that are made up of two types of brain tissue: gray matter and white matter.

Gray matter consists of regions of nerve cell bodies and branching nerve fibers called dendrites, as well as points of connection between nerve cells. It’s in these areas that the brain’s heavy computational work is done. White matter, so named because of a pale, fatty substance called myelin that wraps the bundles of nerves, carries information between gray matter areas like highways in the brain.

Brain imaging studies of criminality going back to 2009 have suggested that damage to a swath of white matter called the right uncinate fasciculus is somehow involved when people commit violent acts. This tract connects the right amygdala, an almond-shaped structure deep in the brain involved in emotional processing, with the right orbitofrontal cortex, a region in the front of the brain involved in complex decision-making. However, it wasn’t clear from these studies whether damage to this tract caused people to commit crimes or was just a coincidence.

In a 2025 study, we analyzed 17 cases from the medical literature in which people with no criminal history committed crimes such as murder, assault and rape after experiencing brain damage from a stroke, tumor or traumatic brain injury. We first mapped the location of damage in their brains using an atlas of brain circuitry derived from people whose brains were uninjured. Then we compared imaging of the damage with brain imaging from more than 700 people who had not committed crimes but who had a brain injury causing a different symptom, such as memory loss or depression.
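
The logic of that comparison can be boiled down to a toy calculation. The sketch below is purely illustrative: the case counts, control counts and use of Fisher’s exact test are assumptions for demonstration, not the study’s actual data or its lesion network mapping pipeline. It simply asks whether a given tract turns up more often in one group of lesions than in the other.

```python
# Illustrative sketch only: toy counts of how often a given white matter tract
# is implicated in two groups of lesion cases. All numbers are invented for
# demonstration and are not the study's data.
from scipy.stats import fisher_exact

crime_cases = {"tract_damaged": 15, "tract_spared": 2}       # hypothetical 17 index cases
control_cases = {"tract_damaged": 120, "tract_spared": 580}  # hypothetical ~700 comparison lesions

table = [
    [crime_cases["tract_damaged"], crime_cases["tract_spared"]],
    [control_cases["tract_damaged"], control_cases["tract_spared"]],
]

# Fisher's exact test asks how unlikely this imbalance would be by chance alone.
odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio: {odds_ratio:.1f}, p-value: {p_value:.4f}")
```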

An MRI scan of the brain with the right uncinate fasciculus highlighted
Brain injuries that may play a role in violent criminal behavior damage white matter connections in the brain, shown here in orange and yellow, especially a specific tract called the right uncinate fasciculus.
Isaiah Kletenik, CC BY-NC-ND

In the people who committed crimes, we found the brain region that popped up most often was the right uncinate fasciculus. Our study aligns with past research in linking criminal behavior to this brain area, but the way we conducted it makes our findings more definitive: These people committed their crimes only after they sustained their brain injuries, which suggests that damage to the right uncinate fasciculus played a role in triggering their criminal behavior.

These findings have an intriguing connection to research on morality. Other studies have linked strokes that damaged the right uncinate fasciculus to loss of empathy, suggesting this tract somehow regulates emotions that affect moral conduct. Meanwhile, other work has shown that people with psychopathy, which often aligns with immoral behavior, have abnormalities in the amygdala and orbitofrontal cortex, the regions directly connected by the uncinate fasciculus.

Neuroscientists are now testing whether the right uncinate fasciculus may be synthesizing information within a network of brain regions dedicated to moral values.

Making sense of it all

As intriguing as these findings are, it is important to note that many people with damage to their right uncinate fasciculus do not commit violent crimes. Similarly, most people who commit crimes do not have damage to this tract. This means that even if damage to this area can contribute to criminality, it’s only one of many possible factors underlying it.

Still, knowing that neurological damage to a specific brain structure can increase a person’s risk of committing a violent crime can be helpful in various contexts. For example, it can help the legal system assess neurological evidence when judging criminal responsibility. Similarly, doctors may be able to use this knowledge to develop specific interventions for people with brain disorders or injuries.

More broadly, understanding the neurological roots of morality and moral decision-making provides a bridge between science and society, revealing constraints that define how and why people make choices.

The Conversation

Isaiah Kletenik receives funding from the NIH.

Nothing to disclose.

Christopher M. Filley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Certain brain injuries may be linked to violent crime – identifying them could help reveal how people make moral choices – https://theconversation.com/certain-brain-injuries-may-be-linked-to-violent-crime-identifying-them-could-help-reveal-how-people-make-moral-choices-262034

Building with air – how nature’s hole-filled blueprints shape manufacturing

Source: The Conversation – USA – By Anne Schmitz, Associate Professor of Engineering, University of Wisconsin-Stout

Engineers use structures found in nature – like the honeycomb – to create lightweight, sturdy materials. Matthew T. Rader, CC BY-NC-SA

If you break open a chicken bone, you won’t find a solid mass of white material inside. Instead, you will see a complex, spongelike network of tiny struts and pillars, and a lot of empty space.

It looks fragile, yet that internal structure allows a bird’s wing to withstand high winds while remaining light enough for flight. Nature rarely builds with solid blocks. Instead, it builds with clever, porous patterns to maximize strength while minimizing weight.

A cross-section view of bone, showing large, roughly circular holes in a white material.
Cross-section of the bone of a bird’s skull: Holes keep the material light enough that the bird can fly, but it’s still sturdy.
Steve Gschmeissner/Science Photo Library via Getty Images

Human engineers have always envied this efficiency. You can see it in the hexagonal perfection of a honeycomb, which uses the least amount of wax to store the most honey, and in the internal spiraling architecture of seashells that resist crushing pressures.

For centuries, however, manufacturing limitations meant engineers couldn’t easily copy these natural designs. Traditional manufacturing has usually been subtractive, meaning it starts with a heavy block of metal that is carved down, or formative, which entails pouring liquid plastic into a mold. Neither method can easily create complex, spongelike interiors hidden inside a solid shell.

If engineers wanted to make a part stronger, they generally had to make it thicker and heavier. This approach is often inefficient, wastes material and results in heavier products that require more energy to transport.

I am a mechanical engineer and associate professor at the University of Wisconsin-Stout, where I research the intersection of advanced manufacturing and biology. For several years, my work has focused on using additive manufacturing to create materials that, like a bird’s wing, are both incredibly light and capable of handling intense physical stress. While these “holey” designs have existed in nature for millions of years, it is only recently that 3D printing has made it possible for us to replicate them in the lab.

The invisible architecture

That paradigm changed as additive manufacturing, commonly known as 3D printing, matured from a niche prototyping tool into a robust industrial force. While the technology was first patented in the 1980s, it truly took off over the past decade as it became capable of producing end-use parts for high-stakes industries like aerospace and health care.

A 3D printer printing out an object filled with holes.
3D printing makes it far easier to manufacture lightweight, hole-filled materials.
kynny/iStock via Getty Images

Instead of cutting away material, printers build objects layer by layer, depositing plastic or metal powder only where it’s needed based on a digital file. This technology unlocked a new frontier in materials science focused on mesostructures.

A mesostructure represents the in-between scale. It is not the microscopic atomic makeup of the material, nor is it the macroscopic overall shape of the object, like a whole shoe. It is the internal architecture, including the engineered pattern of air and material hidden inside.

It’s the difference between a solid brick and the intricate iron latticework of the Eiffel Tower. Both are strong, but one uses vastly less material to achieve that strength because of how the empty space is arranged.

From the lab to your closet

While the concept of using additive manufacturing to create parts that take advantage of mesostructures started in research labs around the year 2000, consumers are now seeing these bio-inspired designs in everyday products.

The footwear industry is a prime example. If you look closely at the soles of certain high-end running shoes, you won’t see a solid block of foam. Instead, you will see a complex, weblike lattice structure that looks suspiciously like the inside of a bird bone. This printed design mimics the springiness and weight distribution found in natural porous structures, offering tuned performance that solid foam cannot match.

Engineers use the same principle to improve safety gear. Modern bike helmets and football helmet liners are beginning to replace traditional foam padding with 3D-printed lattices. These tiny, repeating jungle gym structures are designed to crumple and rebound to absorb impact energy more efficiently than solid materials, much like how the porous bone inside your own skull protects your brain.

Testing the limits

In my research, I look for the rules nature uses to build strong objects.

For example, seashells are tough because they are built like a brick wall, with hard mineral blocks held together by a thin layer of stretchy glue. This pattern allows the hard bricks to slide past each other instead of snapping when put under pressure. The shell absorbs energy and stops cracks from spreading, which makes the final structure much tougher than a solid piece of the same material.

I use advanced computer models to crush thousands of virtual designs to see exactly when and how they fail. I have even used neural networks, a type of artificial intelligence, to find the best patterns for absorbing energy.

My studies have shown that a wavy design can be very effective, especially when we fine-tune the thickness of the lines and the number of turns in the pattern. By finding these perfect combinations, we can design products that fail gradually and safely – much like the crumple zone on the front of a car.
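
A stripped-down illustration of this kind of design sweep might look like the sketch below. It is not the simulations or neural networks used in this research: the scoring function is a made-up placeholder standing in for a crush simulation, and the parameter ranges are arbitrary. It only shows the basic loop of scoring many candidate wavy designs and keeping the best combination of line thickness and number of turns.

```python
# Illustrative sketch only: brute-force sweep over wavy-lattice design parameters.
# The scoring function below is a placeholder, not a real material model or
# trained surrogate; swap in simulation output in a real workflow.
from itertools import product

def surrogate_energy_absorbed(thickness_mm: float, n_waves: int) -> float:
    """Placeholder score that rewards moderate thickness and wave counts.
    Invented for demonstration only."""
    return -(thickness_mm - 0.8) ** 2 - 0.05 * (n_waves - 6) ** 2

thicknesses = [0.4, 0.6, 0.8, 1.0, 1.2]  # candidate wall thicknesses, in mm
wave_counts = [2, 4, 6, 8, 10]           # candidate number of turns in the pattern

# Evaluate every combination and keep the one with the highest score.
best = max(product(thicknesses, wave_counts),
           key=lambda design: surrogate_energy_absorbed(*design))
print(f"Best candidate: thickness = {best[0]} mm, waves = {best[1]}")
```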

By understanding the mechanics of these structures, engineers can tailor them for specific jobs, making one area of a product stiff and another area flexible within a single continuous printed part.

The sustainable future

Beyond performance, mimicking nature’s less-is-more approach is a significant win for sustainability. By “printing air” into the internal structure of a product, manufacturers can use significantly less raw material while maintaining the necessary strength.

As industrial 3D printing becomes faster and cheaper, manufacturing will move further away from the solid-block era and closer to the elegant efficiency of the biological world. Nature has spent millions of years perfecting these blueprints through evolution – and engineers are finally learning how to read them.

The Conversation

Anne Schmitz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Building with air – how nature’s hole-filled blueprints shape manufacturing – https://theconversation.com/building-with-air-how-natures-hole-filled-blueprints-shape-manufacturing-270640

The Supreme Court may soon diminish Black political power, undoing generations of gains

Source: The Conversation – USA – By Robert D. Bland, Assistant Professor of History and Africana Studies, University of Tennessee

U.S. Rep. Cleo Fields, a Democrat who represents portions of central Louisiana in the House, could lose his seat if the Supreme Court invalidates Louisiana’s congressional map. AP Photo/Gerald Herbert

Back in 2013, the Supreme Court tossed out a key provision of the Voting Rights Act regarding federal oversight of elections. It appears poised to abolish another pillar of the law.

In a case known as Louisiana v. Callais, the court appears ready to rule against Louisiana and its Black voters. In doing so, the court may well abolish Section 2 of the Voting Rights Act, a provision that prohibits any discriminatory voting practice or election rule that results in less opportunity for political clout for minority groups.

The dismantling of Section 2 would open the floodgates for widespread vote dilution by allowing state legislatures, primarily in the South, to redraw political districts in ways that weaken the voting power of racial minorities.

The case was brought by a group of Louisiana citizens who declared that the federal mandate under Section 2 to draw a second majority-Black district violated the equal protection clause of the 14th Amendment and thus served as an unconstitutional act of racial gerrymandering.

There would be considerable historical irony if the court decides to use the 14th Amendment to provide the legal cover for reversing a generation of Black political progress in the South. Initially designed to enshrine federal civil rights protections for freed people facing a battery of discriminatory “Black Codes” in the postbellum South, the 14th Amendment’s equal protection clause has been the foundation of the nation’s modern rights-based legal order, ensuring that all U.S. citizens are treated fairly and preventing the government from engaging in explicit discrimination.

The Reconstruction-era amendments to the Constitution, including the 14th Amendment, form the cornerstone of the nation’s “second founding” – and they created the first cohort of Black elected officials.

I am a historian who studies race and memory during the Civil War era. As I highlight in my new book “Requiem for Reconstruction,” the struggle over the nation’s second founding not only highlights how generational political progress can be reversed but also provides a lens into the specific historical origins of racial gerrymandering in the United States.

Without understanding this history – and the forces that unraveled Reconstruction’s initial promise of greater racial justice – we cannot fully comprehend the roots of those forces that are reshaping our contemporary political landscape in a way that I believe subverts the true intentions of the Constitution.

The long history of gerrymandering

Political gerrymandering, or shaping political boundaries to benefit a particular party, has been considered constitutional since the nation’s 18th-century founding, but racial gerrymandering is a practice with roots in the post-Civil War era.

Expanding on the routine practice of redrawing district lines after each decennial census, late 19th-century Democratic state legislatures created a litany of so-called Black districts across the postbellum South.

The nation’s first wave of racial gerrymandering emerged as a response to the political gains Southern Black voters made during the administration of President Ulysses S. Grant in the 1870s. Georgia, Alabama, Florida, Mississippi, North Carolina and Louisiana all elected Black congressmen during that decade. During the 42nd Congress, which met from 1871 to 1873, South Carolina sent Black men to the House from three of its four districts.

A group portrait depicts the first Black senator and a half-dozen Black representatives.
The first Black senator and representatives were elected in the 1870s, as shown in this historic print.
Library of Congress

Initially, the white Democrats who ruled the South responded to the rise of Black political power by crafting racist narratives that insinuated that the emergence of Black voters and Black officeholders was a corruption of the proper political order. These attacks often provided a larger cultural pretext for the campaigns of extralegal political violence that terrorized Black voters in the South, assassinated political leaders, and marred the integrity of several of the region’s major elections.

Election changes

Following these pogroms during the 1870s, southern legislatures began seeking legal remedies to make permanent the counterrevolution of “Redemption,” which sought to undo Reconstruction’s advancement of political equality. A generation before the Jim Crow legal order of segregation and discrimination was established, southern political leaders began to disfranchise Black voters through racial gerrymandering.

These newly created Black districts gained notoriety for their cartographic absurdity. In Mississippi, a shoestring-shaped district was created to snake and swerve alongside the state’s famous river. North Carolina created the “Black Second” to concentrate its African American voters in a single district. Alabama’s “Black Fourth” did similar work, leaving African American voters in the state’s central Black Belt with only one district in which they could affect the outcome.

South Carolina’s “Black Seventh” was perhaps the most notorious of these acts of Reconstruction-era gerrymandering. The district “sliced through county lines and ducked around Charleston back alleys” – anticipating the current trend of sophisticated, computer-targeted political redistricting.

Possessing 30,000 more voters than the next largest congressional district in the state, South Carolina’s Seventh District radically transformed the state’s political landscape, making it impossible for the state’s Black majority to exercise any influence on national politics outside the single racially gerrymandered district.

A map showing South Carolina's congressional districts in the 1880s.
South Carolina’s House map was gerrymandered in 1882 to minimize Black representation, heavily concentrating Black voters in the 7th District.
Library of Congress, Geography and Map Division

Although federal courts during the late 19th century remained painfully silent on the constitutionality of these antidemocratic measures, contemporary observers saw these redistricting efforts as more than a simple act of seeking partisan advantage.

“It was the high-water mark of political ingenuity coupled with rascality, and the merits of its appellation,” observed one Black congressman who represented South Carolina’s 7th District.

Racial gerrymandering in recent times

The political gains of the Civil Rights Movement of the 1950s and 1960s, sometimes called the “Second Reconstruction,” were made tangible by the 1965 Voting Rights Act. The law revived the postbellum 15th Amendment, which prevented states from creating voting restrictions based on race. That amendment had been made a dead letter by Jim Crow state legislatures and an acquiescent Supreme Court.

In contrast to the post-Civil War struggle, the Second Reconstruction had the firm support of the federal courts. The Supreme Court affirmed the principle of “one person, one vote” in its 1962 Baker v. Carr and 1964 Reynolds v. Sims decisions – upending the Solid South’s landscape of political districts that had long been marked by sparsely populated Democratic districts controlled by rural elites.

The Voting Rights Act gave the federal government oversight over any changes in voting policy that might affect historically marginalized groups. Since passage of the 1965 law and its subsequent revisions, racial gerrymandering has largely served the purpose of creating districts that preserve and amplify the political representation of historically marginalized groups.

This generational work may soon be undone by the current Supreme Court. The court, which heard oral arguments in the Louisiana case in October 2025, will release its decision by the end of June 2026.

The Conversation

Robert D. Bland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Supreme Court may soon diminish Black political power, undoing generations of gains – https://theconversation.com/the-supreme-court-may-soon-diminish-black-political-power-undoing-generations-of-gains-274179

Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism

Source: The Conversation – USA – By Nicholas Jacobs, Goldfarb Family Distinguished Chair in American Government, Colby College; Institute for Humane Studies

Protesters against Immigration and Customs Enforcement march through Minneapolis, Minn., on Jan. 25, 2026. Roberto Schmidt/AFP via Getty Images

An unusually large majority of Americans agree that the recent scenes of Immigration and Customs Enforcement operations in Minneapolis are disturbing.

Federal immigration agents have deployed with weapons and tactics more commonly associated with military operations than with civilian law enforcement. The federal government has sidelined state and local officials, and it has cut them out of investigations into whether state and local law has been violated.

It’s understandable to look at what’s happening and reach a familiar conclusion: This looks like a slide into authoritarianism.

There is no question that the threat of democratic backsliding is real. President Donald Trump has long treated federal authority not as a shared constitutional set of rules and obligations but as a personal instrument of control.

In my research on the presidency and state power, including my latest book with Sidney Milkis, “Subverting the Republic,” I have argued that the Trump administration has systematically weakened the norms and practices that once constrained executive power – often by turning federalism itself into a weapon of national administrative power.

But there is another possibility worth taking seriously, one that cuts against Americans’ instincts at moments like this. What if what America is seeing is not institutional collapse but institutional friction: the system doing what it was designed to do, even if it looks ugly when it does it?

For many Americans, federalism is little more than a civics term – something about states’ rights or decentralization.

In practice, however, federalism functions less as a clean division of authority and more as a system for managing conflict among multiple governments with overlapping jurisdiction. Federalism does not block national authority. It ensures that national decisions are subject to challenge, delay and revision by other levels of government.

Dividing up authority

At its core, federalism works through a small number of institutional mechanics – concrete ways of keeping authority divided, exposed and contestable. Minneapolis shows each of them in action.

First, there’s what’s called “jurisdictional overlap.”

State, local and federal authorities all claim the right to govern the same people and places. In Minneapolis, that overlap is unavoidable: Federal immigration agents, state law enforcement, city officials and county prosecutors all assert authority over the same streets, residents and incidents. And they disagree sharply about how that authority should be exercised.

Second, there’s institutional rivalry.

Because authority is divided, no single level of government can fully monopolize legitimacy. And that creates tension. That rivalry is visible in the refusal of state and local officials across the country to simply defer to federal enforcement.

Instead, governors, mayors and attorneys general have turned to courts, demanded access to evidence and challenged efforts to exclude them from investigations. That’s evident in Minneapolis and also in states that have witnessed the administration’s deployment of National Guard troops against the will of their Democratic governors.

It’s easy to imagine a world where state and local prosecutors would not have to jump through so many procedural hoops to get access to evidence in the deaths of citizens within their jurisdiction. But consider the alternative.

If state and local officials could not seek evidence without federal consent – the absence of federalism – or if local institutions had no standing to contest how national power is exercised there, federal authority would operate not just forcefully but without meaningful political constraint.

Protesters fight with law enforcement as tear gas fills the air.
Protesters clash with law enforcement after a federal agent shot and killed a man on Jan. 24, 2026, in Minneapolis, Minn.
Arthur Maiorella/Anadolu via Getty Images

Third, confrontation is local and place-specific.

Federalism pushes conflict into the open. Power struggles become visible, noisy and politically costly. What is easy to miss is why this matters.

Federalism was necessary at the time of the Constitution’s creation because Americans did not share a single political identity. They could not decide whether they were members of one big community or many small communities.

In maintaining their state governments and creating a new federal government, they chose to be both at the same time. And although American politics nationalized to a remarkable degree over the 20th century, federal authority is still exercised in concrete places. Federal authority still must contend with communities that have civic identities and whose moral expectations may differ sharply from those assumed by national actors.

In Minneapolis it has collided with a political community that does not experience federal immigration enforcement as ordinary law enforcement.

The chaos of federalism

Federalism is not designed to keep things calm. It is designed to keep power unsettled – so that authority cannot move smoothly, silently or all at once.

By dividing responsibility and encouraging overlap, federalism ensures that power has to push, explain and defend itself at every step.

“A little chaos,” the scholar Daniel Elazar has said, “is a good thing!”

As chaos goes, though, federalism is more often credited with enabling Trump’s ascent. He won the presidency through the Electoral College – a federalist institution that allocates power by state rather than by national popular vote, rewarding geographically concentrated support even without a national majority.

Partisan redistricting, which takes place in the states, further amplifies that advantage by insulating Republicans in Congress from electoral backlash. And decentralized election administration – in which local officials control voter registration, ballot access and certification – can produce vulnerabilities that Trump has exploited in contesting state certification processes and pressuring local election officials after close losses.

Forceful but accountable

It’s also helpful to understand how Minneapolis differs from the best-known instances of aggressive federal power imposed on unwilling states: the civil rights era.

Hundreds of students protest the arrival of a Black student to their school.
Hundreds of Ole Miss students call for continued segregation on Sept. 20, 1962, as James Meredith prepares to become the first Black man to attend the university.
AP Photo

Then, too, national authority was asserted forcefully. Federal marshals escorted the Black student James Meredith into the University of Mississippi in 1962 over the objections of state officials and local crowds. In Little Rock in 1957, President Dwight D. Eisenhower federalized the Arkansas National Guard and sent in U.S. Army troops after Gov. Orval Faubus attempted to block the racial integration of Central High School.

Violence accompanied these interventions. Riots broke out in Oxford, Mississippi. Protesters and bystanders were killed in clashes with police and federal authorities in Birmingham and Selma, Alabama.

What mattered during the civil rights era was not widespread agreement at the outset – nationwide resistance to integration was fierce and sustained. Rather, it was the way federal authority was exercised through existing constitutional channels.

Presidents acted through courts, statutes and recognizable chains of command. State resistance triggered formal responses. Federal power was forceful, but it remained legible, bounded and institutionally accountable.

Those interventions eventually gained public acceptance. But in that process, federalism was tarnished by its association with Southern racism and recast as an obstacle to progress rather than the institutional framework through which progress was contested and enforced.

After the civil rights era, many Americans came to assume that national power would normally be aligned with progressive moral aims – and that when it was, federalism was a problem to be overcome.

Minneapolis exposes the fragility of that assumption. Federalism does not distinguish between good and bad causes. It does not certify power because history is “on the right side.” It simply keeps power contestable.

When national authority is exercised without broad moral agreement, federalism does not stop it. It only prevents it from settling quietly.

Why talk about federalism now, at a time of widespread public indignation?

Because in the long arc of federalism’s development, it has routinely proven to be the last point in our constitutional system where power runs into opposition. And when authority no longer encounters rival institutions and politically independent officials, authoritarianism stops being an abstraction.

The Conversation

Nicholas Jacobs does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism – https://theconversation.com/federal-power-meets-local-resistance-in-minneapolis-a-case-study-in-how-federalism-staves-off-authoritarianism-274685

Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk

Source: The Conversation – USA (3) – By Michael I Goran, Professor of Pediatrics and Vice Chair for Research, University of Southern California

Consuming fewer highly processed foods and sugary drinks and more whole grains can meaningfully improve your health. fizkes/iStock via Getty Images Plus

The Dietary Guidelines for Americans aim to translate the most up-to-date nutrition science into practical advice for the public as well as to guide federal policy for programs such as school lunches.

But the newest version of the guidelines, released on Jan. 7, 2026, seems to be spurring more confusion than clarity about what people should be eating.

I’ve been studying nutrition and chronic disease for over 35 years, and in 2020 I wrote “Sugarproof,” a book about reducing consumption of added sugars to improve health. I served as a scientific adviser for the new guidelines.

I chose to participate in this process, despite its accelerated and sometimes controversial nature, for two reasons. First, I wanted to help ensure the review was conducted with scientific rigor. And second, federal health officials prioritized examining areas where the evidence has become especially strong – particularly food processing, added sugars and sugary beverages, which closely aligns with my research.

My role, along with colleagues, was to review and synthesize that evidence and help clarify where the science is strongest and most consistent.

The latest dietary guidelines, published on Jan. 7, 2026, have received mixed reviews from nutrition experts.

What’s different in the new dietary guidelines?

The dietary guidelines, first published in 1980, are updated every five years. The newest version differs from the previous versions in a few key ways.

For one thing, the new report is shorter, at nine pages rather than 400. It offers simpler advice directly to the public, whereas previous guidelines were more directed at policymakers and nutrition experts.

Also, the new guidelines reflect an important paradigm shift in defining a healthy diet. For the past half-century, dietary advice has been shaped by a focus on general dietary patterns and targets for individual nutrients, such as protein, fat and carbohydrate. The new guidelines instead emphasize overall diet quality.

Some health and nutrition experts have criticized specific aspects of the guidelines, such as how the current administration developed them, or how they address saturated fat, beef, dairy, protein and alcohol intake. These points have dominated the public discourse. But while some of them are valid, they risk overshadowing the strongest, least controversial and most actionable conclusions from the scientific evidence.

What we found in our scientific assessment was that just a few straightforward changes to your diet – specifically, reducing highly processed foods and sugary drinks, and increasing whole grains – can meaningfully improve your health.

What the evidence actually shows

My research assistants and I evaluated the conclusions of studies on consuming sugar, highly processed foods and whole grains, and assessed how well they were conducted and how likely they were to be biased. We graded the overall quality of the findings as low, moderate or high based on standardized criteria such as their consistency and plausibility.

We found moderate- to high-quality evidence that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.

Similarly, we found moderately solid evidence that people who drink more sugar-sweetened beverages have a higher risk of obesity and Type 2 diabetes, as well as quite conclusive evidence that children who drink fruit juice have a higher risk of obesity. And consuming more beverages containing artificial sweeteners raises the risk of death from any cause and Alzheimer’s disease, based on moderately good evidence.

Whole grains, on the other hand, have a protective effect on health. We found high-quality evidence that people who eat more whole grains have a lower risk of cardiovascular disease and death from any cause. People who consume more dietary fiber, which is abundant in whole grains, have a lower risk of Type 2 diabetes and death from any cause, based on moderate-quality research.

According to the research we evaluated, it’s these aspects – too many highly processed foods and sweetened beverages, and too few whole grain foods – that are significantly contributing to the epidemic of chronic diseases such as obesity, Type 2 diabetes and heart disease in this country – and not protein, beef or dairy intake.

Different types of food on rustic wooden table
Evidence suggests that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.
fcafotodigital/E+ via Getty Images

From scientific evidence to guidelines

Our report was the first one to recommend that the guidelines explicitly mention decreasing consumption of highly processed foods. Overall, though, research on the negative health effects of sugar and processed foods and the beneficial effects of whole grains has been building for many years and has been noted in previous reports.

On the other hand, research on how strongly protein, red meat, saturated fat and dairy are linked with chronic disease risk is much less conclusive. Yet the 2025 guidelines encourage increasing consumption of those foods – a change from previous versions.

The inverted pyramid imagery used to represent the 2025 guidelines also emphasizes protein – specifically, meat and dairy – by putting these foods in a highly prominent spot in the top left corner of the image. Whole grains sit at the very bottom, and beverages, except for milk, are not represented.

Scientific advisers were not involved in designing the image.

Making small changes that can improve your health

An important point we encountered repeatedly in reviewing the research was that even small dietary changes could meaningfully lower people’s chronic disease risks.

For example, consuming just 10% fewer calories per day from highly processed foods could lower the risk of diabetes by 14%, according to one of the lead studies we relied on for the evidence review. Another study showed that eating one less serving of highly processed foods per day lowers the risk of heart disease by 4%.

You can achieve that simply by switching from a highly processed packaged bread to one with fewer ingredients or replacing one fast-food meal per week with a simple home-cooked meal. Or, switch your preferred brands of daily staples such as tomato sauce, yogurt, salad dressing, crackers and nut butter to ones that have fewer ingredients like added sugars, sweeteners, emulsifiers and preservatives.

Cutting down on sugary beverages – for example, soda, sweet teas, juices and energy drinks – had an equally dramatic effect. Simply drinking the equivalent of one can less per day lowers the risk of diabetes by 26% and the risk of heart disease by 14%.

And eating just one additional serving of whole grains per day – say, replacing packaged bread with whole grain bread – results in an 18% lower risk of diabetes and a 13% lower risk of death from all causes combined.

How to adopt ‘kitchen processing’

Another way to make these improvements is to take basic elements of food processing back from manufacturers and return them to your own kitchen – what I call “kitchen processing.” Humans have always processed food by chopping, cooking, fermenting, drying or freezing. The problem with highly processed foods isn’t just the industrial processing that transforms the chemical structure of natural ingredients, but also what chemicals are added to improve taste and shelf life.

Kitchen processing, though, can instead be optimized for health and for your household’s flavor preferences – and you can easily do it without cooking from scratch. Here are some simple examples:

  • Instead of flavored yogurts, buy plain yogurt and add your favorite fruit or some homemade simple fruit compote.

  • Instead of sugary or diet beverages, use a squeeze of citrus or even a splash of juice to flavor plain sparkling water.

  • Start with a plain whole grain breakfast cereal and add your own favorite source of fiber and/or fruit.

  • Instead of packaged “energy bars,” make your own preferred mixture of nuts, seeds and dried fruit.

  • Instead of bottled salad dressing, make a simple one at home with olive oil, vinegar or lemon juice, a dab of mustard and other flavorings of choice, such as garlic, herbs, or honey.

You can adapt this way of thinking to the foods you eat most often by making similar types of swaps. They may seem small, but they will build over time and have an outsized effect on your health.

The Conversation

Michael I Goran receives funding from the National Institutes of Health and the Dr Robert C and Veronica Atkins Foundation. He is a scientific advisor to Eat Real (non-profit promoting better school meals) and has previously served as a scientific advisor to Bobbi (infant formula) and Begin Health (infant probiotics).

ref. Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk – https://theconversation.com/confused-by-the-new-dietary-guidelines-focus-on-these-simple-evidence-based-shifts-to-lower-your-chronic-disease-risk-273701

Private credit rating agencies shape Africa’s access to debt. Better oversight is needed

Source: The Conversation – Africa – By Daniel Cash, Senior Fellow, United Nations University; Aston University

Africa’s development finance challenge has reached a critical point. Mounting debt pressure is squeezing fiscal space. And essential needs in infrastructure, health and education remain unmet. The continent’s governments urgently need affordable access to international capital markets. Yet many continue to face borrowing costs that make development finance unviable.

Sovereign credit ratings – the assessments that determine how financial markets price a country’s risk – play a central role in this dynamic. These judgements about a government’s ability and willingness to repay debt are made by just three main agencies – S&P Global, Moody’s and Fitch. The grades they assign, ranging from investment grade to speculative or default, directly influence the interest rates governments pay when they borrow.

Within this system, the stakes for African economies are extremely high. Borrowing costs rise sharply once countries fall below investment grade. And when debt service consumes large shares of budgets, less remains for schools, hospitals or climate adaptation. Many institutional investors also operate under mandates restricting them to investment-grade bonds.

Read more: Africa’s development banks are being undermined: the continent will pay the price

Countries rated below this threshold are excluded from large pools of capital. In practice, this means that credit ratings shape the cost of borrowing, as well as whether borrowing is possible at all.
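
A back-of-the-envelope example shows why that threshold matters so much. The figures below – the bond size, the two interest rates and the 10-year term – are hypothetical numbers chosen for illustration, not data from any particular country or rating action.

```python
# Illustrative arithmetic only: hypothetical figures, not any country's actual terms.
principal = 1_000_000_000      # a $1 billion, 10-year bond issue (assumed)
rate_investment_grade = 0.05   # assumed coupon at an investment-grade rating
rate_speculative = 0.09        # assumed coupon after falling below investment grade
tenor_years = 10

# The rating-driven spread translates directly into extra annual debt service.
extra_per_year = principal * (rate_speculative - rate_investment_grade)
extra_total = extra_per_year * tenor_years
print(f"Extra interest: ${extra_per_year:,.0f} per year, "
      f"${extra_total:,.0f} over the life of the bond")
```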

I am a researcher who has examined how sovereign credit ratings operate within the international financial system. And I’ve followed debates about their role in development finance. Much of the criticism directed at the agencies has focused on: their distance from the countries they assess; the suitability of some analytical approaches; and the challenges of applying standardised models across different economic contexts.

Less attention has been paid to the position ratings now occupy within the global financial architecture. Credit rating agencies are private companies that assess the likelihood that governments and firms will repay their debts. They sell these assessments to investors, banks and financial institutions, rather than working for governments or international organisations. But their assessments have become embedded in regulation, investment mandates and policy processes in ways that shape public outcomes.

This has given ratings a governance-like influence over access to finance, borrowing costs and fiscal space. In practice, ratings help determine how expensive it is for governments to borrow. This determines how much room they have to spend on public priorities like health, education, and infrastructure. Yet, credit rating agencies were not created to play this role. They emerged as private firms in the early 1900s to provide information to investors. The frameworks for coordinating and overseeing their wider public impact – which grew long after they were established – developed gradually and unevenly over time.

The question isn’t whether ratings should be replaced. Rather, it’s how this influence is understood and managed.

Beyond the bias versus capacity debate

Discussions about Africa’s sovereign ratings often focus on two explanations. One is that African economies are systemically underrated, with critics pointing to rapid downgrades and assessments that appear harsher than those applied to comparable countries elsewhere.

Factors often cited include the location of analytical teams in advanced economies, limited exposure to domestic policy processes in the global south, and incentive structures shaped by closer engagement with regulators and market actors in major financial centres.

The other explanation emphasises macroeconomic fundamentals, the basic economic conditions that shape a government’s ability to service debt, such as growth prospects, export earnings, institutional strength and fiscal buffers. When these are weaker or more volatile, borrowing costs tend to be more sensitive to global shocks.

Both perspectives have merit. Yet neither fully explains a persistent pattern: governments often undertake significant reforms, sometimes at high political and social costs, but changes in ratings can lag well behind those efforts. During that period, borrowing costs remain high and market access constrained. It is this gap between reform and recognition that points to a deeper structural issue in how credit ratings operate within the global financial system.

Design by default

Credit ratings began as a commercial information service for investors. Over several decades, from the 1970s to the 2000s, they became embedded in financial regulation. United States regulators first incorporated ratings into capital rules in 1975 as benchmarks for determining risk charges. The European Union followed in the late 1980s and 1990s. Key international bodies followed.

This process was incremental, not the result of deliberate public design. Ratings were adopted because they were available, standardised and widely recognised. It’s argued that private sector reliance on ratings typically followed their incorporation into public regulation. But in fact markets relied informally on credit rating assessments long before regulators formalised their use.

By the late 1990s, ratings had become deeply woven into how financial markets function. The result was that formal regulatory reliance increased until ratings became essential for distinguishing creditworthiness. This, some have argued, may have encouraged reliance on ratings at the expense of independent risk assessment.

Today, sovereign credit ratings influence which countries can access development finance, at what cost, and on what terms. They shape the fiscal options available to governments, and therefore the policy space for pursuing development goals.

Yet ratings agencies remain private firms, operating under commercial incentives. They developed outside the multilateral system and were not originally designed for a governance role. The power they wield is real. But the mechanisms for coordinating that power over public development objectives emerged later and separately. This created a governance function without dedicated coordination or oversight structures.

Designing the missing layer

African countries have initiated reform efforts to address their development finance challenge. For instance, some work with credit rating agencies to improve data quality and strengthen institutions. But these efforts don’t always translate into timely changes in assessments.

Part of the difficulty lies in shared information constraints. The link between fiscal policy actions and market perception remains complex. Governments need ways to credibly signal reform. Agencies need reliable mechanisms to verify change. And investors need confidence that assessments reflect current conditions rather than outdated assumptions.

Read more: Africa’s new credit rating agency could change the rules of the game. Here’s how

While greater transparency can help, public debt data remains fragmented across databases and institutions.

A critical missing element in past reform efforts has been coordination infrastructure: dialogue platforms and credibility mechanisms that allow complex information to flow reliably between governments, agencies, investors and multilateral institutions.

Evidence suggests that external validation can help reforms gain market recognition. In practice, this points to the need for more structured interaction between governments, rating agencies, development partners and regional credit rating agencies around data, policy commitments and reform trajectories.

One option is the Financing for Development process. This is a multistakeholder forum coordinated by the United Nations that negotiates how the global financial system should support sustainable development. Addressing how credit ratings function within the financial system is a natural extension of this process.

Building a coordination layer need not mean replacing ratings or shifting them into the public sector. It means creating the transparency, dialogue and accountability structures that help any system function more effectively.

Recognising this reality helps explain how development finance actually works. As debt pressures rise and climate adaptation costs grow, putting this governance layer in place is now critical to safeguarding development outcomes in Africa.

The Conversation

Daniel Cash is affiliated with UN University Centre for Policy Research.

ref. Private credit rating agencies shape Africa’s access to debt. Better oversight is needed – https://theconversation.com/private-credit-rating-agencies-shape-africas-access-to-debt-better-oversight-is-needed-274858

Data centers told to pitch in as storms and cold weather boost power demand

Source: The Conversation – USA (2) – By Nikki Luke, Assistant Professor of Human Geography, University of Tennessee

During winter storms, physical damage to wires and high demand for heating put pressure on the electrical grid. Brett Carlsen/Getty Images

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow and freezing temperatures, it left more than a million people without power, mostly in the Southeast.

Scrambling to meet higher than average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels.

Energy Secretary Chris Wright agreed and took another step, too. He authorized PJM and ERCOT – the company that manages the Texas power grid – as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators.

The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these facilities power themselves and do not send power back to the grid. But Wright explained that their “industrial diesel generators” could “generate 35 gigawatts of power, or enough electricity to power many millions of homes.”

We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through and recover from winter storms.

A close-up of a rack of electronics.
The electronics in data centers consume large amounts of electricity.
RJ Sangosti/MediaNews Group/The Denver Post via Getty Images

Data centers use enormous quantities of energy

Before Wright’s order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies.

This is a pressing question, because data centers’ power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM’s.

And data centers are expected to need only more power. Estimates vary widely, but the Lawrence Berkeley National Lab anticipates that the share of electricity production in the U.S. used by data centers could spike from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects a peak load growth of 32 gigawatts by 2030 – enough power to supply 30 million new homes, but nearly all going to new data centers. PJM’s job is to coordinate that energy – and figure out how much the public, or others, should pay to supply it.
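
Comparisons like “32 gigawatts, enough to supply 30 million homes” rest on a simple conversion: divide the added capacity by an assumed average household draw. The sketch below uses an assumed figure of roughly 10,000 kilowatt-hours per household per year; published homes-per-gigawatt comparisons vary because different sources assume different household consumption.

```python
# Back-of-the-envelope conversion from gigawatts to "homes served."
# The household consumption figure is an assumption for illustration,
# not a number taken from PJM or the article's sources.
HOURS_PER_YEAR = 8760
household_kwh_per_year = 10_000                              # assumed average annual use
household_avg_kw = household_kwh_per_year / HOURS_PER_YEAR   # ~1.14 kW average draw

def homes_served(gigawatts: float) -> float:
    """Rough count of average homes a given capacity could supply continuously."""
    return gigawatts * 1_000_000 / household_avg_kw          # 1 GW = 1,000,000 kW

for gw in (32, 35):
    print(f"{gw} GW is roughly {homes_served(gw) / 1e6:.0f} million homes")
```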

The race to build new data centers and find the electricity to power them has sparked enormous public backlash about how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution.

Local ordinances, regulations created by state utility commissions and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time.

But since the 1970s, utilities have encouraged “demand response” programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation.

Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries and electric vehicles, these distributed energy resources can be dispatched as “virtual power plants.”
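
The dispatch logic behind a virtual power plant can be sketched in a few lines. The resource names, megawatt amounts and shortfall below are invented for illustration and are not drawn from any utility’s actual program; the point is simply that an aggregator works down its list of enrolled batteries, thermostats and flexible loads until the requested reduction is covered.

```python
# Illustrative sketch only: a toy dispatch loop for an aggregated "virtual power
# plant." Resource names and megawatt figures are invented for demonstration.
resources = [
    {"name": "home batteries (aggregated)", "available_mw": 40},
    {"name": "smart thermostat setbacks", "available_mw": 25},
    {"name": "commercial demand response", "available_mw": 60},
    {"name": "EV charging deferral", "available_mw": 30},
]

shortfall_mw = 100   # assumed peak-hour gap the grid operator asks the aggregator to cover

dispatched = []
remaining = shortfall_mw
for resource in resources:
    if remaining <= 0:
        break
    use = min(resource["available_mw"], remaining)  # take only what is still needed
    dispatched.append((resource["name"], use))
    remaining -= use

for name, mw in dispatched:
    print(f"Dispatch {mw} MW from {name}")
print(f"Uncovered shortfall: {max(remaining, 0)} MW")
```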

A different approach

The terms of data center agreements with local governments and utilities often aren’t available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use.

In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts and airline reservation systems.

Yet, data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response. In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide “data center demand response by targeting machine learning workloads,” shifting “non-urgent compute tasks” away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers’ power use off the grid during power shortages.

An aerial view of metal equipment and wires with a city skyline in the background.
Large amounts of power move through parts of the U.S. electricity grid.
Joe Raedle/Getty Images

Flexibility for the future

One study found that if data centers committed to using power flexibly, an additional 100 gigawatts of capacity – roughly enough to power 70 million households – could be added to the grid without building new generation and transmission.
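
As a quick sanity check on the gigawatts-to-homes conversions quoted in this article, the short Python sketch below works out the average per-household load each figure implies. The calculation is our own illustration, not part of the original reporting; the comparison point of roughly 1 kilowatt of average demand per U.S. home is a general figure, not one the article cites.

    # Back-of-the-envelope check: what average household load does each
    # gigawatts-to-homes figure quoted in this article imply?
    figures = [
        ("PJM peak load growth by 2030", 32, 30_000_000),   # 32 GW ~ 30 million new homes
        ("Flexible data center study", 100, 70_000_000),    # 100 GW ~ 70 million households
    ]

    for label, gigawatts, households in figures:
        kilowatts = gigawatts * 1_000_000                   # 1 GW = 1,000,000 kW
        print(f"{label}: about {kilowatts / households:.2f} kW per household")

    # Prints roughly 1.07 and 1.43 kW - in line with the average (not peak)
    # electricity demand of a typical U.S. home, which is a little over 1 kW.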

In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their power needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility, as grid operators can tap into batteries, adjust thermostats or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.

Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms.

Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn’t enough electricity for their homes but because their power lines were down.

The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that would not actually exist.

Onsite diesel generators give large users such as data centers an emergency way to reduce strain on the grid. But they are not a long-term answer to winter storms. Instead, if data centers, utilities, regulators and grid operators are also willing to consider offsite distributed energy to meet electricity demand, their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold.

The Conversation

Nikki Luke is a fellow at the Climate and Community Institute. She receives funding from the Alfred P. Sloan Foundation. She previously worked at the U.S. Department of Energy.

Conor Harrison receives funding from the Alfred P. Sloan Foundation and has previously received funding from the U.S. National Science Foundation.

ref. Data centers told to pitch in as storms and cold weather boost power demand – https://theconversation.com/data-centers-told-to-pitch-in-as-storms-and-cold-weather-boost-power-demand-274604

Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games

Source: The Conversation – USA (2) – By Steven R. Fassnacht, Professor of Snow Hydrology, Colorado State University

Italy’s Predazzo Ski Jumping Stadium, which is hosting events for the 2026 Winter Olympics, needed snowmaking machines for the Italian National Championship Open on Dec. 23, 2025. Mattia Ozbot/Getty Images

Watching the Winter Olympics is an adrenaline rush as athletes fly down snow-covered ski slopes, luge tracks and over the ice at breakneck speeds and with grace.

When the first Olympic Winter Games were held in Chamonix, France, in 1924, all 16 events took place outdoors. The athletes relied on natural snow for ski runs and freezing temperatures for ice rinks.

Two skaters on ice outside with mountains in the background. They are posing as if gliding together.
Sonja Henie, left, and Gilles Grafstrom at the Olympic Winter Games in Chamonix, France, in 1924.
The Associated Press

Nearly a century later, in 2022, the world watched skiers race down runs of 100% human-made snow near Beijing. Luge tracks and ski jumps have their own refrigeration, and four of the original events are now held indoors: figure skaters, speed skaters, curlers and hockey teams all compete in climate-controlled buildings.

Innovation made the 2022 Winter Games possible in Beijing. Ahead of the 2026 Winter Olympics in northern Italy, where snowfall was below average for the start of the season, officials had large lakes built near major venues to provide enough water for snowmaking. But snowmaking can go only so far in a warming climate.

As global temperatures rise, what will the Winter Games look like in another century? Will they be possible, even with innovations?

Former host cities that would be too warm

The average daytime temperature of Winter Games host cities in February has increased steadily since those first events in Chamonix, rising from 33 degrees Fahrenheit (0.4 Celsius) in the 1920s-1950s to 46 F (7.8 C) in the early 21st century.

In a recent study, scientists looked at the venues of 19 past Winter Olympics to see how each might hold up under future climate change.

A cross-country skier falls in front of another during a race. The second skier has his mouth open as if shouting.
Human-made snow was used to augment trails at the Sochi Games in Russia in 2014. Some athletes complained that it made the trails icier and more dangerous.
AP Photo/Dmitry Lovetsky

They found that by midcentury, four former host cities – Chamonix; Sochi, Russia; Grenoble, France; and Garmisch-Partenkirchen, Germany – would no longer have a reliable climate for hosting the Games, even under the United Nations’ best-case scenario for climate change, which assumes the world quickly cuts its greenhouse gas emissions. If the world continues burning fossil fuels at high rates, Squaw Valley, California, and Vancouver, British Columbia, would join the list of former hosts without a reliably cold climate for the Winter Games.

By the 2080s, the scientists found, the climates in 12 of 22 former venues would be too unreliable to host the Winter Olympics’ outdoor events; among them were Turin, Italy; Nagano, Japan; and Innsbruck, Austria.

In 2026, five weeks separate the Winter Olympics from the Paralympics, which last through mid-March. Host countries are responsible for both events, and some venues may increasingly find it difficult to keep enough snow on the ground, even with snowmaking capabilities, as snow seasons shorten.

Ideal snowmaking conditions today require a dewpoint temperature – a measure that combines cold and humidity – of around 28 F (-2 C) or less. When the air holds more moisture, snow and ice start to degrade at colder temperatures than they otherwise would, which affects snow on ski slopes and ice on bobsled, skeleton and luge tracks.
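
To make that cutoff concrete, here is a minimal Python sketch that converts a dewpoint reading to Celsius and checks it against the roughly 28 F (-2 C) threshold the article cites. The function names and example readings are ours, chosen for illustration only.

    # Illustrative check of the snowmaking cutoff described above.
    SNOWMAKING_DEWPOINT_MAX_F = 28.0  # approximate threshold cited in the article

    def fahrenheit_to_celsius(temp_f: float) -> float:
        return (temp_f - 32.0) * 5.0 / 9.0

    def snowmaking_feasible(dewpoint_f: float) -> bool:
        """True if the dewpoint is at or below the ~28 F cutoff."""
        return dewpoint_f <= SNOWMAKING_DEWPOINT_MAX_F

    for dewpoint_f in (20.0, 28.0, 33.0):  # example readings, not measured data
        status = "snowmaking possible" if snowmaking_feasible(dewpoint_f) else "too warm or humid"
        print(f"dewpoint {dewpoint_f:.0f} F ({fahrenheit_to_celsius(dewpoint_f):.1f} C): {status}")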

Stark white lines etched on a swath of brown mountains delineate ski routes and bobsled course.
A satellite view clearly shows the absence of natural snow during the 2022 Winter Olympics. Beijing’s bid to host the Winter Games had explained how extensively it would rely on snowmaking.
Joshua Stevens/NASA Earth Observatory
A gondola passes by with dark ground below and white ski slopes behind it.
The finish area of the Alpine ski venue at the 2022 Winter Olympics was white because of machine-made snow.
AP Photo/Robert F. Bukaty

As Colorado snow and sustainability scientists and also avid skiers, we’ve been watching the developments and studying the climate impact on the mountains and winter sports we love.

Conditions vary by location and year to year

The Earth’s climate will be warmer overall in the coming decades. Warmer air can mean more winter rain, particularly at lower elevations. Around the globe, snow has been covering less area. Low snowfall and warm temperatures made the start of the 2025-26 winter season particularly poor for Colorado’s ski resorts.

However, local changes vary. For example, in northern Colorado, the amount of snow has decreased since the 1970s, but the decline has mostly been at higher elevations.

Several machines pump out sprays of snow across a slope.
Snow cannons spray machine-made snow on a ski slope ahead of the 2026 Winter Olympics.
Mattia Ozbot/Getty Images

A future climate may also be more humid, which affects snowmaking and could affect bobsled, luge and skeleton tracks.

Of the 16 Winter Games sports today, half are affected by temperature and snow: Alpine skiing, biathlon, cross-country skiing, freestyle skiing, Nordic combined, ski jumping, ski mountaineering and snowboarding. And three are affected by temperature and humidity: bobsled, luge and skeleton.

Technology also changes

Developments in technology have helped the Winter Games adapt to some changes over the past century.

Hockey moved indoors, followed by skating. Luge and bobsled tracks were refrigerated in the 1960s. The Lake Placid Winter Games in 1980 in New York used snowmaking to augment natural snow on the ski slopes.

Today, indoor skiing facilities make skiing possible year-round. Ski Dubai, open since 2005, has five ski runs on a hill the height of a 25-story building inside a resort attached to a shopping mall.

Resorts are also using snowfarming to collect and store snow. The method is not new, but due to decreased snowfall and increased problems with snowmaking, more ski resorts are keeping leftover snow to be prepared for the next winter.

Two workers pack snow on an indoor ski slope with a sloped ceiling overhead.
Dubai has an indoor ski slope with multiple runs and a chairlift, all part of a shopping mall complex.
AP Photo/Jon Gambrell

But making snow and keeping it cold requires energy and water – and both become issues in a warming world. Water is becoming scarcer in some areas. And energy, if it means more fossil fuel use, further contributes to climate change.

The International Olympic Committee recognizes that the future climate will have a big impact on the Olympics, both winter and summer. It also recognizes the importance of ensuring that the adaptations are sustainable.

The Winter Olympics could become limited to more northerly locations, like Calgary, Alberta, or be pushed to higher elevations.

Summer Games are feeling climate pressure, too

The Summer Games also face challenges. Hot temperatures and high humidity can make competing in the summer difficult, but these sports have more flexibility than winter sports.

For example, changing the timing of typical summer events to another season can help alleviate excessive temperatures. The 2022 World Cup, normally a summer event, was held in November so Qatar could host it.

What makes adaptation more difficult for the Winter Games is the necessity of snow or ice for all of the events.

A snowboarder with 'USA' on her gloves puts her arms out for balance on a run.
Climate change threatens the ideal environments for snowboarders, like U.S. Olympian Hailey Langland, competing here during the women’s snowboard big air final in Beijing in 2022.
AP Photo/Jae C. Hong

Future depends on responses to climate change

In uncertain times, the Olympics offer a way for the world to come together.

People are thrilled by the athletic feats, like Jean-Claude Killy winning all three Alpine skiing events in 1968, and stories of perseverance, like the 1988 Jamaican bobsled team competing beyond all expectations.

The Winter Games’ outdoor sports may look very different in the future. How different will depend heavily on how countries respond to climate change.

This updates an article originally published Feb. 19, 2022, with the 2026 Winter Games.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games – https://theconversation.com/climate-change-threatens-the-winter-olympics-future-and-even-snowmaking-has-limits-for-saving-the-games-274800

Clergy protests against ICE turned to a classic – and powerful – American playlist

Source: The Conversation – USA (3) – By David W. Stowe, Professor of Religious Studies, Michigan State University

Clergy and community leaders demonstrate outside Minneapolis-St. Paul International Airport on Jan. 23, 2026, amid a surge by federal immigration agents. Brandon Bell/Getty Images

On Jan. 28, 2026, Bruce Springsteen released “Streets of Minneapolis,” a hard-hitting protest against the immigration enforcement surge in the city, including the killings of Renee Good and Alex Pretti. The song is all over social media, and the official video has already been streamed more than 5 million times. It’s hard to remember a time when a major artist has released a song in the midst of a specific political crisis.

Yet some of the most powerful music coming out of Minneapolis is of a much older vintage. Hundreds of clergy from around the country converged on the city in late January to take part in faith-based protests. Many were arrested while blocking a road near the airport. And they have been singing easily recognizable religious songs used during the Civil Rights Movement of the 1950s and ’60s, like “Amazing Grace,” “We Shall Overcome” and “This Little Light of Mine.”

I have been studying the politics of music and religion for more than 25 years, and I wrote about songs I called “secular spirituals” in my 2004 book, “How Sweet the Sound: Music in the Spiritual Lives of Americans.” Sometimes called “freedom songs,” they were galvanizing more than 60 years ago, and are still in use today.

But why these older songs, and why do they usually come out of the church? There have been many protest movements since the mid-20th century, and they have all produced new music. The freedom songs, though, have a unique staying power in American culture – partly because of their historical associations and partly because of the songs themselves.

‘We Shall Overcome’ was one of several songs at the 1963 March on Washington.

Stronger together

Some of protest music’s power has to do with singing itself. Making music in a group creates a tangible sense of community and collective purpose. Singing is a physical activity; it comes out of our core and helps foster solidarity with fellow singers.

Young activists working in the Deep South during the most violent years of the Civil Rights Movement spoke of the courage that came from singing freedom songs like “We Shall Overcome” in moments of physical danger. In addition to helping quell fear, the songs were unnerving to authorities trying to maintain segregation. “If you have to sing, do you have to sing so loud?” one activist recalled an armed deputy saying.

And when locked up for days in a foul jail, there wasn’t much else to do but sing. When a Birmingham, Alabama, police commissioner released young demonstrators he’d arrested, they recalled him complaining that their singing “made him sick.”

Test of time

Sometimes I ask students if they can think of more recent protest songs that occupy the same place as the freedom songs of the 1960s. There are some well-known candidates: Bob Marley’s “Get Up, Stand Up,” Green Day’s “American Idiot” and Public Enemy’s “Fight the Power,” to name a few. The Black Lives Matter movement alone helped produce several notable songs, including Beyoncé’s “Freedom,” Kendrick Lamar’s “Alright” and Childish Gambino’s “This Is America.”

But the older religious songs have advantages for on-the-ground protests. They have been around for a long time, meaning that more people have had more chances to learn them. Protesters typically don’t struggle to learn or remember the tune. As iconic church songs that have crossed over into secular spirituals, they were written to be memorable and singable, crowd-tested for at least a couple of generations. They are easily adaptable, so protesters can craft new verses for their cause – as when civil rights activists added “We are not afraid” to the lyrics of “We Shall Overcome.”

A black-and-white photo shows a row of seated women inside a van or small space clapping as they sing.
Protesters sing at a civil rights demonstration in New York in 1963.
Bettmann Archive/Getty Images

And freedom songs link the current protesters to one of the best-known – and by some measures, most successful – protest movements of the past century. They create bonds of solidarity not just among those singing them in Minneapolis, but with protesters and activists of generations past.

These religious songs are associated with nonviolence, an important value in a citizen movement protesting violence committed by federal law enforcement. And for many activists, including the clergy who poured into Minneapolis, religious values are central to their willingness to stand up for citizens targeted by ICE.

Deep roots

The best-known secular spirituals actually predate the Civil Rights Movement. “We Shall Overcome” first appeared in written form in 1900 as “I’ll Overcome Some Day,” by the Methodist minister Charles Tindley, though the words and tunes are different. It was sung by striking Black tobacco workers in South Carolina in 1945 and made its way to the Highlander Folk School in Tennessee, an integrated training center for labor organizers and social justice activists.

It then came to the attention of iconic folk singer Pete Seeger, who changed some words and gave it wide exposure. “We Shall Overcome” has been sung everywhere from the 1963 March on Washington and anti-apartheid rallies in South Africa to South Korea, Lebanon and Northern Ireland.

“Amazing Grace” has an even longer history, dating back to a hymn written by John Newton, an 18th-century ship captain in the slave trade who later became an Anglican clergyman and penned an essay against slavery. Pioneering American gospel singer Mahalia Jackson recorded it in 1947 and sang it regularly during the 1960s.

Mahalia Jackson sings the Gospel hymn ‘How I Got Over’ at the March on Washington.

Firmly rooted in Protestant Christian theology, the song crossed over into a more secular audience through a 1970 cover version by folk singer Judy Collins, which reached No. 15 on the Billboard charts. During Mississippi Freedom Summer of 1964, an initiative to register Black voters, Collins heard the legendary organizer Fannie Lou Hamer singing “Amazing Grace,” a song she remembered from her Methodist childhood.

Opera star Jessye Norman sang it at Nelson Mandela’s 70th birthday tribute in London, and bagpipers played it at a 2002 interfaith service near Ground Zero to commemorate victims of 9/11.

‘This little light’

Another gospel song used in protests against ICE – “This little light of mine, I’m gonna let it shine” – has similarly murky historical origins and also passed through the Highlander Folk School into the Civil Rights Movement.

It expresses the impulse to be seen and heard, standing up for human rights and contributing to a movement much larger than each individual. But it could also mean letting a light shine on the truth – for example, demonstrators’ phones documenting what happened in the two killings in Minneapolis, contradicting some officials’ claims.

Like the Civil Rights Movement, the protests in Minneapolis involve protecting people of color from violence – as well as, more broadly, protecting immigrants’ and refugees’ legal right to due process. A big difference is that in the 1950s and 1960s, the federal government sometimes intervened to protect people subjected to violence by states and localities. Now, many Minnesotans are trying to protect people in their communities from agents of the federal government.

The Conversation

David W. Stowe does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Clergy protests against ICE turned to a classic – and powerful – American playlist – https://theconversation.com/clergy-protests-against-ice-turned-to-a-classic-and-powerful-american-playlist-274585

When everyday medicines make us sick: misuse and adverse effects

Source: The Conversation – in French – By Clément Delage, Maître de Conférences (Associate Professor) in Pharmacology (Faculté de Pharmacie de Paris) – Inserm Unit UMR-S 1144 “Optimisation Thérapeutique en Neuropsychopharmacologie” – Hospital Pharmacist (Hôpital Lariboisière, AP-HP), Université Paris Cité

Between the normalization of self-medication and widespread unawareness of the risks, the most commonly used medicines – such as paracetamol (acetaminophen) – can cause adverse effects that are sometimes severe. Understanding why and how a remedy can become a poison lays the groundwork for the proper use of medicines.


Every year, the misuse of medicines causes roughly 2,760 deaths and 210,000 hospitalizations in France, according to a study by the French network of pharmacovigilance centers. That amounts to 8.5% of hospitalizations and 1.5 times more deaths than road accidents.

While the exact share of self-medication in these figures is hard to establish, the data point to an often-overlooked reality: adverse effects do not only involve rare or complex treatments, but also everyday medicines – paracetamol, ibuprofen, antihistamines (drugs that block the action of histamine, which drives allergy symptoms such as swelling, redness, itching and sneezing – ed.), sleeping pills, cold syrups and so on.

When the remedy becomes a poison

“Everything is poison, nothing is poison: only the dose makes the poison.”
– Paracelsus (1493–1541)

A copy, held at the Musée du Louvre, of the lost portrait of Paracelsus by Quentin Metsys
Portrait of Paracelsus, after Quentin Metsys (1466-1530).
CC BY-NC

This founding adage of pharmacy, taught to future pharmacists from their first year, remains as relevant as ever. As early as the 16th century, Paracelsus understood that a substance could be beneficial or toxic depending on the dose, the duration and the context of exposure. Indeed, the word pharmacy derives from the Greek phármakon, which means both “remedy” and “poison.”

Paracetamol, a widely used analgesic and antipyretic (a drug that relieves pain and fever – ed.), is perceived as harmless. Yet it is responsible for acute drug-induced hepatitis, particularly after accidental overdoses or unintentional combinations of several products that contain it. In France, misuse of paracetamol is the leading cause of drug-induced liver transplantation, warns the national agency for the safety of medicines and health products (Agence nationale de sécurité du médicament et des produits de santé, ANSM).

Ibuprofen, also widely used to relieve pain and fever, can in turn cause gastric ulcers, gastrointestinal bleeding or kidney failure if taken at high doses for prolonged periods or together with other treatments that act on the kidneys. Combined with angiotensin-converting enzyme inhibitors (among the first drugs prescribed for high blood pressure), for example, it can trigger functional kidney failure.

Aspirin, a drug still found in many medicine cabinets, thins the blood and can promote bleeding and hemorrhages, particularly in the digestive tract. In a very large overdose, it can even disturb the blood’s acid-base balance and lead to coma or death without prompt medical care.

These examples illustrate a fundamental principle: there is no such thing as a risk-free medicine. Under certain conditions, all of them can cause harmful effects. The question, then, is not whether a medicine is dangerous, but under what conditions it becomes so.

Why do all medicines have adverse effects?

Understanding where adverse effects come from requires a detour through pharmacology, the science of how drugs move through and act on the body.

Every drug works by binding to a specific molecular target – most often a receptor, an enzyme or an ion channel – in order to modify a biological function. But these active substances, which are foreign to the body, are never perfectly selective: they can interact with other targets, producing adverse effects – formerly called side effects.

Moreover, most effects, whether beneficial or harmful, are dose-dependent. The relationship between a drug’s concentration in the body and the intensity of its effect is described by an S-shaped (sigmoid) dose-response curve.

Each effect (therapeutic or adverse) has its own curve, and the optimal therapeutic zone (the therapeutic index) is the one in which the beneficial effect is greatest and toxicity lowest. This search for a balance between efficacy and safety underpins the benefit/risk balance, the central notion in every therapeutic decision.
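
One standard way to write this sigmoid dose-effect relationship is the Hill (Emax) model. The equation below is a generic textbook formulation shown for illustration; it is not taken from the article.

    E(C) = \frac{E_{\max}\, C^{\gamma}}{EC_{50}^{\gamma} + C^{\gamma}}

Here E(C) is the intensity of the effect at concentration C, E_max is the maximum achievable effect, EC50 is the concentration producing half of that maximum, and the Hill coefficient γ sets how steep the curve is. Each effect, therapeutic or adverse, has its own parameters, and the therapeutic index corresponds to the gap between the curve for the desired effect and the curve for toxicity.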

Dose-response curve of a drug

Thus, even for familiar molecules, a deviation from the recommended dosage can tip a treatment over into toxicity.

Contraindications and interactions: when other factors come into play

Adverse effects do not depend on dose alone. Individual susceptibility, drug-drug interactions and physiological or pathological factors can also make adverse effects more likely.

In people with hepatic insufficiency (impaired liver function – ed.), for example, the breakdown of paracetamol normally carried out by the liver is slowed, which promotes its accumulation and increases the drug’s toxicity to the liver. This is known as hepatotoxicity.

Alcohol acts on the same brain receptors as benzodiazepines (a drug family that includes anxiolytics such as bromazepam/Lexomil and alprazolam/Xanax) and so potentiates their sedative effects and their respiratory depression (a drop in the breathing rate, which can become too low to keep the body supplied with oxygen). The dose-response curves of the two compounds then add together, triggering adverse effects sooner and more powerfully.

Likewise, some drugs interact with one another by modifying each other’s metabolism, absorption or elimination. In that case, the dose-response curve of the first compound is shifted to the right or to the left by the second.
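
A minimal Python sketch can make that second mechanism concrete. It rests on two standard pieces of pharmacology: the Hill model above for the concentration-effect curve, and the fact that at steady state a drug’s average concentration equals its dosing rate divided by its clearance, so a second drug that slows elimination raises exposure and shifts the apparent dose-response curve to the left. All parameter values below are illustrative assumptions, not data from the article.

    # Sketch: a pharmacokinetic interaction that halves a drug's clearance
    # increases exposure at the same dose, shifting the apparent dose-response
    # curve to the left. All numbers below are illustrative.

    def hill_effect(concentration, emax=100.0, ec50=10.0, gamma=2.0):
        """Sigmoid (Hill/Emax) concentration-effect model, as % of maximum effect."""
        return emax * concentration**gamma / (ec50**gamma + concentration**gamma)

    def steady_state_concentration(dose_rate, clearance):
        """Average steady-state concentration = dosing rate / clearance."""
        return dose_rate / clearance

    dose_rate = 50.0  # e.g. mg per hour (illustrative)
    scenarios = {
        "drug alone": 5.0,                # clearance in liters per hour (illustrative)
        "with metabolic inhibitor": 2.5,  # interacting drug halves clearance
    }

    for label, clearance in scenarios.items():
        c = steady_state_concentration(dose_rate, clearance)
        print(f"{label}: concentration {c:.1f}, effect {hill_effect(c):.0f}% of maximum")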

These mechanisms explain why each medicine’s marketing authorization spells out contraindications, precautions for use and strict dosage limits. Before taking any medicine, users should read the package leaflet, which summarizes this essential information.

How are medication risks regulated?

Before it reaches the market, every medicine undergoes a rigorous evaluation of its benefit/risk ratio. In France, this task falls to the ANSM.

Marketing authorization (known in France as the AMM) is granted after analysis of preclinical and clinical data, which determine in particular:

  • the therapeutic indications;

  • the recommended doses and durations of treatment;

  • the known contraindications and interactions.

But the evaluation does not stop once marketing authorization is granted. As soon as a medicine is used in real life, it enters a phase of pharmacovigilance: continuous monitoring of adverse effects reported by health professionals or by patients themselves.

Since 2020, France’s portal for reporting adverse health events has allowed anyone to easily declare a suspected effect, contributing to the early detection of risk signals.

The riskiest medicines are available only on medical prescription, because the benefit/risk balance must be assessed patient by patient, by a physician. The others, available without a prescription, are still dispensed exclusively in pharmacies, where the pharmacist plays a decisive role in assessment and advice. This human mediation is an essential link in the medication safety system.

Preventing drug toxicity: a collective challenge

Preventing medication-related accidents relies on several levels of vigilance.

At the individual level, people need to become familiar with the proper use of medicines.

A few simple habits considerably reduce the risk of overdose or drug interactions:

  • Read the package leaflet carefully before taking a medicine.
  • Do not keep prescription medicines once the treatment is finished, and do not reuse them without medical advice.
  • Do not share your medicines with others.
  • Do not treat information found on the internet as medical advice.
  • Do not combine several medicines containing the same active molecule.

But the responsibility cannot rest on the patient alone: physicians obviously play a key role in education and guidance, but so do pharmacists. As the most accessible frontline health professionals, pharmacists are best placed to detect and prevent misuse.

Promoting the proper use of medicines is also the job of health authorities: prevention campaigns, simpler package leaflets and transparency about safety signals help strengthen public trust without denying the risks. Strengthening pharmacovigilance is another major public health lever, and it has been reinforced considerably since the Mediator scandal in 2009.

Finally, this vigilance should also extend to herbal medicine – plant-based treatments in the form of capsules, essential oils or herbal teas – as well as dietary supplements and even certain foods, whose interactions with conventional treatments are often underestimated.

Like medicines, herbal remedies can cause adverse effects at high doses and can interact with drugs. For example, St. John’s wort (Hypericum perforatum), a plant found in herbal teas reputed to ease anxiety, speeds up the metabolism and elimination of certain medicines and can render them ineffective.

Rebuilding a balance between trust and caution

A medicine is neither an ordinary consumer product nor a poison to be shunned. It is a powerful therapeutic tool that demands discernment and respect. Its safety rests on a relationship of informed trust among patients, caregivers and institutions. Faced with the rise of self-medication and the massive circulation of sometimes contradictory information, the challenge is not to demonize medicines but to restore a rational understanding of them.

Used well, a medicine heals; used badly, it harms. That is the whole point of Paracelsus’ message, five centuries on.

The Conversation

Clément Delage does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. When everyday medicines make us sick: misuse and adverse effects – https://theconversation.com/quand-les-medicaments-du-quotidien-nous-rendent-malades-mesusages-et-effets-indesirables-273656