NASA’s Artemis II plans to send a crew around the Moon to test equipment and lay the groundwork for a future landing

Source: The Conversation – USA – By Margaret Landis, Assistant Professor of Earth and Space Exploration, Arizona State University

A banner signed by NASA employees and contractors outside Launch Complex 39B, where NASA’s Artemis II rocket is visible in the background. NASA/Joel Kowsky, CC BY-NC-ND

Almost as tall as a football field is long, NASA’s Space Launch System rocket and capsule stack traveled slowly – just under 1 mile per hour – out to the Artemis II launchpad, its temporary home at the Kennedy Space Center in Florida, on Jan. 17, 2026. That slow crawl stands in stark contrast to the peak velocity it will reach on launch day, over 22,000 miles per hour, when it will send a crew of four on a journey around the Moon.

While its first launch opportunity is on Feb. 8, a rocket launch is always at the mercy of a variety of factors outside of the launch team’s control – from the literal position of the planets down to flocks of birds or rogue boats near the launchpad. Artemis II may not be able to launch on Feb. 8, but it has backup launch windows available in March and April. In fact, Feb. 8 already represents a small schedule change from the initially estimated Feb. 6 launch opportunity opening.

Artemis II’s goal is to send people around the Moon and verify that all engineering systems work in space before Artemis III, which will land astronauts near the lunar south pole.

If Artemis II is successful, it will be the first time any person has been back to the Moon since 1972, when Apollo 17 left to return to Earth. The Artemis II astronauts will fly by the far side of the Moon before returning home. While they won’t land on the surface, they will provide the first human eyes on the lunar far side since the 20th century.

To put this in perspective, no one under the age of about 54 has yet lived in a world where humans were that far away from Earth. The four astronauts will travel around the Moon on a roughly 10-day voyage and return for a splashdown in the Pacific Ocean. As a planetary geologist, I’m excited at the prospect of people eventually returning to the Moon to do fieldwork on the first stepping stone away from Earth’s orbit.

A walkthrough of the Artemis II mission, which plans to take a crew around the Moon.

Why won’t Artemis II land on the Moon?

If you wanted to summit Mount Everest, you would first test out your equipment and check to make sure everything works before heading up the mountain. A lunar landing is similar. Testing all the components of the launch system and crew vehicle is a critical part of returning people safely to the surface of the Moon and then flying them back to Earth.

And compared to the lunar surface, Everest is a tropical paradise.

NASA has accomplished lunar landings before, but the 54-year hiatus means that most of the engineers who worked on Apollo have retired. Only four of the 12 astronauts who have walked on the Moon are still alive.

Technology now is also vastly different. The Apollo lunar landing module’s computer only had about 4 kilobytes of RAM. A single typical iPhone photo is a few megabytes in size, over 1,000 times larger than the Apollo lunar landing module’s memory.
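The scale of that gap is easy to check with a little arithmetic. In this sketch, the ~4 KB figure comes from the text above, while the 5 MB photo size is an assumption on the larger end of “a few megabytes”:

```python
# Rough comparison of Apollo-era memory with a single modern photo.
apollo_ram_bytes = 4 * 1024        # ~4 kilobytes, per the figure quoted above
photo_bytes = 5 * 1024 * 1024      # a typical smartphone photo, assumed ~5 MB

ratio = photo_bytes / apollo_ram_bytes
print(f"One photo holds roughly {ratio:,.0f}x the Apollo computer's RAM")
```

At 5 MB, a single photo works out to well over 1,000 times the lander’s entire memory.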

The two components of the Artemis II project are the rocket (the Space Launch System) and the crew capsule. Both have had a long road to the launchpad.

The Orion capsule was developed as part of the Constellation program, announced in 2005 and concluded in 2010. This program was a President George W. Bush-era attempt to move people beyond the space shuttle and International Space Station.

The Space Launch System started development in the early 2010s as a replacement vehicle for the Ares rocket, which was meant to be used with the Orion capsule in the Constellation program. The SLS rocket was used in 2022 for the Artemis I launch, which flew around the Moon without a crew. Boeing is the main contractor tasked with building the SLS, though over 1,000 separate vendors have been involved in the rocket’s fabrication.

The Apollo program, too, first sent a crewed capsule around the Moon without landing. Apollo 8, the first crewed spacecraft to leave Earth orbit, launched and returned home in December 1968. William Anders, one of the astronauts on board tasked with testing the components of the Apollo lunar spacecraft, captured the iconic “Earthrise” image during the mission.

The white and blue cloudy Earth is visible above a gray edge of the Moon's surface
The Apollo 8 ‘Earthrise’ image, showing the Earth over the horizon from the Moon. This image, acquired by William Anders, became famous for its portrayal of the Earth in its planetary context.
NASA

“Earthrise” was the first time people were able to look back at the Earth as part of a spacefaring species. The Earthrise image has been reproduced in a variety of contexts, including on a U.S. postage stamp. It fundamentally reshaped how people thought of their environment. Earth is still by far the most habitable location in the solar system for life as we know it.

Unique Artemis II science

The Artemis II astronauts will be the first to see the lunar far side since the final Apollo astronauts left over 50 years ago. From the window of the Orion capsule, the Moon will appear at its largest to be about the size of a beach ball held at arm’s length.

Over the past decades, scientists have used orbiting satellites to image much of the lunar surface. Much of this imaging, especially at high spatial resolution, has come from the Lunar Reconnaissance Orbiter Camera, or LROC.

LROC is made up of a few different cameras. The LROC’s wide angle and narrow angle cameras have both captured images of more than 90% of the lunar surface. The LROC Wide Angle Camera has a resolution on the lunar surface of about 100 meters per pixel – with each pixel in the image being about the length of an American football field.

The LROC narrow angle camera provides about 0.5 to 2 meters per pixel resolution. This means the average person would fit within about the length of one pixel from the narrow angle camera’s orbital images. It can clearly see large rocks and the Apollo lunar landing sites.
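As a rough illustration of those two resolutions (the football-field length and the adult height used here are approximate assumptions, not values from LROC documentation):

```python
# Ground footprint per pixel for the two LROC cameras described above.
wac_res_m = 100.0       # Wide Angle Camera: ~100 m per pixel
nac_res_m = 2.0         # Narrow Angle Camera: ~0.5-2 m per pixel; 2 m used here
field_length_m = 91.4   # an American football field is ~100 yards long
person_height_m = 1.7   # assumed height of an average adult

print(f"One WAC pixel spans about {wac_res_m / field_length_m:.1f} football fields")
print(f"An adult spans about {person_height_m / nac_res_m:.2f} NAC pixels at 2 m resolution")
```

The numbers match the comparisons in the text: a Wide Angle Camera pixel is roughly one football field across, and a person fits within about one Narrow Angle Camera pixel at the coarser end of its range.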

If the robotic LROC has covered most of the lunar surface, why should the human crew of Artemis II look at it, at lower resolution?

Most images from space are not what would be considered “true” color, as seen by the human eye. Just like how the photos you take of an aurora in the night sky with a cellphone camera appear more dramatic than with the naked eye, the image depends on the wavelengths the detection systems are sensitive to.

Human astronauts will see the lunar surface in different colors than LROC. And something that human astronauts have that an orbital camera system cannot have is geology training. The Artemis II astronauts will make observations of the lunar far side and almost instantly interpret and adjust their observations.

The subsequent mission, Artemis III, which will include astronauts landing on the lunar surface, is currently scheduled to launch by 2028.

What’s next for Artemis II

The Artemis II crew capsule and SLS rocket are now waiting on the launchpad. Before launch, NASA still needs to complete several final checks, including testing systems while the rocket is fueled. These systems include the emergency exit for the astronauts in case something goes wrong, as well as safely moving fuel, which is made of hydrazine – a molecule made up of nitrogen and hydrogen that is incredibly energy-dense.

Completing these checks follows the old aerospace adage of “test like you fly.” They will ensure that the Artemis II astronauts have everything working on the ground before departing for the Moon.

The Conversation

Margaret Landis receives research funding from NASA. She has been a member of the Planetary Society for over 20 years.

ref. NASA’s Artemis II plans to send a crew around the Moon to test equipment and lay the groundwork for a future landing – https://theconversation.com/nasas-artemis-ii-plans-to-send-a-crew-around-the-moon-to-test-equipment-and-lay-the-groundwork-for-a-future-landing-273688

A human tendency to value expertise, not just sheer power, explains how some social hierarchies form

Source: The Conversation – USA – By Thomas Morgan, Associate Professor of Evolutionary Anthropology, Institute of Human Origins, Arizona State University

Leaders can seem to emerge from the group naturally, based on their skill and expertise. Hiraman/E+ via Getty Images

Born on the same day, Bill and Ben both grew up to have high status. But in every other way they were polar opposites.

As children, Bill was well-liked, with many friends, while Ben was a bully, picking on smaller kids. During adolescence, Bill earned a reputation for athleticism and intelligence. Ben, flanked by his henchmen, was seen as formidable and dangerous. In adulthood, Bill was admired for his decision-making and diplomacy, but Ben was feared for his aggression and intransigence.

People sought out Bill’s company and listened to his advice. Ben was avoided, but he got his way through force.

How did Ben get away with this? Well, there’s one more difference: Bill is a human, and Ben is a chimp.

This hypothetical story of Bill and Ben highlights a deep difference between human and animal social life. Many mammals exhibit dominance hierarchies: forms of inequality in which stronger individuals use strength, aggression and allies to get better access to food or mating opportunities.

Human societies are more peaceable but not necessarily more equal. We have hierarchies, too – leaders, captains and bosses. Does this mean we are no more than clothed apes, our domineering tendencies cloaked under superficial civility?

I’m an evolutionary anthropologist, part of a team of researchers who set out to come to grips with the evolutionary history of human social life and inequality.

Building on decades of discoveries, our work supports the idea that human societies are fundamentally different from those of other species. People can be coercive, but unlike other species, we also create hierarchies of prestige – voluntary arrangements that allocate labor and decision-making power according to expertise.

This tendency matters because it can inform how we, as a society, think about the kinds of social hierarchies that emerge in a workplace, on a sports team or across society more broadly. Prestige hierarchies can be steep, with clear differences between high and low status. But when they work well, they can form part of a healthy group life from which everyone benefits.

several chimpanzees walking in a loose line following each other
In other primates, leaders secure their dominant roles with physical strength and aggression.
Anup Shah/DigitalVision via Getty Images

Equal by nature?

Primate-style dominance hierarchies, along with the aggressive displays and fights that build them, are so alien to most humans that some researchers have concluded our species simply doesn’t “do” hierarchy. Add to this the limited archaeological evidence for wealth differences prior to farming, and a picture emerges of humans as a peaceful and egalitarian species, at least until agriculture upended things 12,000 years ago.

But new evidence tells a more interesting story. Even the most egalitarian groups, such as the Ju/‘hoansi and Hadza in Africa or Tsimané in South America, still show subtle inequalities in status, influence and power. And these differences matter: High-ranking men get their pick of partners, sometimes multiple partners, and go on to have more children. Archaeologists have also uncovered sites that display wealth differences even without agriculture.

So, are we more like other species than we might care to imagine, or is there still something different about human societies?

Dominance and prestige

One oddity is in how human hierarchies form. In other animals, fighting translates physical strength into dominance. In humans, however, people often happily defer to leaders, even seeking them out. This deference creates hierarchies of prestige, not dominance.

Why do people do this? One current hypothesis is that we, uniquely, live in a world that relies on complex technologies, teaching and cooperation. In this world, expertise matters. Some people know how to build a kayak; others don’t. Some people can organize a team to build a house; others need someone else to organize them. Some people are great hunters; others couldn’t catch a cold.

In a world like this, everyone keeps an eye out for who has the skills and knowledge they need. Adept individuals can translate their ability into power and status. But, crucially, this status benefits everyone, not just the person on top.

That’s the theory, but where’s the evidence?

One man watches another closely as he is woodworking
People pay attention to those who are skilled.
Virojt Changyencham/Moment via Getty Images

There are plenty of anthropological accounts of skillful people earning social status and bullies being quickly cut down. Lab studies have also found that people do keep an eye on how well others are doing, what they’re good at, and even whom others are paying attention to, and they use this to guide their own information-seeking.

What my colleagues and I wanted to do was investigate how these everyday decisions might lead to larger-scale hierarchies of status and influence.

From theory to practice

In a perfect world, we’d monitor whole societies for decades, mapping individual decisions to social consequences. In reality, this kind of study is impossible, so my team turned to a classic tool in evolutionary research: computer models. In place of real-world populations, we can build digital ones and watch their history play out in milliseconds instead of years.

In these simulated worlds, virtual people copied each other, watched whom others were learning from and accrued prestige. The setup was simple, but a clear pattern emerged: The stronger the tendency to seek out prestigious people, the steeper social influence hierarchies became.

Each dot represents a simulated person, sized according to their social influence. When prestige psychology is weak, most dots are of medium size, corresponding to an egalitarian group. When prestige psychology is strong, a handful of extremely prominent leaders emerge, as shown by the very large dots. The color of the dots corresponds to the beliefs of the simulated people. In egalitarian groups, beliefs are fluid and spread across the group. With hierarchical groups, leaders end up surrounded by like-minded followers.

Below a threshold, societies stayed mostly egalitarian; above it, they were led by a powerful few. In other words, “prestige psychology” – the mental machinery that guides whom people learn from – creates a societal tipping point.
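The feedback loop described here – deference that attracts attention, which attracts more deference – can be sketched with a toy model. This is an illustrative simulation of preferential copying, not the researchers’ actual model; the agent count, step count and prestige weighting are all assumptions:

```python
import random

def top_share(prestige_strength, n_agents=50, steps=2000, seed=1):
    """Toy prestige model: each step, a random learner copies a teacher
    chosen with probability proportional to prestige**strength; being
    copied raises the teacher's prestige. Returns the share of all
    prestige held by the single most-copied agent."""
    rng = random.Random(seed)
    prestige = [1.0] * n_agents
    for _ in range(steps):
        learner = rng.randrange(n_agents)
        # Learners never copy themselves; everyone else is weighted by prestige.
        weights = [0.0 if j == learner else prestige[j] ** prestige_strength
                   for j in range(n_agents)]
        teacher = rng.choices(range(n_agents), weights=weights)[0]
        prestige[teacher] += 1.0
    return max(prestige) / sum(prestige)

print(f"weak deference:   top agent holds {top_share(0.5):.0%} of all prestige")
print(f"strong deference: top agent holds {top_share(2.0):.0%} of all prestige")
```

With a sublinear weighting, copying stays spread across the group; with a superlinear weighting, early luck snowballs and one agent ends up holding most of the prestige – the tipping-point behavior described above.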

The next step was to bring real humans into the lab and measure their tendency to follow prestigious leaders. This can tell us whether we, as a species, fall above or below the tipping point – that is, whether our psychology favors egalitarian or hierarchical groups.

To do this, my colleagues and I put participants into small groups and gave them problems to solve. We recorded whom participants listened to, told them whom their group mates were learning from, and used this information to estimate the strength of the human “hierarchy-forming” tendency. It was high – well above the tipping point for hierarchies to emerge – and our experimental groups ended up with clear leaders.

One doubt lingered: Our volunteers were from the modern United States. Can they really tell us about the whole human species?

Rather than repeat the study across dozens of cultures, we returned to modeling. This time, we let prestige psychology evolve. Each simulated person had their own tendency for how much they deferred to prestige. It guided their actions, affected their fitness and was passed on to their children with minor mutations.

Over thousands of generations, natural selection identified the most successful psychology: a sensitivity to prestige nearly identical to that we measured in real humans – and strong enough to produce the same sharp hierarchies.

Inequality for everyone?

In other primates, being at the bottom of the social ladder can be brutal, with routine harassment and bullying by group mates. Thankfully, human prestige hierarchies look nothing like this. Even without any coercion, people often choose to follow skilled or respected individuals because good leadership makes life easier for everyone. Natural selection, it seems, has favored the psychology that makes this possible.

Of course, reality is messier than any model or lab experiment. Our simulations and experiment didn’t allow for coercion or bullying, and so they give an optimistic view of how human societies might work – not how they do.

In the real world, leaders can selfishly abuse their authority or simply fail to deliver collective benefits. Even in our experiment, some groups rallied around below-average teammates, the snowballing tendency of prestige swamping signs of their poor ability. Leaders should always be held to account for the outcomes of their choices, and an evolutionary basis to prestige does not justify the oppression of the powerless by the powerful.

So hierarchies remain a double-edged sword. Human societies are unique in the benefits that hierarchies can bring to followers, but the old forces of dominance and exploitation have not disappeared. Still, the fact that natural selection favored a psychology that drives voluntary deference and powerful leaders suggests that, most of the time, prestige hierarchies are worth the risks. When they work well, we all reap the rewards.

The Conversation

Thomas Morgan has received research funding from DARPA, the NSF and the Templeton World Charity Foundation.

ref. A human tendency to value expertise, not just sheer power, explains how some social hierarchies form – https://theconversation.com/a-human-tendency-to-value-expertise-not-just-sheer-power-explains-how-some-social-hierarchies-form-271711

Certain brain injuries may be linked to violent crime – identifying them could help reveal how people make moral choices

Source: The Conversation – USA – By Christopher M. Filley, Professor Emeritus of Neurology, University of Colorado Anschutz Medical Campus

Neurological evidence is widely used in murder trials, but it’s often unclear how to interpret it. gorodenkoff/iStock via Getty Images Plus

On Oct. 25, 2023, a 40-year-old man named Robert Card opened fire with a semi-automatic rifle at a bowling alley and nearby bar in Lewiston, Maine, killing 18 people and wounding 13 others. Card was found dead by suicide two days later. His autopsy revealed extensive damage to the white matter of his brain thought to be related to a traumatic brain injury, which some neurologists proposed may have played a role in his murderous actions.

Neurological evidence such as magnetic resonance imaging, or MRI, is widely used in court to show whether and to what extent brain damage induced a person to commit a violent act. That type of evidence was introduced in 12% of all murder trials and 25% of death penalty trials between 2014 and 2024. But it’s often unclear how such evidence should be interpreted because there’s no agreement on what specific brain injuries could trigger behavioral shifts that might make someone more likely to commit crimes.

We are two behavioral neurologists and a philosopher of neuroscience who have been collaborating over the past six years to investigate whether damage to specific regions of the brain might be somehow contributing to people’s decision to commit seemingly random acts of violence – as Card did.

With new technologies that go beyond simply visualizing the brain to analyze how different brain regions are connected, neuroscientists can now examine specific brain regions involved in decision-making and how brain damage may predispose a person to criminal conduct. This work may in turn shed light on how exactly the brain plays a role in people’s capacity to make moral choices.

Linking brain and behavior

The observation that brain damage can cause changes to behavior stretches back hundreds of years. In the 1860s, the French physician Paul Broca was one of the first in the history of modern neurology to link a mental capacity to a specific brain region. Examining the autopsied brain of a man who had lost the ability to speak after a stroke, Broca found damage to an area roughly beneath the left temple.

Broca could study his patients’ brains only at autopsy. So he concluded that damage to this single area caused the patient’s speech loss – and therefore that this area governs people’s ability to produce speech. The idea that cognitive functions were localized to specific brain areas persisted for well over a century, but researchers today know the picture is more complicated.

Researchers use powerful brain imaging technologies to identify how specific brain areas are involved in a variety of behaviors.

As brain imaging tools such as MRI have improved since the early 2000s it’s become increasingly possible to safely visualize people’s brains in stunning detail while they are alive. Meanwhile, other techniques for mapping connections between brain regions have helped reveal coordinated patterns of activity across a network of brain areas related to certain mental tasks.

With these tools, investigators can detect areas that have been damaged by brain disorders, such as strokes, and test whether that damage can be linked to specific changes in behavior. Then they can explore how that brain region interacts with others in the same network to get a more nuanced view of how the brain regulates those behaviors.

This approach can be applied to any behavior, including crime and immorality.

White matter and criminality

Complex human behaviors emerge from interacting networks that are made up of two types of brain tissue: gray matter and white matter.

Gray matter consists of regions of nerve cell bodies and branching nerve fibers called dendrites, as well as points of connection between nerve cells. It’s in these areas that the brain’s heavy computational work is done. White matter, so named because of a pale, fatty substance called myelin that wraps the bundles of nerves, carries information between gray matter areas like highways in the brain.

Brain imaging studies of criminality going back to 2009 have suggested that damage to a swath of white matter called the right uncinate fasciculus is somehow involved when people commit violent acts. This tract connects the right amygdala, an almond-shaped structure deep in the brain involved in emotional processing, with the right orbitofrontal cortex, a region in the front of the brain involved in complex decision-making. However, it wasn’t clear from these studies whether damage to this tract caused people to commit crimes or was just a coincidence.

In a 2025 study, we analyzed 17 cases from the medical literature in which people with no criminal history committed crimes such as murder, assault and rape after experiencing brain damage from a stroke, tumor or traumatic brain injury. We first mapped the location of damage in their brains using an atlas of brain circuitry derived from people whose brains were uninjured. Then we compared imaging of the damage with brain imaging from more than 700 people who had not committed crimes but who had a brain injury causing a different symptom, such as memory loss or depression.

An MRI scan of the brain with the right uncinate fasciculus highlighted
Brain injuries that may play a role in violent criminal behavior damage white matter connections in the brain, shown here in orange and yellow, especially a specific tract called the right uncinate fasciculus.
Isaiah Kletenik, CC BY-NC-ND

In the people who committed crimes, we found the brain region that popped up the most often was the right uncinate fasciculus. Our study aligns with past research in linking criminal behavior to this brain area, but the way we conducted it makes our findings more definitive: These people committed their crimes only after they sustained their brain injuries, which suggests that damage to the right uncinate fasciculus played a role in triggering their criminal behavior.

These findings have an intriguing connection to research on morality. Other studies have linked strokes that damaged the right uncinate fasciculus with a loss of empathy, suggesting this tract somehow regulates emotions that affect moral conduct. Meanwhile, other work has shown that people with psychopathy, which often aligns with immoral behavior, have abnormalities in the amygdala and the orbitofrontal cortex, the regions that are directly connected by the uncinate fasciculus.

Neuroscientists are now testing whether the right uncinate fasciculus may be synthesizing information within a network of brain regions dedicated to moral values.

Making sense of it all

As intriguing as these findings are, it is important to note that many people with damage to their right uncinate fasciculus do not commit violent crimes. Similarly, most people who commit crimes do not have damage to this tract. This means that even if damage to this area can contribute to criminality, it’s only one of many possible factors underlying it.

Still, knowing that neurological damage to a specific brain structure can increase a person’s risk of committing a violent crime can be helpful in various contexts. For example, it can help the legal system assess neurological evidence when judging criminal responsibility. Similarly, doctors may be able to use this knowledge to develop specific interventions for people with brain disorders or injuries.

More broadly, understanding the neurological roots of morality and moral decision-making provides a bridge between science and society, revealing constraints that define how and why people make choices.

The Conversation

Isaiah Kletenik receives funding from the NIH.

Nothing to disclose.

Christopher M. Filley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Certain brain injuries may be linked to violent crime – identifying them could help reveal how people make moral choices – https://theconversation.com/certain-brain-injuries-may-be-linked-to-violent-crime-identifying-them-could-help-reveal-how-people-make-moral-choices-262034

Building with air – how nature’s hole-filled blueprints shape manufacturing

Source: The Conversation – USA – By Anne Schmitz, Associate Professor of Engineering, University of Wisconsin-Stout

Engineers use structures found in nature – like the honeycomb – to create lightweight, sturdy materials. Matthew T. Rader, CC BY-NC-SA

If you break open a chicken bone, you won’t find a solid mass of white material inside. Instead, you will see a complex, spongelike network of tiny struts and pillars, and a lot of empty space.

It looks fragile, yet that internal structure allows a bird’s wing to withstand high winds while remaining light enough for flight. Nature rarely builds with solid blocks. Instead, it builds with clever, porous patterns to maximize strength while minimizing weight.

A cross-section view of bone, showing large, roughly circular holes in a white material.
Cross-section of the bone of a bird’s skull: Holes keep the material light enough that the bird can fly, but it’s still sturdy.
Steve Gschmeissner/Science Photo Library via Getty Images

Human engineers have always envied this efficiency. You can see it in the hexagonal perfection of a honeycomb, which uses the least amount of wax to store the most honey, and in the internal spiraling architecture of seashells that resist crushing pressures.

For centuries, however, manufacturing limitations meant engineers couldn’t easily copy these natural designs. Traditional manufacturing has usually been subtractive, meaning it starts with a heavy block of metal that is carved down, or formative, which entails pouring liquid plastic into a mold. Neither method can easily create complex, spongelike interiors hidden inside a solid shell.

If engineers wanted to make a part stronger, they generally had to make it thicker and heavier. This approach is often inefficient, wastes material and results in heavier products that require more energy to transport.

I am a mechanical engineer and associate professor at the University of Wisconsin-Stout, where I research the intersection of advanced manufacturing and biology. For several years, my work has focused on using additive manufacturing to create materials that, like a bird’s wing, are both incredibly light and capable of handling intense physical stress. While these “holey” designs have existed in nature for millions of years, it is only recently that 3D printing has made it possible for us to replicate them in the lab.

The invisible architecture

That paradigm changed with the maturation of additive manufacturing, commonly known as 3D printing, when it evolved from a niche prototyping tool into a robust industrial force. While the technology was first patented in the 1980s, it truly took off over the past decade as it became capable of producing end-use parts for high-stakes industries like aerospace and health care.

A 3D printer printing out an object filled with holes.
3D printing makes it far easier to manufacture lightweight, hole-filled materials.
kynny/iStock via Getty Images

Instead of cutting away material, printers build objects layer by layer, depositing plastic or metal powder only where it’s needed, based on a digital file. This technology unlocked a new frontier in materials science focused on mesostructures.

A mesostructure represents the in-between scale. It is not the microscopic atomic makeup of the material, nor is it the macroscopic overall shape of the object, like a whole shoe. It is the internal architecture, including the engineered pattern of air and material hidden inside.

It’s the difference between a solid brick and the intricate iron latticework of the Eiffel Tower. Both are strong, but one uses vastly less material to achieve that strength because of how the empty space is arranged.

From the lab to your closet

While the concept of using additive manufacturing to create parts that take advantage of mesostructures started in research labs around the year 2000, consumers are now seeing these bio-inspired designs in everyday products.

The footwear industry is a prime example. If you look closely at the soles of certain high-end running shoes, you won’t see a solid block of foam. Instead, you will see a complex, weblike lattice structure that looks suspiciously like the inside of a bird bone. This printed design mimics the springiness and weight distribution found in natural porous structures, offering tuned performance that solid foam cannot match.

Engineers use the same principle to improve safety gear. Modern bike helmets and football helmet liners are beginning to replace traditional foam padding with 3D-printed lattices. These tiny, repeating jungle gym structures are designed to crumple and rebound to absorb the energy more efficiently than solid materials, much like how the porous bone inside your own skull protects your brain.

Testing the limits

In my research, I look for the rules nature uses to build strong objects.

For example, seashells are tough because they are built like a brick wall, with hard mineral blocks held together by a thin layer of stretchy glue. This pattern allows the hard bricks to slide past each other instead of snapping when put under pressure. The shell absorbs energy and stops cracks from spreading, which makes the final structure much tougher than a solid piece of the same material.

I use advanced computer models to crush thousands of virtual designs to see exactly when and how they fail. I have even used neural networks, a type of artificial intelligence, to find the best patterns for absorbing energy.

My studies have shown that a wavy design can be very effective, especially when we fine-tune the thickness of the lines and the number of turns in the pattern. By finding these perfect combinations, we can design products that fail gradually and safely – much like the crumple zone on the front of a car.

By understanding the mechanics of these structures, engineers can tailor them for specific jobs, making one area of a product stiff and another area flexible within a single continuous printed part.

The sustainable future

Beyond performance, mimicking nature’s less-is-more approach is a significant win for sustainability. By “printing air” into the internal structure of a product, manufacturers can use significantly less raw material while maintaining the necessary strength.

As industrial 3D printing becomes faster and cheaper, manufacturing will move further away from the solid-block era and closer to the elegant efficiency of the biological world. Nature has spent millions of years perfecting these blueprints through evolution – and engineers are finally learning how to read them.

The Conversation

Anne Schmitz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Building with air – how nature’s hole-filled blueprints shape manufacturing – https://theconversation.com/building-with-air-how-natures-hole-filled-blueprints-shape-manufacturing-270640

The Supreme Court may soon diminish Black political power, undoing generations of gains

Source: The Conversation – USA – By Robert D. Bland, Assistant Professor of History and Africana Studies, University of Tennessee

U.S. Rep. Cleo Fields, a Democrat who represents portions of central Louisiana in the House, could lose his seat if the Supreme Court invalidates Louisiana’s congressional map. AP Photo/Gerald Herbert

Back in 2013, the Supreme Court tossed out a key provision of the Voting Rights Act regarding federal oversight of elections. It appears poised to abolish another pillar of the law.

In a case known as Louisiana v. Callais, the court appears ready to rule against Louisiana and its Black voters. In doing so, the court may well abolish Section 2 of the Voting Rights Act, a provision that prohibits any discriminatory voting practice or election rule that results in less opportunity for political clout for minority groups.

The dismantling of Section 2 would open the floodgates for widespread vote dilution by allowing primarily Southern state legislatures to redraw political districts, weakening the voting power of racial minorities.

The case was brought by a group of Louisiana citizens who declared that the federal mandate under Section 2 to draw a second majority-Black district violated the equal protection clause of the 14th Amendment and thus served as an unconstitutional act of racial gerrymandering.

There would be considerable historical irony if the court decides to use the 14th Amendment to provide the legal cover for reversing a generation of Black political progress in the South. Initially designed to enshrine federal civil rights protections for freed people facing a battery of discriminatory “Black Codes” in the postbellum South, the 14th Amendment’s equal protection clause has been the foundation of the nation’s modern rights-based legal order, ensuring that all U.S. citizens are treated fairly and preventing the government from engaging in explicit discrimination.

The Reconstruction-era amendments to the Constitution, including the 14th Amendment, formed the cornerstone of the nation’s “second founding” and made possible the first cohort of Black elected officials.

I am a historian who studies race and memory during the Civil War era. As I highlight in my new book “Requiem for Reconstruction,” the struggle over the nation’s second founding not only highlights how generational political progress can be reversed but also provides a lens into the specific historical origins of racial gerrymandering in the United States.

Without understanding this history – and the forces that unraveled Reconstruction’s initial promise of greater racial justice – we cannot fully comprehend the roots of those forces that are reshaping our contemporary political landscape in a way that I believe subverts the true intentions of the Constitution.

The long history of gerrymandering

Political gerrymandering, or shaping political boundaries to benefit a particular party, has been considered constitutional since the nation’s 18th-century founding, but racial gerrymandering is a practice with roots in the post-Civil War era.

Going beyond the routine practice of redrawing district lines after each decennial census, late 19th-century Democratic state legislatures used that cartographic tool to create a litany of so-called Black districts across the postbellum South.

The nation’s first wave of racial gerrymandering emerged as a response to the political gains Southern Black voters made during the administration of President Ulysses S. Grant in the 1870s. Georgia, Alabama, Florida, Mississippi, North Carolina and Louisiana all elected Black congressmen during that decade. During the 42nd Congress, which met from 1871 to 1873, South Carolina sent Black men to the House from three of its four districts.

A group portrait depicts the first Black senator and a half-dozen Black representatives.
The first Black senator and representatives were elected in the 1870s, as shown in this historic print.
Library of Congress

Initially, the white Democrats who ruled the South responded to the rise of Black political power by crafting racist narratives that insinuated that the emergence of Black voters and Black officeholders was a corruption of the proper political order. These attacks often provided a larger cultural pretext for the campaigns of extralegal political violence that terrorized Black voters in the South, assassinated political leaders, and marred the integrity of several of the region’s major elections.

Election changes

Following these pogroms during the 1870s, southern legislatures began seeking legal remedies to make permanent the counterrevolution of “Redemption,” which sought to undo Reconstruction’s advancement of political equality. A generation before the Jim Crow legal order of segregation and discrimination was established, southern political leaders began to disfranchise Black voters through racial gerrymandering.

These newly created Black districts gained notoriety for their cartographic absurdity. In Mississippi, a shoestring-shaped district was created to snake and swerve alongside the state’s famous river. North Carolina created the “Black Second” to concentrate its African American voters in a single district. Alabama’s “Black Fourth” did similar work, leaving African American voters only one possible district in which they could affect the outcome in the state’s central Black Belt.

South Carolina’s “Black Seventh” was perhaps the most notorious of these acts of Reconstruction-era gerrymandering. The district “sliced through county lines and ducked around Charleston back alleys” – anticipating the current trend of sophisticated, computer-targeted political redistricting.

With 30,000 more voters than the next-largest congressional district in the state, South Carolina’s Seventh District radically transformed the state’s political landscape, confining the influence of the state’s Black majority to that single racially gerrymandered district.

A map showing South Carolina's congressional districts in the 1880s.
South Carolina’s House map was gerrymandered in 1882 to minimize Black representation, heavily concentrating Black voters in the 7th District.
Library of Congress, Geography and Map Division

Although federal courts during the late 19th century remained painfully silent on the constitutionality of these antidemocratic measures, contemporary observers saw these redistricting efforts as more than a simple act of seeking partisan advantage.

“It was the high-water mark of political ingenuity coupled with rascality, and the merits of its appellation,” observed one Black congressman who represented South Carolina’s 7th District.

Racial gerrymandering in recent times

The political gains of the Civil Rights Movement of the 1950s and 1960s, sometimes called the “Second Reconstruction,” were made tangible by the 1965 Voting Rights Act. The law revived the postbellum 15th Amendment, which prevented states from creating voting restrictions based on race. That amendment had been made a dead letter by Jim Crow state legislatures and an acquiescent Supreme Court.

In contrast to the post-Civil War struggle, the Second Reconstruction had the firm support of the federal courts. The Supreme Court affirmed the principle of “one person, one vote” in its 1962 Baker v. Carr and 1964 Reynolds v. Sims decisions – upending the Solid South’s landscape of political districts that had long been marked by sparsely populated Democratic districts controlled by rural elites.

The Voting Rights Act gave the federal government oversight over any changes in voting policy that might affect historically marginalized groups. Since passage of the 1965 law and its subsequent revisions, racial gerrymandering has largely served the purpose of creating districts that preserve and amplify the political representation of historically marginalized groups.

This generational work may soon be undone by the current Supreme Court. The court, which heard oral arguments in the Louisiana case in October 2025, will release its decision by the end of June 2026.

The Conversation

Robert D. Bland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Supreme Court may soon diminish Black political power, undoing generations of gains – https://theconversation.com/the-supreme-court-may-soon-diminish-black-political-power-undoing-generations-of-gains-274179

Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism

Source: The Conversation – USA – By Nicholas Jacobs, Goldfarb Family Distinguished Chair in American Government, Colby College; Institute for Humane Studies

Protesters against Immigration and Customs Enforcement march through Minneapolis, Minn., on Jan. 25, 2026. Roberto Schmidt/AFP via Getty Images

An unusually large majority of Americans agree that the recent scenes of Immigration and Customs Enforcement operations in Minneapolis are disturbing.

Federal immigration agents have deployed with weapons and tactics more commonly associated with military operations than with civilian law enforcement. The federal government has sidelined state and local officials, and it has cut them out of investigations into whether state and local law has been violated.

It’s understandable to look at what’s happening and reach a familiar conclusion: This looks like a slide into authoritarianism.

There is no question that the threat of democratic backsliding is real. President Donald Trump has long treated federal authority not as a shared constitutional set of rules and obligations but as a personal instrument of control.

In my research on the presidency and state power, including my latest book with Sidney Milkis, “Subverting the Republic,” I have argued that the Trump administration has systematically weakened the norms and practices that once constrained executive power – often by turning federalism itself into a weapon of national administrative power.

But there is another possibility worth taking seriously, one that cuts against Americans’ instincts at moments like this. What if what America is seeing is not institutional collapse but institutional friction: the system doing what it was designed to do, even if it looks ugly when it does it?

For many Americans, federalism is little more than a civics term – something about states’ rights or decentralization.

In practice, however, federalism functions less as a clean division of authority and more as a system for managing conflict among multiple governments with overlapping jurisdiction. Federalism does not block national authority. It ensures that national decisions are subject to challenge, delay and revision by other levels of government.

Dividing up authority

At its core, federalism works through a small number of institutional mechanics – concrete ways of keeping authority divided, exposed and contestable. Minneapolis shows each of them in action.

First, there’s what’s called “jurisdictional overlap.”

State, local and federal authorities all claim the right to govern the same people and places. In Minneapolis, that overlap is unavoidable: Federal immigration agents, state law enforcement, city officials and county prosecutors all assert authority over the same streets, residents and incidents. And they disagree sharply about how that authority should be exercised.

Second, there’s institutional rivalry.

Because authority is divided, no single level of government can fully monopolize legitimacy. And that creates tension. That rivalry is visible in the refusal of state and local officials across the country to simply defer to federal enforcement.

Instead, governors, mayors and attorneys general have turned to courts, demanded access to evidence and challenged efforts to exclude them from investigations. That’s evident in Minneapolis and also in states that have witnessed the administration’s deployment of National Guard troops against the will of their Democratic governors.

It’s easy to imagine a world where state and local prosecutors would not have to jump through so many procedural hoops to get access to evidence in the deaths of citizens within their jurisdiction. But consider the alternative.

If state and local officials could not seek evidence without federal consent – the absence of federalism – or if local institutions had no standing to contest how national power is exercised there, federal authority would operate not just forcefully but without meaningful political constraint.

Protesters fight with law enforcement as tear gas fills the air.
Protesters clash with law enforcement after a federal agent shot and killed a man on Jan. 24, 2026, in Minneapolis, Minn.
Arthur Maiorella/Anadolu via Getty Images

Third, confrontation is local and place-specific.

Federalism pushes conflict into the open. Power struggles become visible, noisy and politically costly. What is easy to miss is why this matters.

Federalism was necessary at the time of the Constitution’s creation because Americans did not share a single political identity. They could not decide whether they were members of one big community or many small communities.

In maintaining their state governments and creating a new federal government, they chose to be both at the same time. And although American politics nationalized to a remarkable degree over the 20th century, federal authority is still exercised in concrete places. It must still contend with communities that have their own civic identities and whose moral expectations may differ sharply from those assumed by national actors.

In Minneapolis it has collided with a political community that does not experience federal immigration enforcement as ordinary law enforcement.

The chaos of federalism

Federalism is not designed to keep things calm. It is designed to keep power unsettled – so that authority cannot move smoothly, silently or all at once.

By dividing responsibility and encouraging overlap, federalism ensures that power has to push, explain and defend itself at every step.

“A little chaos,” the scholar Daniel Elazar has said, “is a good thing!”

As chaos goes, though, federalism is more often credited with enabling Trump’s ascent. He won the presidency through the Electoral College – a federalist institution that allocates power by state rather than by national popular vote, rewarding geographically concentrated support even without a national majority.

Partisan redistricting, which takes place in the states, further amplifies that advantage by insulating Republicans in Congress from electoral backlash. And decentralized election administration – in which local officials control voter registration, ballot access and certification – can produce vulnerabilities that Trump has exploited in contesting state certification processes and pressuring local election officials after close losses.

Forceful but accountable

It’s helpful to also understand how Minneapolis is different from the most well-known instances of aggressive federal power imposed on unwilling states: the civil rights era.

Hundreds of students protest the arrival of a Black student to their school.
Hundreds of Ole Miss students call for continued segregation on Sept. 20, 1962, as James Meredith prepares to become the first Black man to attend the university.
AP Photo

Then, too, national authority was asserted forcefully. Federal marshals escorted the Black student James Meredith into the University of Mississippi in 1962 over the objections of state officials and local crowds. In Little Rock in 1957, President Dwight D. Eisenhower federalized the Arkansas National Guard and sent in U.S. Army troops after Gov. Orval Faubus attempted to block the racial integration of Central High School.

Violence accompanied these interventions. Riots broke out in Oxford, Mississippi. Protesters and bystanders were killed in clashes with police and federal authorities in Birmingham and Selma, Alabama.

What mattered during the civil rights era was not widespread agreement at the outset – nationwide resistance to integration was fierce and sustained. Rather, it was the way federal authority was exercised through existing constitutional channels.

Presidents acted through courts, statutes and recognizable chains of command. State resistance triggered formal responses. Federal power was forceful, but it remained legible, bounded and institutionally accountable.

Those interventions eventually gained public acceptance. But in that process, federalism was tarnished by its association with Southern racism and recast as an obstacle to progress rather than the institutional framework through which progress was contested and enforced.

After the civil rights era, many Americans came to assume that national power would normally be aligned with progressive moral aims – and that when it was, federalism was a problem to be overcome.

Minneapolis exposes the fragility of that assumption. Federalism does not distinguish between good and bad causes. It does not certify power because history is “on the right side.” It simply keeps power contestable.

When national authority is exercised without broad moral agreement, federalism does not stop it. It only prevents it from settling quietly.

Why talk about federalism now, at a time of widespread public indignation?

Because in the long arc of federalism’s development, it has routinely proven to be the last point in our constitutional system where power runs into opposition. And when authority no longer encounters rival institutions and politically independent officials, authoritarianism stops being an abstraction.

The Conversation

Nicholas Jacobs does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism – https://theconversation.com/federal-power-meets-local-resistance-in-minneapolis-a-case-study-in-how-federalism-staves-off-authoritarianism-274685

Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk

Source: The Conversation – USA (3) – By Michael I Goran, Professor of Pediatrics and Vice Chair for Research, University of Southern California

Consuming less highly processed foods and sugary drinks and more whole grains can meaningfully improve your health. fizkes/iStock via Getty Images Plus

The Dietary Guidelines for Americans aim to translate the most up-to-date nutrition science into practical advice for the public as well as to guide federal policy for programs such as school lunches.

But the newest version of the guidelines, released on Jan. 7, 2026, seems to be spurring more confusion than clarity about what people should be eating.

I’ve been studying nutrition and chronic disease for over 35 years, and in 2020 I wrote “Sugarproof,” a book about reducing consumption of added sugars to improve health. I served as a scientific adviser for the new guidelines.

I chose to participate in this process, despite its accelerated and sometimes controversial nature, for two reasons. First, I wanted to help ensure the review was conducted with scientific rigor. And second, federal health officials prioritized examining areas where the evidence has become especially strong – particularly food processing, added sugars and sugary beverages, which closely aligns with my research.

My role, along with colleagues, was to review and synthesize that evidence and help clarify where the science is strongest and most consistent.

The latest dietary guidelines, published on Jan. 7, 2026, have received mixed reviews from nutrition experts.

What’s different in the new dietary guidelines?

The dietary guidelines, first published in 1980, are updated every five years. The newest version differs from the previous versions in a few key ways.

For one thing, the new report is shorter, at nine pages rather than 400. It offers simpler advice directly to the public, whereas previous guidelines were more directed at policymakers and nutrition experts.

Also, the new guidelines reflect an important paradigm shift in defining a healthy diet. For the past half-century, dietary advice has been shaped by a focus on general dietary patterns and targets for individual nutrients, such as protein, fat and carbohydrate. The new guidelines instead emphasize overall diet quality.

Some health and nutrition experts have criticized specific aspects of the guidelines, such as how the current administration developed them, or how they address saturated fat, beef, dairy, protein and alcohol intake. These points have dominated the public discourse. But while some of them are valid, they risk overshadowing the strongest, least controversial and most actionable conclusions from the scientific evidence.

What we found in our scientific assessment was that just a few straightforward changes to your diet – specifically, reducing highly processed foods and sugary drinks, and increasing whole grains – can meaningfully improve your health.

What the evidence actually shows

My research assistants and I evaluated the conclusions of studies on consuming sugar, highly processed foods and whole grains, and assessed how well they were conducted and how likely they were to be biased. We graded the overall quality of the findings as low, moderate or high based on standardized criteria such as their consistency and plausibility.

We found moderate- to high-quality evidence that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.

Similarly, we found moderate-quality evidence that people who drink more sugar-sweetened beverages have a higher risk of obesity and Type 2 diabetes, as well as strong evidence that children who drink fruit juice have a higher risk of obesity. And consuming more beverages containing artificial sweeteners is associated with a higher risk of death from any cause and of Alzheimer’s disease, based on moderate-quality evidence.

Whole grains, on the other hand, have a protective effect on health. We found high-quality evidence that people who eat more whole grains have a lower risk of cardiovascular disease and death from any cause. People who consume more dietary fiber, which is abundant in whole grains, have a lower risk of Type 2 diabetes and death from any cause, based on moderate-quality research.

According to the research we evaluated, it’s these aspects – too many highly processed foods and sweetened beverages, and too few whole grain foods – that are significantly contributing to the epidemic of chronic diseases such as obesity, Type 2 diabetes and heart disease in this country – and not protein, beef or dairy intake.

Different types of food on rustic wooden table
Evidence suggests that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.
fcafotodigital/E+ via Getty Images

From scientific evidence to guidelines

Our report was the first one to recommend that the guidelines explicitly mention decreasing consumption of highly processed foods. Overall, though, research on the negative health effects of sugar and processed foods and the beneficial effects of whole grains has been building for many years and has been noted in previous reports.

On the other hand, research on how strongly protein, red meat, saturated fat and dairy are linked with chronic disease risk is much less conclusive. Yet the 2025 guidelines encourage increasing consumption of those foods – a change from previous versions.

The inverted pyramid imagery used to represent the 2025 guidelines also emphasizes protein – specifically, meat and dairy – by putting these foods in a highly prominent spot in the top left corner of the image. Whole grains sit at the very bottom, and except for milk, beverages are not represented.

Scientific advisers were not involved in designing the image.

Making small changes that can improve your health

An important point we encountered repeatedly in reviewing the research was that even small dietary changes could meaningfully lower people’s chronic disease risks.

For example, consuming just 10% fewer calories per day from highly processed foods could lower the risk of diabetes by 14%, according to one of the lead studies we relied on for the evidence review. Another study showed that eating one less serving of highly processed foods per day lowers the risk of heart disease by 4%.

You can achieve that simply by switching from a highly processed packaged bread to one with fewer ingredients or replacing one fast-food meal per week with a simple home-cooked meal. Or, switch your preferred brands of daily staples such as tomato sauce, yogurt, salad dressing, crackers and nut butter to ones that have fewer ingredients like added sugars, sweeteners, emulsifiers and preservatives.

Cutting down on sugary beverages – for example, soda, sweet teas, juices and energy drinks – has an equally dramatic effect. Simply drinking the equivalent of one can less per day lowers the risk of diabetes by 26% and the risk of heart disease by 14%.

And eating just one additional serving of whole grains per day – say, replacing packaged bread with whole grain bread – results in an 18% lower risk of diabetes and a 13% lower risk of death from all causes combined.

How to adopt ‘kitchen processing’

Another way to make these improvements is to take basic elements of food processing back from manufacturers and return them to your own kitchen – what I call “kitchen processing.” Humans have always processed food by chopping, cooking, fermenting, drying or freezing. The problem with highly processed foods isn’t just the industrial processing that transforms the chemical structure of natural ingredients, but also what chemicals are added to improve taste and shelf life.

Kitchen processing, though, can instead be optimized for health and for your household’s flavor preferences – and you can easily do it without cooking from scratch. Here are some simple examples:

  • Instead of flavored yogurts, buy plain yogurt and add your favorite fruit or some homemade simple fruit compote.

  • Instead of sugary or diet beverages, use a squeeze of citrus or even a splash of juice to flavor plain sparkling water.

  • Start with a plain whole grain breakfast cereal and add your own favorite source of fiber and/or fruit.

  • Instead of packaged “energy bars,” make your own preferred mixture of nuts, seeds and dried fruit.

  • Instead of bottled salad dressing, make a simple one at home with olive oil, vinegar or lemon juice, a dab of mustard and other flavorings of choice, such as garlic, herbs, or honey.

You can adapt this way of thinking to the foods you eat most often by making similar types of swaps. They may seem small, but they will build over time and have an outsized effect on your health.

The Conversation

Michael I Goran receives funding from the National Institutes of Health and the Dr Robert C and Veronica Atkins Foundation. He is a scientific advisor to Eat Real (non-profit promoting better school meals) and has previously served as a scientific advisor to Bobbi (infant formula) and Begin Health (infant probiotics).

ref. Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk – https://theconversation.com/confused-by-the-new-dietary-guidelines-focus-on-these-simple-evidence-based-shifts-to-lower-your-chronic-disease-risk-273701

Data centers told to pitch in as storms and cold weather boost power demand

Source: The Conversation – USA (2) – By Nikki Luke, Assistant Professor of Human Geography, University of Tennessee

During winter storms, physical damage to wires and high demand for heating put pressure on the electrical grid. Brett Carlsen/Getty Images

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow and freezing temperatures, it left more than a million people without power, mostly in the Southeast.

Scrambling to meet higher than average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels.

Energy Secretary Chris Wright agreed and took another step, too. He authorized PJM and ERCOT – the company that manages the Texas power grid – as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators.

The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these backup generators power the facilities themselves and do not send power back to the grid. But Wright explained that their “industrial diesel generators” could “generate 35 gigawatts of power, or enough electricity to power many millions of homes.”

We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through and recover from winter storms.

A close-up of a rack of electronics.
The electronics in data centers consume large amounts of electricity.
RJ Sangosti/MediaNews Group/The Denver Post via Getty Images

Data centers use enormous quantities of energy

Before Wright’s order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies.

This is a pressing question, because data centers’ power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM’s.

And data centers are expected to need even more power. Estimates vary widely, but the Lawrence Berkeley National Laboratory anticipates that data centers’ share of U.S. electricity production could jump from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects peak load to grow by 32 gigawatts by 2030 – enough power to supply 30 million new homes – with nearly all of that growth going to new data centers. PJM’s job is to coordinate that energy – and figure out how much the public, or others, should pay to supply it.
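As a rough sanity check, the gigawatt-to-homes conversions above follow from simple arithmetic. The sketch below assumes an average U.S. home draws on the order of 1 kilowatt on average (roughly 9,000-10,000 kilowatt-hours per year) – a common rule of thumb, not a figure from the article:

```python
# Rough sanity check of gigawatt-to-households conversions.
# ASSUMPTION: an average U.S. home draws about 1.07 kW on average
# (~9,400 kWh/year). This rule-of-thumb figure is not from the article.

def homes_powered(gigawatts: float, kw_per_home: float = 1.07) -> float:
    """Convert a capacity in gigawatts to an approximate number of homes."""
    return gigawatts * 1e6 / kw_per_home  # 1 GW = 1,000,000 kW

# PJM's projected 32 GW of peak load growth comes out on the order
# of 30 million homes, matching the article's figure.
print(f"{homes_powered(32):,.0f} homes")
```

The same arithmetic puts the 35 gigawatts of backup diesel capacity cited by the energy secretary in the "many millions of homes" range, and the 100 gigawatts of flexible capacity discussed later at roughly 70 million households.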

The race to build new data centers and find the electricity to power them has sparked enormous public backlash about how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution.

Local ordinances, regulations created by state utility commissions and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time.

But since the 1970s, utilities have encouraged “demand response” programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation.

Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries and electric vehicles, these distributed energy resources can be dispatched as “virtual power plants.”

A different approach

The terms of data center agreements with local governments and utilities often aren’t available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use.

In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts and airline reservation systems.

Yet, data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response. In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide “data center demand response by targeting machine learning workloads,” shifting “non-urgent compute tasks” away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers’ power use off the grid during power shortages.

An aerial view of metal equipment and wires with a city skyline in the background.
Large amounts of power move through parts of the U.S. electricity grid.
Joe Raedle/Getty Images

Flexibility for the future

One study found that if data centers committed to using power flexibly, an additional 100 gigawatts of demand – roughly enough to power 70 million households – could be accommodated without building new generation and transmission.

In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their electricity needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility: grid operators can tap into batteries, shift thermostats or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.

Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms.

Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn’t enough electricity for their homes but because their power lines were down.

The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that would not actually exist.

Onsite diesel generators can reduce strain on the grid in an emergency when large users such as data centers switch them on. Yet they are not a long-term answer to winter storms. Instead, if data centers, utilities, regulators and grid operators are willing to also consider offsite distributed energy to meet electricity demand, then their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold.

The Conversation

Nikki Luke is a fellow at the Climate and Community Institute. She receives funding from the Alfred P. Sloan Foundation. She previously worked at the U.S. Department of Energy.

Conor Harrison receives funding from the Alfred P. Sloan Foundation and has previously received funding from the U.S. National Science Foundation.

ref. Data centers told to pitch in as storms and cold weather boost power demand – https://theconversation.com/data-centers-told-to-pitch-in-as-storms-and-cold-weather-boost-power-demand-274604

Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games

Source: The Conversation – USA (2) – By Steven R. Fassnacht, Professor of Snow Hydrology, Colorado State University

Italy’s Predazzo Ski Jumping Stadium, which is hosting events for the 2026 Winter Olympics, needed snowmaking machines for the Italian National Championship Open on Dec. 23, 2025. Mattia Ozbot/Getty Images

Watching the Winter Olympics is an adrenaline rush as athletes fly down snow-covered ski slopes, luge tracks and over the ice at breakneck speeds and with grace.

When the first Olympic Winter Games were held in Chamonix, France, in 1924, all 16 events took place outdoors. The athletes relied on natural snow for ski runs and freezing temperatures for ice rinks.

Two skaters on ice outside with mountains in the background. They are posing as if gliding together.
Sonja Henie, left, and Gilles Grafstrom at the Olympic Winter Games in Chamonix, France, in 1924.
The Associated Press

Nearly a century later, in 2022, the world watched skiers race down runs of 100% human-made snow near Beijing. Luge tracks and ski jumps have their own refrigeration, and four of the original events are now held indoors: figure skaters, speed skaters, curlers and hockey teams all compete in climate-controlled buildings.

Innovation made the 2022 Winter Games possible in Beijing. Ahead of the 2026 Winter Olympics in northern Italy, where snowfall was below average for the start of the season, officials had large lakes built near major venues to provide enough water for snowmaking. But snowmaking can go only so far in a warming climate.

As global temperatures rise, what will the Winter Games look like in another century? Will they be possible, even with innovations?

Former host cities that would be too warm

The average daytime temperature of Winter Games host cities in February has risen steadily since those first events in Chamonix, from 33 degrees Fahrenheit (0.6 Celsius) in the 1920s-1950s to 46 F (7.8 C) in the early 21st century.

In a recent study, scientists looked at the venues of 19 past Winter Olympics to see how each might hold up under future climate change.

A cross-country skier falls in front of another during a race. The second skier has his mouth open as if shouting.
Human-made snow was used to augment trails at the Sochi Games in Russia in 2014. Some athletes complained that it made the trails icier and more dangerous.
AP Photo/Dmitry Lovetsky

They found that by midcentury, four former host cities – Chamonix; Sochi, Russia; Grenoble, France; and Garmisch-Partenkirchen, Germany – would no longer have a reliable climate for hosting the Games, even under the United Nations’ best-case scenario for climate change, which assumes the world quickly cuts its greenhouse gas emissions. If the world continues burning fossil fuels at high rates, Squaw Valley, California, and Vancouver, British Columbia, would join that list.

By the 2080s, the scientists found, the climates in 12 of 22 former venues would be too unreliable to host the Winter Olympics’ outdoor events; among them were Turin, Italy; Nagano, Japan; and Innsbruck, Austria.

In 2026, there are five weeks between the Winter Olympics and the Paralympics, which last through mid-March. Host countries are responsible for both events, and some venues may increasingly find it difficult to have enough snow on the ground, even with snowmaking capabilities, as snow seasons shorten.

Ideal snowmaking conditions today require a dewpoint temperature – a single measure combining coldness and humidity – of around 28 F (-2 C) or less. The more moisture in the air, the colder it must be for snow and ice to stay frozen, which affects snow on ski slopes and ice on bobsled, skeleton and luge tracks.
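Because dewpoint folds temperature and humidity into one number, it can be estimated from both. Here is a minimal sketch using the standard Magnus approximation (the coefficients are widely used meteorological constants; the threshold comparison is an illustration of the article's ~28 F figure, not an official snowmaking standard):

```python
import math

def dewpoint_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate the dewpoint in Celsius using the Magnus formula."""
    a, b = 17.625, 243.04  # widely used Magnus coefficients
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

THRESHOLD_C = -2.0  # roughly the article's 28 F snowmaking threshold

# Same air temperature, different humidity: drier air gives a lower
# dewpoint, so snowmaking conditions improve as humidity drops.
for rh in (90, 50):
    td = dewpoint_c(0.0, rh)  # air at 0 C (32 F)
    status = "OK" if td <= THRESHOLD_C else "marginal"
    print(f"RH {rh}%: dewpoint {td:.1f} C -> {status} for snowmaking")
```

At freezing air temperature, humid air (90% relative humidity) leaves the dewpoint just above the threshold, while drier air (50%) pushes it well below – which is why a more humid future climate squeezes snowmaking even when it is cold.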

Stark white lines etched on a swath of brown mountains delineate ski routes and bobsled course.
A satellite view clearly shows the absence of natural snow during the 2022 Winter Olympics. Beijing’s bid to host the Winter Games had explained how extensively it would rely on snowmaking.
Joshua Stevens/NASA Earth Observatory
A gondola passes by with dark ground below and white ski slopes behind it.
The finish area of the Alpine ski venue at the 2022 Winter Olympics was white because of machine-made snow.
AP Photo/Robert F. Bukaty

As Colorado snow and sustainability scientists and also avid skiers, we’ve been watching the developments and studying the climate impact on the mountains and winter sports we love.

Conditions vary by location and year to year

The Earth’s climate will be warmer overall in the coming decades. Warmer air can mean more winter rain, particularly at lower elevations. Around the globe, snow has been covering less area. Low snowfall and warm temperatures made the start of the 2025-26 winter season particularly poor for Colorado’s ski resorts.

However, local changes vary. For example, in northern Colorado, the amount of snow has decreased since the 1970s, but the decline has mostly been at higher elevations.

Several machines pump out sprays of snow across a slope.
Snow cannons spray machine-made snow on a ski slope ahead of the 2026 Winter Olympics.
Mattia Ozbot/Getty Images

A future climate may also be more humid, which affects snowmaking and could affect bobsled, luge and skeleton tracks.

Of the 16 Winter Games sports today, half are affected by temperature and snow: Alpine skiing, biathlon, cross-country skiing, freestyle skiing, Nordic combined, ski jumping, ski mountaineering and snowboarding. And three are affected by temperature and humidity: bobsled, luge and skeleton.

Technology also changes

Developments in technology have helped the Winter Games adapt to some changes over the past century.

Hockey moved indoors, followed by skating. Luge and bobsled tracks were refrigerated in the 1960s. The Lake Placid Winter Games in 1980 in New York used snowmaking to augment natural snow on the ski slopes.

Today, indoor skiing facilities make skiing possible year-round. Ski Dubai, open since 2005, has five ski runs on a hill the height of a 25-story building inside a resort attached to a shopping mall.

Resorts are also using snowfarming to collect and store snow. The method is not new, but due to decreased snowfall and increased problems with snowmaking, more ski resorts are keeping leftover snow to be prepared for the next winter.

Two workers pack snow on an indoor ski slope with a sloped ceiling overhead.
Dubai has an indoor ski slope with multiple runs and a chairlift, all part of a shopping mall complex.
AP Photo/Jon Gambrell

But making snow and keeping it cold requires energy and water – and both become issues in a warming world. Water is becoming scarcer in some areas. And energy, if it means more fossil fuel use, further contributes to climate change.

The International Olympic Committee recognizes that the future climate will have a big impact on the Olympics, both winter and summer. It also recognizes the importance of ensuring that the adaptations are sustainable.

The Winter Olympics could become limited to more northerly locations, like Calgary, Alberta, or be pushed to higher elevations.

Summer Games are feeling climate pressure, too

The Summer Games also face challenges. Hot temperatures and high humidity can make competing in the summer difficult, but these sports have more flexibility than winter sports.

For example, changing the timing of typical summer events to another season can help alleviate excessive temperatures. The 2022 World Cup, normally a summer event, was held in November so Qatar could host it.

What makes adaptation more difficult for the Winter Games is the necessity of snow or ice for all of the events.

A snowboarder with 'USA' on her gloves puts her arms out for balance on a run.
Climate change threatens the ideal environments for snowboarders, like U.S. Olympian Hailey Langland, competing here during the women’s snowboard big air final in Beijing in 2022.
AP Photo/Jae C. Hong

Future depends on responses to climate change

In uncertain times, the Olympics offer a way for the world to come together.

People are thrilled by the athletic feats, like Jean-Claude Killy winning all three Alpine skiing events in 1968, and stories of perseverance, like the 1988 Jamaican bobsled team competing beyond all expectations.

The Winter Games’ outdoor sports may look very different in the future. How different will depend heavily on how countries respond to climate change.

This updates an article originally published Feb. 19, 2022, with the 2026 Winter Games.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games – https://theconversation.com/climate-change-threatens-the-winter-olympics-future-and-even-snowmaking-has-limits-for-saving-the-games-274800

Clergy protests against ICE turned to a classic – and powerful – American playlist

Source: The Conversation – USA (3) – By David W. Stowe, Professor of Religious Studies, Michigan State University

Clergy and community leaders demonstrate outside Minneapolis-St. Paul International Airport on Jan. 23, 2026, amid a surge by federal immigration agents. Brandon Bell/Getty Images

On Jan. 28, 2026, Bruce Springsteen released “Streets of Minneapolis,” a hard-hitting protest against the immigration enforcement surge in the city, including the killings of Renee Good and Alex Pretti. The song is all over social media, and the official video has already been streamed more than 5 million times. It’s hard to remember a time when a major artist has released a song in the midst of a specific political crisis.

Yet some of the most powerful music coming out of Minneapolis is of a much older vintage. Hundreds of clergy from around the country converged on the city in late January to take part in faith-based protests. Many were arrested while blocking a road near the airport. And they have been singing easily recognizable religious songs used during the Civil Rights Movement of the 1950s and ’60s, like “Amazing Grace,” “We Shall Overcome” and “This Little Light of Mine.”

I have been studying the politics of music and religion for more than 25 years, and I wrote about songs I called “secular spirituals” in my 2004 book, “How Sweet the Sound: Music in the Spiritual Lives of Americans.” Sometimes called “freedom songs,” they were galvanizing more than 60 years ago, and are still in use today.

But why these older songs, and why do they usually come out of the church? There have been many protest movements since the mid-20th century, and they have all produced new music. The freedom songs, though, have a unique staying power in American culture – partly because of their historical associations and partly because of the songs themselves.

‘We Shall Overcome’ was one of several songs at the 1963 March on Washington.

Stronger together

Some of protest music’s power has to do with singing itself. Making music in a group creates a tangible sense of community and collective purpose. Singing is a physical activity; it comes out of our core and helps foster solidarity with fellow singers.

Young activists working in the Deep South during the most violent years of the Civil Rights Movement spoke of the courage that came from singing freedom songs like “We Shall Overcome” in moments of physical danger. In addition to helping quell fear, the songs were unnerving to authorities trying to maintain segregation. “If you have to sing, do you have to sing so loud?” one activist recalled an armed deputy saying.

And when locked up for days in a foul jail, there wasn’t much else to do but sing. When a Birmingham, Alabama, police commissioner released young demonstrators he’d arrested, they recalled him complaining that their singing “made him sick.”

Test of time

Sometimes I ask students if they can think of more recent protest songs that occupy the same place as the freedom songs of the 1960s. There are some well-known candidates: Bob Marley’s “Get Up, Stand Up,” Green Day’s “American Idiot” and Public Enemy’s “Fight the Power,” to name a few. The Black Lives Matter movement alone helped produce several notable songs, including Beyoncé’s “Freedom,” Kendrick Lamar’s “Alright” and Childish Gambino’s “This Is America.”

But the older religious songs have advantages for on-the-ground protests. They have been around for a long time, meaning that more people have had more chances to learn them. Protesters typically don’t struggle to learn or remember the tune. As iconic church songs that have crossed over into secular spirituals, they were written to be memorable and singable, crowd-tested for at least a couple of generations. They are easily adaptable, so protesters can craft new verses for their cause – as when civil rights activists added “We are not afraid” to the lyrics of “We Shall Overcome.”

A black-and-white photo shows a row of seated women inside a van or small space clapping as they sing.
Protesters sing at a civil rights demonstration in New York in 1963.
Bettmann Archive/Getty Images

And freedom songs link the current protesters to one of the best-known – and by some measures, most successful – protest movements of the past century. They create bonds of solidarity not just among those singing them in Minneapolis, but with protesters and activists of generations past.

These religious songs are associated with nonviolence, an important value in a citizen movement protesting violence committed by federal law enforcement. And for many activists, including the clergy who poured into Minneapolis, religious values are central to their willingness to stand up for citizens targeted by ICE.

Deep roots

The best-known secular spirituals actually predate the Civil Rights Movement. “We Shall Overcome” first appeared in written form in 1900 as “I’ll Overcome Some Day,” by the Methodist minister Charles Tindley, though the words and tunes are different. It was sung by striking Black tobacco workers in South Carolina in 1945 and made its way to the Highlander Folk School in Tennessee, an integrated training center for labor organizers and social justice activists.

It then came to the attention of iconic folk singer Pete Seeger, who changed some words and gave it wide exposure. “We Shall Overcome” has been sung everywhere from the 1963 March on Washington and anti-apartheid rallies in South Africa to South Korea, Lebanon and Northern Ireland.

“Amazing Grace” has an even longer history, dating back to a hymn written by John Newton, an 18th-century ship captain in the slave trade who later became an Anglican clergyman and penned an essay against slavery. Pioneering American gospel singer Mahalia Jackson recorded it in 1947 and sang it regularly during the 1960s.

Mahalia Jackson sings the Gospel hymn ‘How I Got Over’ at the March on Washington.

Firmly rooted in Protestant Christian theology, the song crossed over into a more secular audience through a 1970 cover version by folk singer Judy Collins, which reached No. 15 on the Billboard charts. During Mississippi Freedom Summer of 1964, an initiative to register Black voters, Collins heard the legendary organizer Fannie Lou Hamer singing “Amazing Grace,” a song she remembered from her Methodist childhood.

Opera star Jessye Norman sang it at Nelson Mandela’s 70th birthday tribute in London, and bagpipers played it at a 2002 interfaith service near Ground Zero to commemorate victims of 9/11.

‘This little light’

Another gospel song used in protests against ICE – “This little light of mine, I’m gonna let it shine” – has similarly murky historical origins and also passed through the Highlander Folk School into the Civil Rights Movement.

It expresses the impulse to be seen and heard, standing up for human rights and contributing to a movement much larger than each individual. But it could also mean letting a light shine on the truth – for example, demonstrators’ phones documenting what happened in the two killings in Minneapolis, contradicting some officials’ claims.

Like the Civil Rights Movement, the protests in Minneapolis involve protecting people of color from violence – as well as, more broadly, protecting immigrants’ and refugees’ legal right to due process. A big difference is that in the 1950s and 1960s, the federal government sometimes intervened to protect people subjected to violence by states and localities. Now, many Minnesotans are trying to protect people in their communities from agents of the federal government.

The Conversation

David W. Stowe does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Clergy protests against ICE turned to a classic – and powerful – American playlist – https://theconversation.com/clergy-protests-against-ice-turned-to-a-classic-and-powerful-american-playlist-274585