Building with air – how nature’s hole-filled blueprints shape manufacturing

Source: The Conversation – USA – By Anne Schmitz, Associate Professor of Engineering, University of Wisconsin-Stout

Engineers use structures found in nature – like the honeycomb – to create lightweight, sturdy materials. Matthew T. Rader, CC BY-NC-SA

If you break open a chicken bone, you won’t find a solid mass of white material inside. Instead, you will see a complex, spongelike network of tiny struts and pillars, and a lot of empty space.

It looks fragile, yet that internal structure allows a bird’s wing to withstand high winds while remaining light enough for flight. Nature rarely builds with solid blocks. Instead, it builds with clever, porous patterns to maximize strength while minimizing weight.

A cross-section view of bone, showing large, roughly circular holes in a white material.
Cross-section of the bone of a bird’s skull: Holes keep the material light enough that the bird can fly, but it’s still sturdy.
Steve Gschmeissner/Science Photo Library via Getty Images

Human engineers have always envied this efficiency. You can see it in the hexagonal perfection of a honeycomb, which uses the least amount of wax to store the most honey, and in the internal spiraling architecture of seashells that resist crushing pressures.

For centuries, however, manufacturing limitations meant engineers couldn’t easily copy these natural designs. Traditional manufacturing has usually been subtractive, meaning it starts with a heavy block of metal that is carved down, or formative, which entails pouring liquid plastic into a mold. Neither method can easily create complex, spongelike interiors hidden inside a solid shell.

If engineers wanted to make a part stronger, they generally had to make it thicker and heavier. This approach is often inefficient, wastes material and results in heavier products that require more energy to transport.

I am a mechanical engineer and associate professor at the University of Wisconsin-Stout, where I research the intersection of advanced manufacturing and biology. For several years, my work has focused on using additive manufacturing to create materials that, like a bird’s wing, are both incredibly light and capable of handling intense physical stress. While these “holey” designs have existed in nature for millions of years, it is only recently that 3D printing has made it possible for us to replicate them in the lab.

The invisible architecture

That paradigm changed as additive manufacturing, commonly known as 3D printing, matured from a niche prototyping tool into a robust industrial force. While the technology was first patented in the 1980s, it truly took off over the past decade as it became capable of producing end-use parts for high-stakes industries like aerospace and health care.

A 3D printer printing out an object filled with holes.
3D printing makes it far easier to manufacture lightweight, hole-filled materials.
kynny/iStock via Getty Images

Instead of cutting away material, printers build objects layer by layer, depositing plastic or metal powder only where it’s needed, guided by a digital file. This technology unlocked a new frontier in materials science focused on mesostructures.

A mesostructure represents the in-between scale. It is not the microscopic atomic makeup of the material, nor is it the macroscopic overall shape of the object, like a whole shoe. It is the internal architecture, including the engineered pattern of air and material hidden inside.

It’s the difference between a solid brick and the intricate iron latticework of the Eiffel Tower. Both are strong, but one uses vastly less material to achieve that strength because of how the empty space is arranged.

From the lab to your closet

While the concept of using additive manufacturing to create parts that take advantage of mesostructures started in research labs around the year 2000, consumers are now seeing these bio-inspired designs in everyday products.

The footwear industry is a prime example. If you look closely at the soles of certain high-end running shoes, you won’t see a solid block of foam. Instead, you will see a complex, weblike lattice structure that looks suspiciously like the inside of a bird bone. This printed design mimics the springiness and weight distribution found in natural porous structures, offering tuned performance that solid foam cannot match.

Engineers use the same principle to improve safety gear. Modern bike helmets and football helmet liners are beginning to replace traditional foam padding with 3D-printed lattices. These tiny, repeating jungle-gym structures are designed to crumple and rebound, absorbing impact energy more efficiently than solid materials, much like how the porous bone inside your own skull protects your brain.

Testing the limits

In my research, I look for the rules nature uses to build strong objects.

For example, seashells are tough because they are built like a brick wall, with hard mineral blocks held together by a thin layer of stretchy glue. This pattern allows the hard bricks to slide past each other instead of snapping when put under pressure. The shell absorbs energy and stops cracks from spreading, which makes the final structure much tougher than a solid piece of the same material.

I use advanced computer models to crush thousands of virtual designs to see exactly when and how they fail. I have even used neural networks, a type of artificial intelligence, to find the best patterns for absorbing energy.

My studies have shown that a wavy design can be very effective, especially when we fine-tune the thickness of the lines and the number of turns in the pattern. By finding these perfect combinations, we can design products that fail gradually and safely – much like the crumple zone on the front of a car.
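
To give a sense of how this kind of search works in practice, here is a minimal sketch in Python. The “crush simulation” below is a made-up toy function standing in for a real finite-element model, and the design variables and their ranges – strut thickness and number of turns – are illustrative assumptions, not values from the actual studies:

```python
# A toy surrogate-model search for energy-absorbing lattice designs.
# simulate_crush() is a stand-in for a real crush simulation (e.g., a
# finite-element model); its formula and parameter ranges are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(seed=0)

def simulate_crush(thickness_mm, n_turns):
    # Placeholder physics: absorbed energy peaks at intermediate values
    # of both design variables, plus simulated measurement noise.
    return (np.sin(3.0 * thickness_mm) * np.cos(0.5 * n_turns)
            + 0.05 * rng.standard_normal(np.shape(thickness_mm)))

# "Crush" 500 virtual designs to build a training set.
X = rng.uniform(low=[0.1, 2.0], high=[1.0, 12.0], size=(500, 2))
y = simulate_crush(X[:, 0], X[:, 1])

# Train a small neural network to mimic the simulator, then scan a dense
# grid of candidate designs -- far cheaper than simulating each one.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, y)
grid = np.array([(t, n) for t in np.linspace(0.1, 1.0, 60)
                        for n in np.linspace(2.0, 12.0, 60)])
best_t, best_n = grid[np.argmax(surrogate.predict(grid))]
print(f"Predicted best design: thickness {best_t:.2f} mm, {best_n:.1f} turns")
```

Real workflows replace the toy function with batches of physics simulations and validate the surrogate’s favorite designs by printing and crushing them.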

By understanding the mechanics of these structures, engineers can tailor them for specific jobs, making one area of a product stiff and another area flexible within a single continuous printed part.

The sustainable future

Beyond performance, mimicking nature’s less-is-more approach is a significant win for sustainability. By “printing air” into the internal structure of a product, manufacturers can use significantly less raw material while maintaining the necessary strength.

As industrial 3D printing becomes faster and cheaper, manufacturing will move further away from the solid-block era and closer to the elegant efficiency of the biological world. Nature has spent millions of years perfecting these blueprints through evolution – and engineers are finally learning how to read them.

The Conversation

Anne Schmitz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Building with air – how nature’s hole-filled blueprints shape manufacturing – https://theconversation.com/building-with-air-how-natures-hole-filled-blueprints-shape-manufacturing-270640

Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism

Source: The Conversation – USA – By Nicholas Jacobs, Goldfarb Family Distinguished Chair in American Government, Colby College; Institute for Humane Studies

Protesters against Immigration and Customs Enforcement march through Minneapolis, Minn., on Jan. 25, 2026. Roberto Schmidt/AFP via Getty Images

An unusually large majority of Americans agree that the recent scenes of Immigration and Customs Enforcement operations in Minneapolis are disturbing.

Federal immigration agents have deployed with weapons and tactics more commonly associated with military operations than with civilian law enforcement. The federal government has sidelined state and local officials, and it has cut them out of investigations into whether state and local law has been violated.

It’s understandable to look at what’s happening and reach a familiar conclusion: This looks like a slide into authoritarianism.

There is no question that the threat of democratic backsliding is real. President Donald Trump has long treated federal authority not as a shared constitutional set of rules and obligations but as a personal instrument of control.

In my research on the presidency and state power, including my latest book with Sidney Milkis, “Subverting the Republic,” I have argued that the Trump administration has systematically weakened the norms and practices that once constrained executive power – often by turning federalism itself into a weapon of national administrative power.

But there is another possibility worth taking seriously, one that cuts against Americans’ instincts at moments like this. What if what America is seeing is not institutional collapse but institutional friction: the system doing what it was designed to do, even if it looks ugly when it does it?

For many Americans, federalism is little more than a civics term – something about states’ rights or decentralization.

In practice, however, federalism functions less as a clean division of authority and more as a system for managing conflict among multiple governments with overlapping jurisdiction. Federalism does not block national authority. It ensures that national decisions are subject to challenge, delay and revision by other levels of government.

Dividing up authority

At its core, federalism works through a small number of institutional mechanics – concrete ways of keeping authority divided, exposed and contestable. Minneapolis shows each of them in action.

First, there’s what’s called “jurisdictional overlap.”

State, local and federal authorities all claim the right to govern the same people and places. In Minneapolis, that overlap is unavoidable: Federal immigration agents, state law enforcement, city officials and county prosecutors all assert authority over the same streets, residents and incidents. And they disagree sharply about how that authority should be exercised.

Second, there’s institutional rivalry.

Because authority is divided, no single level of government can fully monopolize legitimacy. And that creates tension. That rivalry is visible in the refusal of state and local officials across the country to simply defer to federal enforcement.

Instead, governors, mayors and attorneys general have turned to courts, demanded access to evidence and challenged efforts to exclude them from investigations. That’s evident in Minneapolis and also in states that have witnessed the administration’s deployment of National Guard troops against the will of their Democratic governors.

It’s easy to imagine a world where state and local prosecutors would not have to jump through so many procedural hoops to get access to evidence in the deaths of people within their jurisdiction. But consider the alternative.

If state and local officials could seek evidence only with federal consent – the absence of federalism – or if local institutions had no standing to contest how national power is exercised there, federal authority would operate not just forcefully but without meaningful political constraint.

Protesters fight with law enforcement as tear gas fills the air.
Protesters clash with law enforcement after a federal agent shot and killed a man on Jan. 24, 2026, in Minneapolis, Minn.
Arthur Maiorella/Anadolu via Getty Images

Third, confrontation is local and place-specific.

Federalism pushes conflict into the open. Power struggles become visible, noisy and politically costly. What is easy to miss is why this matters.

Federalism was necessary at the time of the Constitution’s creation because Americans did not share a single political identity. They could not decide whether they were members of one big community or many small communities.

In maintaining their state governments and creating a new federal government, they chose to be both at the same time. And although American politics nationalized to a remarkable degree over the 20th century, federal authority is still exercised in concrete places. Federal authority still must contend with communities that have civic identities and whose moral expectations may differ sharply from those assumed by national actors.

In Minneapolis it has collided with a political community that does not experience federal immigration enforcement as ordinary law enforcement.

The chaos of federalism

Federalism is not designed to keep things calm. It is designed to keep power unsettled – so that authority cannot move smoothly, silently or all at once.

By dividing responsibility and encouraging overlap, federalism ensures that power has to push, explain and defend itself at every step.

“A little chaos,” the scholar Daniel Elazar has said, “is a good thing!”

As chaos goes, though, federalism is more often credited for Trump’s ascent. He won the presidency through the Electoral College – a federalist institution that allocates power by state rather than by national popular vote, rewarding geographically concentrated support even without a national majority.

Partisan redistricting, which takes place in the states, further amplifies that advantage by insulating Republicans in Congress from electoral backlash. And decentralized election administration – in which local officials control voter registration, ballot access and certification – can produce vulnerabilities that Trump has exploited in contesting state certification processes and pressuring local election officials after close losses.

Forceful but accountable

It’s also helpful to understand how Minneapolis differs from the best-known instances of aggressive federal power imposed on unwilling states: the civil rights era.

Hundreds of students protest the arrival of a Black student to their school.
Hundreds of Ole Miss students call for continued segregation on Sept. 20, 1962, as James Meredith prepares to become the first Black man to attend the university.
AP Photo

Then, too, national authority was asserted forcefully. Federal marshals escorted the Black student James Meredith into the University of Mississippi in 1962 over the objections of state officials and local crowds. In Little Rock in 1957, President Dwight D. Eisenhower federalized the Arkansas National Guard and sent in U.S. Army troops after Gov. Orval Faubus attempted to block the racial integration of Central High School.

Violence accompanied these interventions. Riots broke out in Oxford, Mississippi. Protesters and bystanders were killed in clashes with police and federal authorities in Birmingham and Selma, Alabama.

What mattered during the civil rights era was not widespread agreement at the outset – nationwide resistance to integration was fierce and sustained. Rather, it was the way federal authority was exercised through existing constitutional channels.

Presidents acted through courts, statutes and recognizable chains of command. State resistance triggered formal responses. Federal power was forceful, but it remained legible, bounded and institutionally accountable.

Those interventions eventually gained public acceptance. But in that process, federalism was tarnished by its association with Southern racism and recast as an obstacle to progress rather than the institutional framework through which progress was contested and enforced.

After the civil rights era, many Americans came to assume that national power would normally be aligned with progressive moral aims – and that when it was, federalism was a problem to be overcome.

Minneapolis exposes the fragility of that assumption. Federalism does not distinguish between good and bad causes. It does not certify power because history is “on the right side.” It simply keeps power contestable.

When national authority is exercised without broad moral agreement, federalism does not stop it. It only prevents it from settling quietly.

Why talk about federalism now, at a time of widespread public indignation?

Because in the long arc of federalism’s development, it has routinely proven to be the last point in our constitutional system where power runs into opposition. And when authority no longer encounters rival institutions and politically independent officials, authoritarianism stops being an abstraction.

The Conversation

Nicholas Jacobs does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism – https://theconversation.com/federal-power-meets-local-resistance-in-minneapolis-a-case-study-in-how-federalism-staves-off-authoritarianism-274685

The Supreme Court may soon diminish Black political power, undoing generations of gains

Source: The Conversation – USA – By Robert D. Bland, Assistant Professor of History and Africana Studies, University of Tennessee

U.S. Rep. Cleo Fields, a Democrat who represents portions of central Louisiana in the House, could lose his seat if the Supreme Court invalidates Louisiana’s congressional map. AP Photo/Gerald Herbert

Back in 2013, the Supreme Court tossed out a key provision of the Voting Rights Act regarding federal oversight of elections. It appears poised to abolish another pillar of the law.

In a case known as Louisiana v. Callais, the court appears ready to rule against Louisiana and its Black voters. In doing so, the court may well abolish Section 2 of the Voting Rights Act, a provision that prohibits any discriminatory voting practice or election rule that results in less opportunity for political clout for minority groups.

The dismantling of Section 2 would open the floodgates for widespread vote dilution by allowing primarily Southern state legislatures to redraw political districts, weakening the voting power of racial minorities.

The case was brought by a group of Louisiana citizens who declared that the federal mandate under Section 2 to draw a second majority-Black district violated the equal protection clause of the 14th Amendment and thus served as an unconstitutional act of racial gerrymandering.

There would be considerable historical irony if the court decides to use the 14th Amendment to provide the legal cover for reversing a generation of Black political progress in the South. Initially designed to enshrine federal civil rights protections for freed people facing a battery of discriminatory “Black Codes” in the postbellum South, the 14th Amendment’s equal protection clause has been the foundation of the nation’s modern rights-based legal order, ensuring that all U.S. citizens are treated fairly and preventing the government from engaging in explicit discrimination.

The cornerstone of the nation’s “second founding,” the Reconstruction-era amendments to the Constitution, including the 14th Amendment, created the first cohort of Black elected officials.

I am a historian who studies race and memory during the Civil War era. As I highlight in my new book “Requiem for Reconstruction,” the struggle over the nation’s second founding not only highlights how generational political progress can be reversed but also provides a lens into the specific historical origins of racial gerrymandering in the United States.

Without understanding this history – and the forces that unraveled Reconstruction’s initial promise of greater racial justice – we cannot fully comprehend the roots of those forces that are reshaping our contemporary political landscape in a way that I believe subverts the true intentions of the Constitution.

The long history of gerrymandering

Political gerrymandering, or shaping political boundaries to benefit a particular party, has been considered constitutional since the nation’s 18th-century founding, but racial gerrymandering is a practice with roots in the post-Civil War era.

Going beyond the routine redrawing of district lines after each decennial census, late 19th-century Democratic state legislatures turned that cartographic practice into a tool for creating a litany of so-called Black districts across the postbellum South.

The nation’s first wave of racial gerrymandering emerged as a response to the political gains Southern Black voters made during the administration of President Ulysses S. Grant in the 1870s. Georgia, Alabama, Florida, Mississippi, North Carolina and Louisiana all elected Black congressmen during that decade. During the 42nd Congress, which met from 1871 to 1873, South Carolina sent Black men to the House from three of its four districts.

A group portrait depicts the first Black senator and a half-dozen Black representatives.
The first Black senator and representatives were elected in the 1870s, as shown in this historic print.
Library of Congress

Initially, the white Democrats who ruled the South responded to the rise of Black political power by crafting racist narratives that insinuated that the emergence of Black voters and Black officeholders was a corruption of the proper political order. These attacks often provided a larger cultural pretext for the campaigns of extralegal political violence that terrorized Black voters in the South, assassinated political leaders, and marred the integrity of several of the region’s major elections.

Election changes

Following these pogroms during the 1870s, southern legislatures began seeking legal remedies to make permanent the counterrevolution of “Redemption,” which sought to undo Reconstruction’s advancement of political equality. A generation before the Jim Crow legal order of segregation and discrimination was established, southern political leaders began to disfranchise Black voters through racial gerrymandering.

These newly created Black districts gained notoriety for their cartographic absurdity. In Mississippi, a shoestring-shaped district was created to snake and swerve alongside the state’s famous river. North Carolina created the “Black Second” to concentrate its African American voters in a single district. Alabama’s “Black Fourth” did similar work, leaving African American voters only one possible district in which they could affect the outcome in the state’s central Black Belt.

South Carolina’s “Black Seventh” was perhaps the most notorious of these acts of Reconstruction-era gerrymandering. The district “sliced through county lines and ducked around Charleston back alleys” – anticipating the current trend of sophisticated, computer-targeted political redistricting.

Possessing 30,000 more voters than the next largest congressional district in the state, South Carolina’s Seventh District radically transformed the state’s political landscape by making it impossible for the state’s Black majority to exercise any influence on national politics outside that single racially gerrymandered district.

A map showing South Carolina's congressional districts in the 1880s.
South Carolina’s House map was gerrymandered in 1882 to minimize Black representation, heavily concentrating Black voters in the 7th District.
Library of Congress, Geography and Map Division

Although federal courts during the late 19th century remained painfully silent on the constitutionality of these antidemocratic measures, contemporary observers saw these redistricting efforts as more than a simple act of seeking partisan advantage.

“It was the high-water mark of political ingenuity coupled with rascality, and [it] merits its appellation,” observed one Black congressman who represented South Carolina’s 7th District.

Racial gerrymandering in recent times

The political gains of the Civil Rights Movement of the 1950s and 1960s, sometimes called the “Second Reconstruction,” were made tangible by the 1965 Voting Rights Act. The law revived the postbellum 15th Amendment, which prevented states from creating voting restrictions based on race. That amendment had been made a dead letter by Jim Crow state legislatures and an acquiescent Supreme Court.

In contrast to the post-Civil War struggle, the Second Reconstruction had the firm support of the federal courts. The Supreme Court affirmed the principle of “one person, one vote” in its 1962 Baker v. Carr and 1964 Reynolds v. Sims decisions – upending the Solid South’s political landscape, long dominated by sparsely populated Democratic districts controlled by rural elites.

The Voting Rights Act gave the federal government oversight over any changes in voting policy that might affect historically marginalized groups. Since passage of the 1965 law and its subsequent revisions, racial gerrymandering has largely served the purpose of creating districts that preserve and amplify the political representation of historically marginalized groups.

This generational work may soon be undone by the current Supreme Court. The court, which heard oral arguments in the Louisiana case in October 2025, will release its decision by the end of June 2026.

The Conversation

Robert D. Bland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Supreme Court may soon diminish Black political power, undoing generations of gains – https://theconversation.com/the-supreme-court-may-soon-diminish-black-political-power-undoing-generations-of-gains-274179

Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk

Source: The Conversation – USA (3) – By Michael I Goran, Professor of Pediatrics and Vice Chair for Research, University of Southern California

Consuming less highly processed foods and sugary drinks and more whole grains can meaningfully improve your health. fizkes/iStock via Getty Images Plus

The Dietary Guidelines for Americans aim to translate the most up-to-date nutrition science into practical advice for the public as well as to guide federal policy for programs such as school lunches.

But the newest version of the guidelines, released on Jan. 7, 2026, seems to be spurring more confusion than clarity about what people should be eating.

I’ve been studying nutrition and chronic disease for over 35 years, and in 2020 I wrote “Sugarproof,” a book about reducing consumption of added sugars to improve health. I served as a scientific adviser for the new guidelines.

I chose to participate in this process, despite its accelerated and sometimes controversial nature, for two reasons. First, I wanted to help ensure the review was conducted with scientific rigor. And second, federal health officials prioritized examining areas where the evidence has become especially strong – particularly food processing, added sugars and sugary beverages – areas that closely align with my research.

My role, along with colleagues, was to review and synthesize that evidence and help clarify where the science is strongest and most consistent.

The latest dietary guidelines, published on Jan. 7, 2026, have received mixed reviews from nutrition experts.

What’s different in the new dietary guidelines?

The dietary guidelines, first published in 1980, are updated every five years. The newest version differs from the previous versions in a few key ways.

For one thing, the new report is shorter, at nine pages rather than 400. It offers simpler advice directly to the public, whereas previous guidelines were more directed at policymakers and nutrition experts.

Also, the new guidelines reflect an important paradigm shift in defining a healthy diet. For the past half-century, dietary advice has been shaped by a focus on general dietary patterns and targets for individual nutrients, such as protein, fat and carbohydrate. The new guidelines instead emphasize overall diet quality.

Some health and nutrition experts have criticized specific aspects of the guidelines, such as how the current administration developed them, or how they address saturated fat, beef, dairy, protein and alcohol intake. These points have dominated the public discourse. But while some of them are valid, they risk overshadowing the strongest, least controversial and most actionable conclusions from the scientific evidence.

What we found in our scientific assessment was that just a few straightforward changes to your diet – specifically, reducing highly processed foods and sugary drinks, and increasing whole grains – can meaningfully improve your health.

What the evidence actually shows

My research assistants and I evaluated the conclusions of studies on consuming sugar, highly processed foods and whole grains, and assessed how well they were conducted and how likely they were to be biased. We graded the overall quality of the findings as low, moderate or high based on standardized criteria such as their consistency and plausibility.

We found moderate- to high-quality evidence that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.

Similarly, we found moderately solid evidence that people who drink more sugar-sweetened beverages have a higher risk of obesity and Type 2 diabetes, as well as quite conclusive evidence that children who drink fruit juice have a higher risk of obesity. And people who consume more beverages containing artificial sweeteners have a higher risk of death from any cause and of Alzheimer’s disease, based on moderately good evidence.

Whole grains, on the other hand, have a protective effect on health. We found high-quality evidence that people who eat more whole grains have a lower risk of cardiovascular disease and death from any cause. People who consume more dietary fiber, which is abundant in whole grains, have a lower risk of Type 2 diabetes and death from any cause, based on moderate-quality research.

According to the research we evaluated, it’s these aspects – too many highly processed foods and sweetened beverages, and too few whole grain foods – that are significantly contributing to the epidemic of chronic diseases such as obesity, Type 2 diabetes and heart disease in this country – and not protein, beef or dairy intake.

Different types of food on rustic wooden table
Evidence suggests that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.
fcafotodigital/E+ via Getty Images

From scientific evidence to guidelines

Our report was the first one to recommend that the guidelines explicitly mention decreasing consumption of highly processed foods. Overall, though, research on the negative health effects of sugar and processed foods and the beneficial effects of whole grains has been building for many years and has been noted in previous reports.

On the other hand, research on how strongly protein, red meat, saturated fat and dairy are linked with chronic disease risk is much less conclusive. Yet the 2025 guidelines encourage increasing consumption of those foods – a change from previous versions.

The inverted pyramid imagery used to represent the 2025 guidelines also emphasizes protein – specifically, meat and dairy – by putting these foods in a highly prominent spot in the top left corner of the image. Whole grains sit at the very bottom, and, except for milk, beverages are not represented.

Scientific advisers were not involved in designing the image.

Making small changes that can improve your health

An important point we encountered repeatedly in reviewing the research was that even small dietary changes could meaningfully lower people’s chronic disease risks.

For example, consuming just 10% fewer calories per day from highly processed foods could lower the risk of diabetes by 14%, according to one of the lead studies we relied on for the evidence review. Another study showed that eating one less serving of highly processed foods per day lowers the risk of heart disease by 4%.

You can achieve that simply by switching from a highly processed packaged bread to one with fewer ingredients or replacing one fast-food meal per week with a simple home-cooked meal. Or, switch your preferred brands of daily staples such as tomato sauce, yogurt, salad dressing, crackers and nut butter to ones that have fewer ingredients like added sugars, sweeteners, emulsifiers and preservatives.

Cutting down on sugary beverages – for example, soda, sweet teas, juices and energy drinks – has an equally dramatic effect. Simply drinking the equivalent of one can less per day lowers the risk of diabetes by 26% and the risk of heart disease by 14%.

And eating just one additional serving of whole grains per day – say, replacing packaged bread with whole grain bread – results in an 18% lower risk of diabetes and a 13% lower risk of death from all causes combined.

How to adopt ‘kitchen processing’

Another way to make these improvements is to take basic elements of food processing back from manufacturers and return them to your own kitchen – what I call “kitchen processing.” Humans have always processed food by chopping, cooking, fermenting, drying or freezing. The problem with highly processed foods isn’t just the industrial processing that transforms the chemical structure of natural ingredients, but also the chemicals added to improve taste and shelf life.

Kitchen processing, though, can instead be optimized for health and for your household’s flavor preferences – and you can easily do it without cooking from scratch. Here are some simple examples:

  • Instead of flavored yogurts, buy plain yogurt and add your favorite fruit or some homemade simple fruit compote.

  • Instead of sugary or diet beverages, use a squeeze of citrus or even a splash of juice to flavor plain sparkling water.

  • Start with a plain whole grain breakfast cereal and add your own favorite source of fiber and/or fruit.

  • Instead of packaged “energy bars,” make your own preferred mixture of nuts, seeds and dried fruit.

  • Instead of bottled salad dressing, make a simple one at home with olive oil, vinegar or lemon juice, a dab of mustard and other flavorings of choice, such as garlic, herbs, or honey.

You can adapt this way of thinking to the foods you eat most often by making similar types of swaps. They may seem small, but they will build over time and have an outsized effect on your health.

The Conversation

Michael I Goran receives funding from the National Institutes of Health and the Dr Robert C and Veronica Atkins Foundation. He is a scientific advisor to Eat Real (non-profit promoting better school meals) and has previously served as a scientific advisor to Bobbi (infant formula) and Begin Health (infant probiotics).

ref. Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk – https://theconversation.com/confused-by-the-new-dietary-guidelines-focus-on-these-simple-evidence-based-shifts-to-lower-your-chronic-disease-risk-273701

Private credit rating agencies shape Africa’s access to debt. Better oversight is needed

Source: The Conversation – Africa – By Daniel Cash, Senior Fellow, United Nations University; Aston University

Africa’s development finance challenge has reached a critical point. Mounting debt pressure is squeezing fiscal space. And essential needs in infrastructure, health and education remain unmet. The continent’s governments urgently need affordable access to international capital markets. Yet many continue to face borrowing costs that make development finance unviable.

Sovereign credit ratings – the assessments that determine how financial markets price a country’s risk – play a central role in this dynamic. These judgements about a government’s ability and willingness to repay debt are made by just three main agencies – S&P Global, Moody’s and Fitch. The grades they assign, ranging from investment grade to speculative or default, directly influence the interest rates governments pay when they borrow.

Within this system, the stakes for African economies are extremely high. Borrowing costs rise sharply once countries fall below investment grade. And when debt service consumes large shares of budgets, less remains for schools, hospitals or climate adaptation. Many institutional investors also operate under mandates restricting them to investment-grade bonds.

Read more: Africa’s development banks are being undermined: the continent will pay the price

Countries rated below this threshold are excluded from large pools of capital. In practice, this means credit ratings shape not only the cost of borrowing but whether borrowing is possible at all.

I am a researcher who has examined how sovereign credit ratings operate within the international financial system. And I’ve followed debates about their role in development finance. Much of the criticism directed at the agencies has focused on: their distance from the countries they assess; the suitability of some analytical approaches; and the challenges of applying standardised models across different economic contexts.

Less attention has been paid to the position ratings now occupy within the global financial architecture. Credit rating agencies are private companies that assess the likelihood that governments and firms will repay their debts. They sell these assessments to investors, banks and financial institutions, rather than working for governments or international organisations. But their assessments have become embedded in regulation, investment mandates and policy processes in ways that shape public outcomes.

This has given ratings a governance-like influence over access to finance, borrowing costs and fiscal space. In practice, ratings help determine how expensive it is for governments to borrow. That, in turn, determines how much room they have to spend on public priorities like health, education and infrastructure. Yet credit rating agencies were not created to play this role. They emerged as private firms in the early 1900s to provide information to investors. The frameworks for coordinating and overseeing their wider public impact developed gradually and unevenly, long after the agencies themselves were established.

The question isn’t whether ratings should be replaced. Rather, it’s how this influence is understood and managed.

Beyond the bias versus capacity debate

Discussions about Africa’s sovereign ratings often focus on two explanations. One is that African economies are systemically underrated, with critics pointing to rapid downgrades and assessments that appear harsher than those applied to comparable countries elsewhere.

Factors often cited include the location of analytical teams in advanced economies, limited exposure to domestic policy processes in the global south, and incentive structures shaped by closer engagement with regulators and market actors in major financial centres.

The other explanation emphasises macroeconomic fundamentals, the basic economic conditions that shape a government’s ability to service debt, such as growth prospects, export earnings, institutional strength and fiscal buffers. When these are weaker or more volatile, borrowing costs tend to be more sensitive to global shocks.

Both perspectives have merit. Yet neither fully explains a persistent pattern: governments often undertake significant reforms, sometimes at high political and social costs, but changes in ratings can lag well behind those efforts. During that period, borrowing costs remain high and market access constrained. It is this gap between reform and recognition that points to a deeper structural issue in how credit ratings operate within the global financial system.

Design by default

Credit ratings began as a commercial information service for investors. Over several decades, from the 1970s to the 2000s, they became embedded in financial regulation. United States regulators first incorporated ratings into capital rules in 1975 as benchmarks for determining risk charges. The European Union followed in the late 1980s and 1990s. Key international bodies followed.

This process was incremental, not the result of deliberate public design. Ratings were adopted because they were available, standardised and widely recognised. It’s argued that private sector reliance on ratings typically followed their incorporation into public regulation. But in fact markets relied informally on credit rating assessments long before regulators formalised their use.

By the late 1990s, ratings had become deeply woven into how financial markets function. The result was that formal regulatory reliance increased until ratings became essential for distinguishing creditworthiness. This, some have argued, may have encouraged reliance on ratings at the expense of independent risk assessment.

Today, sovereign credit ratings influence which countries can access development finance, at what cost, and on what terms. They shape the fiscal options available to governments, and therefore the policy space for pursuing development goals.

Yet ratings agencies remain private firms, operating under commercial incentives. They developed outside the multilateral system and were not originally designed for a governance role. The power they wield is real. But the mechanisms for coordinating that power over public development objectives emerged later and separately. This created a governance function without dedicated coordination or oversight structures.

Designing the missing layer

African countries have initiated reform efforts to address their development finance challenge. For instance, some work with credit rating agencies to improve data quality and strengthen institutions. But these efforts don’t always translate into timely changes in assessments.

Part of the difficulty lies in shared information constraints. The link between fiscal policy actions and market perception remains complex. Governments need ways to credibly signal reform. Agencies need reliable mechanisms to verify change. And investors need confidence that assessments reflect current conditions rather than outdated assumptions.

Read more: Africa’s new credit rating agency could change the rules of the game. Here’s how

While greater transparency can help, public debt data remains fragmented across databases and institutions.

A critical missing element in past reform efforts has been coordination infrastructure: dialogue platforms and credibility mechanisms that allow complex information to flow reliably between governments, agencies, investors and multilateral institutions.

Evidence suggests that external validation can help reforms gain market recognition. In practice, this points to the need for more structured interaction between governments, rating agencies, development partners and regional credit rating agencies around data, policy commitments and reform trajectories.

One option is the Financing for Development process. This is a multistakeholder forum coordinated by the United Nations that negotiates how the global financial system should support sustainable development. Addressing how credit ratings function within the financial system is a natural extension of this process.

Building a coordination layer need not mean replacing ratings or shifting them into the public sector. It means creating the transparency, dialogue and accountability structures that help any system function more effectively.

Recognising this reality helps explain how development finance actually works. As debt pressures rise and climate adaptation costs grow, putting this governance layer in place is now critical to safeguarding development outcomes in Africa.

The Conversation

Daniel Cash is affiliated with UN University Centre for Policy Research.

ref. Private credit rating agencies shape Africa’s access to debt. Better oversight is needed – https://theconversation.com/private-credit-rating-agencies-shape-africas-access-to-debt-better-oversight-is-needed-274858

Data centers told to pitch in as storms and cold weather boost power demand

Source: The Conversation – USA (2) – By Nikki Luke, Assistant Professor of Human Geography, University of Tennessee

During winter storms, physical damage to wires and high demand for heating put pressure on the electrical grid. Brett Carlsen/Getty Images

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow and freezing temperatures, it left more than a million people without power, mostly in the Southeast.

Scrambling to meet higher than average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels.

Energy Secretary Chris Wright agreed and took another step, too. He authorized PJM and ERCOT – the company that manages the Texas power grid – as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators.

The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these facilities power themselves and do not send power back to the grid. But Wright explained that their “industrial diesel generators” could “generate 35 gigawatts of power, or enough electricity to power many millions of homes.”

We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through and recover from winter storms.

A close-up of a rack of electronics.
The electronics in data centers consume large amounts of electricity.
RJ Sangosti/MediaNews Group/The Denver Post via Getty Images

Data centers use enormous quantities of energy

Before Wright’s order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies.

This is a pressing question, because data centers’ power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM’s.

And data centers are expected to need only more power. Estimates vary widely, but the Lawrence Berkeley National Lab anticipates that the share of electricity production in the U.S. used by data centers could spike from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects a peak load growth of 32 gigawatts by 2030 – enough power to supply 30 million new homes, but nearly all going to new data centers. PJM’s job is to coordinate that energy – and figure out how much the public, or others, should pay to supply it.
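
As a rough check on these gigawatts-to-homes comparisons, the arithmetic fits in a few lines of Python. The assumed average household draw of about 1.1 kilowatts (roughly 10,000 kilowatt-hours per year) is an illustrative figure, not one taken from PJM or the Energy Department:

```python
# Back-of-the-envelope: how many homes does a gigawatt of capacity serve?
AVG_HOME_KW = 1.1  # assumed average continuous draw per U.S. home

def homes_served(gigawatts: float) -> float:
    return gigawatts * 1_000_000 / AVG_HOME_KW  # 1 GW = 1,000,000 kW

for gw in (32, 35):
    print(f"{gw} GW -> about {homes_served(gw) / 1e6:.0f} million homes")
# 32 GW -> about 29 million homes, close to PJM's "30 million new homes";
# 35 GW -> about 32 million homes, the "many millions" in Wright's estimate.
```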

The race to build new data centers and find the electricity to power them has sparked enormous public backlash about how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution.

Local ordinances, regulations created by state utility commissions and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time.

But since the 1970s, utilities have encouraged “demand response” programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation.

Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries and electric vehicles, these distributed energy resources can be dispatched as “virtual power plants.”

A different approach

The terms of data center agreements with local governments and utilities often aren’t available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use.

In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts and airline reservation systems.

Yet, data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response. In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide “data center demand response by targeting machine learning workloads,” shifting “non-urgent compute tasks” away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers’ power use off the grid during power shortages.
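
To make the scheduling idea concrete, here is a toy sketch of that kind of workload shifting in Python. The job names, grid-stress forecast and threshold logic are all invented for illustration; real demand-response programs react to signals from utilities and grid operators rather than a hard-coded list:

```python
# Toy demand-response scheduler: run urgent work immediately and defer
# flexible compute (e.g., model training) to the least-stressed hour.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    urgent: bool  # urgent jobs, like serving live traffic, cannot wait

# Invented forecast of grid stress over six hours (0 = slack, 1 = emergency).
stress_forecast = [0.9, 0.8, 0.6, 0.3, 0.2, 0.4]

jobs = [
    Job("live-inference", urgent=True),
    Job("model-training", urgent=False),
    Job("batch-analytics", urgent=False),
]

# Pick the hour with the lowest forecast stress for deferrable work.
best_hour = min(range(len(stress_forecast)), key=stress_forecast.__getitem__)

for job in jobs:
    start = 0 if job.urgent else best_hour
    print(f"{job.name}: start in hour {start} (stress {stress_forecast[start]})")
```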

An aerial view of metal equipment and wires with a city skyline in the background.
Large amounts of power move through parts of the U.S. electricity grid.
Joe Raedle/Getty Images

Flexibility for the future

One study has found that if data centers committed to using power flexibly, an additional 100 gigawatts of capacity – the amount that would power around 70 million households – could be added to the grid without adding new generation and transmission.

In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their power needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility, as grid operators can tap into batteries, shift thermostats or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.
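
A minimal sketch, under invented numbers, shows how a virtual power plant works: many small home batteries, dispatched together as one unit, shave a forecast demand peak. The fleet size, per-home output and demand curve below are all illustrative assumptions:

```python
# Virtual power plant sketch: aggregate home batteries and dispatch them
# whenever forecast demand exceeds the supply available without the fleet.
N_HOMES = 20_000
KW_PER_HOME = 5.0
fleet_mw = N_HOMES * KW_PER_HOME / 1000.0  # 100 MW of aggregated capacity

demand_mw = [980, 1020, 1160, 1100, 990]  # invented hourly demand forecast
SUPPLY_MW = 1050                          # supply available without the fleet

for hour, demand in enumerate(demand_mw):
    shortfall = demand - SUPPLY_MW
    if shortfall > 0:
        dispatched = min(fleet_mw, shortfall)
        print(f"Hour {hour}: dispatch {dispatched:.0f} MW from home batteries")
    else:
        print(f"Hour {hour}: fleet idle ({-shortfall:.0f} MW of headroom)")
```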

Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms.

Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn’t enough electricity for their homes but because their power lines were down.

The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that would not actually exist.

Onsite diesel generators are an emergency option for large users such as data centers to reduce strain on the grid. Yet they are not a long-term solution to winter storms. Instead, if data centers, utilities, regulators and grid operators are willing to also consider offsite distributed energy to meet electricity demand, then their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold.

The Conversation

Nikki Luke is a fellow at the Climate and Community Institute. She receives funding from the Alfred P. Sloan Foundation. She previously worked at the U.S. Department of Energy.

Conor Harrison receives funding from Alfred P. Sloan Foundation and has previously received funding from the U.S. National Science Foundation.

ref. Data centers told to pitch in as storms and cold weather boost power demand – https://theconversation.com/data-centers-told-to-pitch-in-as-storms-and-cold-weather-boost-power-demand-274604

Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games

Source: The Conversation – USA (2) – By Steven R. Fassnacht, Professor of Snow Hydrology, Colorado State University

Italy’s Predazzo Ski Jumping Stadium, which is hosting events for the 2026 Winter Olympics, needed snowmaking machines for the Italian National Championship Open on Dec. 23, 2025. Mattia Ozbot/Getty Images

Watching the Winter Olympics is an adrenaline rush as athletes fly down snow-covered ski slopes and luge tracks and race across the ice at breakneck speeds and with grace.

When the first Olympic Winter Games were held in Chamonix, France, in 1924, all 16 events took place outdoors. The athletes relied on natural snow for ski runs and freezing temperatures for ice rinks.

Two skaters on ice outside with mountains in the background. They are posing as if gliding together.
Sonja Henie, left, and Gilles Grafstrom at the Olympic Winter Games in Chamonix, France, in 1924.
The Associated Press

Nearly a century later, in 2022, the world watched skiers race down runs of 100% human-made snow near Beijing. Luge tracks and ski jumps have their own refrigeration, and four of the original events are now held indoors: figure skaters, speed skaters, curlers and hockey teams all compete in climate-controlled buildings.

Innovation made the 2022 Winter Games possible in Beijing. Ahead of the 2026 Winter Olympics in northern Italy, where snowfall was below average for the start of the season, officials had large lakes built near major venues to provide enough water for snowmaking. But snowmaking can go only so far in a warming climate.

As global temperatures rise, what will the Winter Games look like in another century? Will they be possible, even with innovations?

Former host cities that would be too warm

The average daytime temperature of Winter Games host cities in February has increased steadily since those first events in Chamonix, rising from 33 degrees Fahrenheit (0.4 Celsius) in the 1920s-1950s to 46 F (7.8 C) in the early 21st century.

In a recent study, scientists looked at the venues of 19 past Winter Olympics to see how each might hold up under future climate change.

A cross-country skier falls in front of another during a race. The second skier has his mouth open as if shouting.
Human-made snow was used to augment trails at the Sochi Games in Russia in 2014. Some athletes complained that it made the trails icier and more dangerous.
AP Photo/Dmitry Lovetsky

They found that by midcentury, four former host cities – Chamonix; Sochi, Russia; Grenoble, France; and Garmisch-Partenkirchen, Germany – would no longer have a reliable climate for hosting the Games, even under the United Nations’ best-case scenario for climate change, which assumes the world quickly cuts its greenhouse gas emissions. If the world continues burning fossil fuels at high rates, Squaw Valley, California, and Vancouver, British Columbia, would join the list of venues without a reliable climate for hosting the Winter Games.

By the 2080s, the scientists found, the climates in 12 of 22 former venues would be too unreliable to host the Winter Olympics’ outdoor events; among them were Turin, Italy; Nagano, Japan; and Innsbruck, Austria.

In 2026, there are five weeks between the start of the Winter Olympics and the end of the Paralympics, which last through mid-March. Host countries are responsible for both events, and some venues may increasingly find it difficult to have enough snow on the ground, even with snowmaking capabilities, as snow seasons shorten.

Ideal snowmaking conditions today require a dewpoint temperature – a combined measure of coldness and humidity – of around 28 F (-2 C) or less. When the air holds more moisture, snow and ice begin to degrade at colder temperatures, which affects snow on ski slopes and ice on bobsled, skeleton and luge tracks.
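
The interplay of temperature and humidity can be made concrete with the Magnus approximation, a standard formula for estimating dewpoint. In the sketch below, the roughly 28 F (-2 C) threshold comes from the paragraph above, while the sample weather conditions are invented; note the approximation is least accurate well below freezing:

```python
# Estimate dewpoint from air temperature and relative humidity using the
# Magnus approximation, then compare it to the ~28 F (-2 C) snowmaking
# threshold cited above. Sample conditions are invented for illustration.
import math

def dewpoint_c(temp_c: float, rh_pct: float) -> float:
    a, b = 17.27, 237.7  # common Magnus coefficients
    gamma = a * temp_c / (b + temp_c) + math.log(rh_pct / 100.0)
    return b * gamma / (a - gamma)

SNOWMAKING_MAX_C = -2.0  # about 28 F

for temp_c, rh in [(-4.0, 40.0), (1.0, 60.0), (1.0, 90.0)]:
    dp = dewpoint_c(temp_c, rh)
    verdict = "snowmaking OK" if dp <= SNOWMAKING_MAX_C else "too warm/humid"
    print(f"{temp_c:+.0f} C at {rh:.0f}% RH -> dewpoint {dp:.1f} C ({verdict})")
```

In dry air, in other words, snowmaking can remain feasible even slightly above freezing, while humid air can rule it out at the same air temperature.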

Stark white lines etched on a swath of brown mountains delineate ski routes and bobsled course.
A satellite view clearly shows the absence of natural snow during the 2022 Winter Olympics. Beijing’s bid to host the Winter Games had explained how extensively it would rely on snowmaking.
Joshua Stevens/NASA Earth Observatory
A gondola passes by with dark ground below and white ski slopes behind it.
The finish area of the Alpine ski venue at the 2022 Winter Olympics was white because of machine-made snow.
AP Photo/Robert F. Bukaty

As Colorado snow and sustainability scientists – and avid skiers – we’ve been watching these developments and studying climate change’s impact on the mountains and winter sports we love.

Conditions vary by location and year to year

The Earth’s climate will be warmer overall in the coming decades. Warmer air can mean more winter rain, particularly at lower elevations. Around the globe, snow has been covering less area. Low snowfall and warm temperatures made the start of the 2025-26 winter season particularly poor for Colorado’s ski resorts.

However, local changes vary. For example, in northern Colorado, the amount of snow has decreased since the 1970s, but the decline has mostly been at higher elevations.

Several machines pump out sprays of snow across a slope.
Snow cannons spray machine-made snow on a ski slope ahead of the 2026 Winter Olympics.
Mattia Ozbot/Getty Images

A future climate may also be more humid, which affects snowmaking and could affect bobsled, luge and skeleton tracks.

Of the 16 Winter Games sports today, half are affected by temperature and snow: Alpine skiing, biathlon, cross-country skiing, freestyle skiing, Nordic combined, ski jumping, ski mountaineering and snowboarding. And three are affected by temperature and humidity: bobsled, luge and skeleton.

Technology also changes

Developments in technology have helped the Winter Games adapt to some changes over the past century.

Hockey moved indoors, followed by skating. Luge and bobsled tracks were refrigerated in the 1960s. The Lake Placid Winter Games in 1980 in New York used snowmaking to augment natural snow on the ski slopes.

Today, indoor skiing facilities make skiing possible year-round. Ski Dubai, open since 2005, has five ski runs on a hill the height of a 25-story building inside a resort attached to a shopping mall.

Resorts are also using snowfarming to collect and store snow. The method is not new, but with snowfall declining and snowmaking becoming harder, more ski resorts are storing leftover snow so they are prepared for the next winter.

Two workers pack snow on an indoor ski slope with a sloped ceiling overhead.
Dubai has an indoor ski slope with multiple runs and a chairlift, all part of a shopping mall complex.
AP Photo/Jon Gambrell

But making snow and keeping it cold requires energy and water – and both become issues in a warming world. Water is becoming scarcer in some areas. And energy, if it means more fossil fuel use, further contributes to climate change.

The International Olympic Committee recognizes that the future climate will have a big impact on the Olympics, both winter and summer. It also recognizes the importance of ensuring that the adaptations are sustainable.

The Winter Olympics could become limited to more northerly locations, like Calgary, Alberta, or be pushed to higher elevations.

Summer Games are feeling climate pressure, too

The Summer Games also face challenges. Hot temperatures and high humidity can make competing in the summer difficult, but these sports have more flexibility than winter sports.

For example, changing the timing of typical summer events to another season can help alleviate excessive temperatures. The 2022 World Cup, normally a summer event, was held in November so Qatar could host it.

What makes adaptation more difficult for the Winter Games is the necessity of snow or ice for all of the events.

A snowboarder with 'USA' on her gloves puts her arms out for balance on a run.
Climate change threatens the ideal environments for snowboarders, like U.S. Olympian Hailey Langland, competing here during the women’s snowboard big air final in Beijing in 2022.
AP Photo/Jae C. Hong

Future depends on responses to climate change

In uncertain times, the Olympics offer a way for the world to come together.

People are thrilled by the athletic feats, like Jean-Claude Killy winning all three Alpine skiing events in 1968, and stories of perseverance, like the 1988 Jamaican bobsled team competing beyond all expectations.

The Winter Games’ outdoor sports may look very different in the future. How different will depend heavily on how countries respond to climate change.

This updates an article originally published Feb. 19, 2022, with the 2026 Winter Games.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games – https://theconversation.com/climate-change-threatens-the-winter-olympics-future-and-even-snowmaking-has-limits-for-saving-the-games-274800

Clergy protests against ICE turned to a classic – and powerful – American playlist

Source: The Conversation – USA (3) – By David W. Stowe, Professor of Religious Studies, Michigan State University

Clergy and community leaders demonstrate outside Minneapolis-St. Paul International Airport on Jan. 23, 2026, amid a surge by federal immigration agents. Brandon Bell/Getty Images

On Jan. 28, 2026, Bruce Springsteen released “Streets of Minneapolis,” a hard-hitting protest against the immigration enforcement surge in the city, including the killings of Renee Good and Alex Pretti. The song is all over social media, and the official video has already been streamed more than 5 million times. It’s hard to remember another time when a major artist released a song in the midst of a specific political crisis.

Yet some of the most powerful music coming out of Minneapolis is of a much older vintage. Hundreds of clergy from around the country converged on the city in late January to take part in faith-based protests. Many were arrested while blocking a road near the airport. And they have been singing easily recognizable religious songs used during the Civil Rights Movement of the 1950s and ’60s, like “Amazing Grace,” “We Shall Overcome” and “This Little Light of Mine.”

I have been studying the politics of music and religion for more than 25 years, and I wrote about songs I called “secular spirituals” in my 2004 book, “How Sweet the Sound: Music in the Spiritual Lives of Americans.” Sometimes called “freedom songs,” they were galvanizing more than 60 years ago, and are still in use today.

But why these older songs, and why do they usually come out of the church? There have been many protest movements since the mid-20th century, and they have all produced new music. The freedom songs, though, have a unique staying power in American culture – partly because of their historical associations and partly because of the songs themselves.

‘We Shall Overcome’ was one of several songs at the 1963 March on Washington.

Stronger together

Some of protest music’s power has to do with singing itself. Making music in a group creates a tangible sense of community and collective purpose. Singing is a physical activity; it comes out of our core and helps foster solidarity with fellow singers.

Young activists working in the Deep South during the most violent years of the Civil Rights Movement spoke of the courage that came from singing freedom songs like “We Shall Overcome” in moments of physical danger. In addition to helping quell fear, the songs were unnerving to authorities trying to maintain segregation. “If you have to sing, do you have to sing so loud?” one activist recalled an armed deputy saying.

And when locked up for days in a foul jail, there wasn’t much else to do but sing. When a Birmingham, Alabama, police commissioner released young demonstrators he’d arrested, they recalled him complaining that their singing “made him sick.”

Test of time

Sometimes I ask students if they can think of more recent protest songs that occupy the same place as the freedom songs of the 1960s. There are some well-known candidates: Bob Marley’s “Get Up, Stand Up,” Green Day’s “American Idiot” and Public Enemy’s “Fight the Power,” to name a few. The Black Lives Matter movement alone helped produce several notable songs, including Beyoncé’s “Freedom,” Kendrick Lamar’s “Alright” and Childish Gambino’s “This Is America.”

But the older religious songs have advantages for on-the-ground protests. They have been around for a long time, meaning that more people have had more chances to learn them. Protesters typically don’t struggle to learn or remember the tune. As iconic church songs that have crossed over into secular spirituals, they were written to be memorable and singable, crowd-tested for at least a couple of generations. They are easily adaptable, so protesters can craft new verses for their cause – as when civil rights activists added “We are not afraid” to the lyrics of “We Shall Overcome.”

A black-and-white photo shows a row of seated women inside a van or small space clapping as they sing.
Protesters sing at a civil rights demonstration in New York in 1963.
Bettmann Archive/Getty Images

And freedom songs link the current protesters to one of the best-known – and by some measures, most successful – protest movements of the past century. They create bonds of solidarity not just among those singing them in Minneapolis, but with protesters and activists of generations past.

These religious songs are associated with nonviolence, an important value in a citizen movement protesting violence committed by federal law enforcement. And for many activists, including the clergy who poured into Minneapolis, religious values are central to their willingness to stand up for citizens targeted by ICE.

Deep roots

The best-known secular spirituals actually predate the Civil Rights Movement. “We Shall Overcome” first appeared in written form in 1900 as “I’ll Overcome Some Day,” by the Methodist minister Charles Tindley, though that hymn’s words and tune differ from the song sung today. It was sung by striking Black tobacco workers in South Carolina in 1945 and made its way to the Highlander Folk School in Tennessee, an integrated training center for labor organizers and social justice activists.

It then came to the attention of iconic folk singer Pete Seeger, who changed some words and gave it wide exposure. “We Shall Overcome” has been sung everywhere from the 1963 March on Washington and anti-apartheid rallies in South Africa to South Korea, Lebanon and Northern Ireland.

“Amazing Grace” has an even longer history, dating back to a hymn written by John Newton, an 18th-century ship captain in the slave trade who later became an Anglican clergyman and penned an essay against slavery. Pioneering American gospel singer Mahalia Jackson recorded it in 1947 and sang it regularly during the 1960s.

Mahalia Jackson sings the Gospel hymn ‘How I Got Over’ at the March on Washington.

Firmly rooted in Protestant Christian theology, the song crossed over to a more secular audience through a 1970 cover version by folk singer Judy Collins, which reached No. 15 on the Billboard charts. During the Mississippi Freedom Summer of 1964, an initiative to register Black voters, Collins had heard the legendary organizer Fannie Lou Hamer singing “Amazing Grace,” a song Collins remembered from her own Methodist childhood.

Opera star Jessye Norman sang it at Nelson Mandela’s 70th birthday tribute in London, and bagpipers played it at a 2002 interfaith service near Ground Zero to commemorate victims of 9/11.

‘This little light’

Another gospel song used in protests against ICE – “This little light of mine, I’m gonna let it shine” – has similarly murky historical origins and also passed through the Highlander Folk School into the Civil Rights Movement.

It expresses the impulse to be seen and heard, standing up for human rights and contributing to a movement much larger than each individual. But it could also mean letting a light shine on the truth – for example, demonstrators’ phones documenting what happened in the two killings in Minneapolis, contradicting some officials’ claims.

Like the Civil Rights Movement, the protests in Minneapolis involve protecting people of color from violence – as well as, more broadly, protecting immigrants’ and refugees’ legal right to due process. A big difference is that in the 1950s and 1960s, the federal government sometimes intervened to protect people subjected to violence by states and localities. Now, many Minnesotans are trying to protect people in their communities from agents of the federal government.

The Conversation

David W. Stowe does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Clergy protests against ICE turned to a classic – and powerful – American playlist – https://theconversation.com/clergy-protests-against-ice-turned-to-a-classic-and-powerful-american-playlist-274585

A UK climate security report backed by the intelligence services was quietly buried – a pattern we’ve seen many times before

Source: The Conversation – UK – By Marc Hudson, Visiting Fellow, SPRU, University of Sussex Business School, University of Sussex

Last autumn, a UK government report warned that climate-driven ecosystem collapse could lead to food shortages, mass migration, political extremism and even nuclear conflict. The report was never officially launched.

Commissioned by Defra – the Department for Environment Food and Rural Affairs – and informed by intelligence agencies including MI5 and MI6, the briefing assessed how environmental degradation could affect UK national security.

At the last minute the launch was cancelled, reportedly blocked by Number 10. Thanks to pressure from campaigners and a freedom of information request, a 14-page version of the report was snuck out (no launch, not even a press release) on January 22.

That report says: “Critical ecosystems that support major food production areas and impact global climate, water and weather cycles” are already under stress and represent a national security risk. If they failed, the consequences would be severe: water insecurity, sharply reduced crop yields, loss of arable land, fisheries collapse, changes to global weather patterns, release of trapped carbon exacerbating climate change, novel zoonotic disease and loss of pharmaceutical resources.

In plainer terms: the UK would face hunger, thirst, disease and increasingly violent weather.

An unredacted version of the report, seen by the Times, goes further. It warns that the degradation of the Congo rainforest and the drying up of rivers fed by the Himalayas could drive people to flee to Europe (Britain’s large south Asian diaspora would make it “an attractive destination”), leading to “more polarised and populist politics” and putting more pressure on national infrastructure.

The Times describes a “reasonable worst case scenario” in the report, where many ecosystems were “so stressed that they could soon pass the point where they could be protected”. Declining Himalayan water supplies would “almost certainly escalate tensions” between China, India and Pakistan, potentially leading to nuclear conflict. Britain, which imports 40% of its food, would struggle to feed itself, the unredacted report says.

The report isn’t an outlier, and these concerns are not confined to classified briefings. A 2024 report by the University of Exeter and think-tank IPPR warned that cascading climate impacts and tipping points threaten national security – exactly the risk outlined in the Defra report.

River flows through jagged mountains
Melting glaciers in remote mountains ultimately pose a security threat for the UK, say intelligence services.
Hussain Warraich / shutterstock

The government has not publicly explained why the launch was cancelled. In response to the Times article, a Department for Environment, Food and Rural Affairs spokesperson said: “Nature underpins our security, prosperity and resilience, and understanding the threats we face from biodiversity loss is crucial to meeting them head on. The findings of this report will inform the action we take to prepare for the future.”

Perhaps there are mundane reasons to be cautious about a report linked to the intelligence services that warns of global instability. But the absence of any formal briefing or ministerial comment is itself revealing – climate risks appear to be treated differently from other risks to national security. It’s hard to imagine a report warning of national security risks from AI, China or ocean piracy getting the same treatment.

This episode is not even especially unusual, historically. Governments have been receiving warnings about climate change – and downplaying or delaying responses – for decades.

Decades of warnings

In January 1957, the Otago Daily Times reported a speech by New Zealand scientist Athol Rafter under the headline “Polar Ice Caps May Melt With Industrialisation”. And Rafter was merely repeating concerns already circulating internationally, including by a Canadian physicist whose similar warning went around the world in May 1953. Climate change first went viral more than seven decades ago.

By the early 1960s, scientists were holding meetings explicitly focused on the implications of carbon dioxide build-up. In 1965, a report by the US President’s Science Advisory Committee warned that “marked changes in climate, not controllable through local or even national efforts, could occur”.

Senior figures in the UK government were aware of these discussions by the late 1960s, while the very first environment white paper, in May 1970, mentions carbon dioxide build-up as a possible problem.

But the story then was the same as the one we see today. Reports are commissioned, urgent warnings are issued – and action is deferred. When climate change gained renewed momentum in the mid-1980s, following the discovery of the ozone hole and growing awareness of greenhouse gases besides carbon dioxide, the message sharpened: global warming will come quicker and hit harder than expected.

Margaret Thatcher finally acknowledged the threat in a landmark 1988 speech to the Royal Society. But when green groups tried to get her to make specific commitments, they had little success.

Since about 1990, the briefings have barely changed. Act now, or suffer severe consequences later. Those consequences, however, are no longer theoretical.

Why does nothing happen?

Partly, it’s down to inertia. We have built societies in which carbon-intensive systems are locked in. Once you’ve built infrastructure around, say, the private petrol-powered automobile, it’s hard for competitors to offer an alternative. There’s also a mental inertia: it’s hard to let go of assumptions you grew up with in a more stable era.

Secrecy plays a role too. As the Defra report illustrates, uncomfortable assessments are often softened, delayed or buried. And if you do accept the need for action, you are up against the problem of responsibility being fragmented across sectors and institutions, making it hard to know where to aim your efforts. Meanwhile, social movements fighting for climate action find it hard to sustain momentum for more than three years.

Here’s the final irony. Conspiracy theorists and climate deniers insist governments are exaggerating the threat. In reality, the evidence increasingly suggests the opposite. Official assessments tend to lag behind scientific warnings, and the most pessimistic scenarios are often confined to technical or classified documents.

The situation is not better than we are told. It’s actually far worse.


The Conversation

Marc Hudson was employed as a post-doctoral researcher on various industrial decarbonisation projects. He runs a climate histories website called All Our Yesterdays. http://allouryesterdays.info

ref. A UK climate security report backed by the intelligence services was quietly buried – a pattern we’ve seen many times before – https://theconversation.com/a-uk-climate-security-report-backed-by-the-intelligence-services-was-quietly-buried-a-pattern-weve-seen-many-times-before-274325

Why do our joints crack, pop and crunch and should we worry about it?

Source: The Conversation – UK – By Clodagh Toomey, Physiotherapist and Associate Professor, School of Allied Health, University of Limerick

New Africa/Shutterstock

Many of us have noisy joints. Knees crack on the stairs, necks pop when we stretch, and knuckles seem to crack almost on demand. These sounds can be startling and are often blamed on ageing, damage or the looming threat of arthritis.

As a physiotherapist and researcher of chronic joint pain, I am frequently asked whether joint noises are something to worry about. The reassuring answer is that, in most cases, they are not.

One reason joint sounds cause anxiety is that we tend to treat them as a single phenomenon. Clinically, they are not.

The familiar “crack” from knuckles, backs or necks is usually caused by a process called cavitation. Joints are surrounded by a capsule filled with synovial fluid, a thick lubricant that contains dissolved gases such as oxygen, nitrogen and carbon dioxide. When a joint is stretched beyond its usual range, pressure inside the capsule drops. A gas bubble forms and collapses, producing the popping sound.

This is why you cannot crack the same joint repeatedly. It typically takes around 20 minutes for the gas to dissolve back into the fluid.

Other noises are different. Snapping sounds often come from tendons moving over bony structures. Grinding, crunching or creaking noises, known as crepitus, are particularly common in the knees. These are thought to arise from movement between cartilage and bone surfaces and are often felt as well as heard.

Knees are especially prone to crepitus because of how they work. The kneecap sits in a groove at the front of the thigh bone and is guided by muscles above and below it. If those muscles pull unevenly, because of strength imbalances, tightness or foot and hip mechanics, the kneecap can track slightly off centre. This can increase the crunching or grinding sensation.

Noise on its own is rarely a problem. What matters clinically is whether it comes with other symptoms. Pain, swelling, locking of the joint or a noticeable reduction in function are the things that warrant further assessment.

Does cracking joints cause arthritis?

There is no strong evidence that cracking or popping joints causes osteoarthritis.

Research in this area is challenging, as it requires following people over many years and accurately tracking their habits. The studies that do exist, including retrospective and cross-sectional research, have not found a meaningful link between habitual joint cracking and arthritis.

Some studies have explored other outcomes, such as grip strength or joint laxity, which refers to how loose or flexible a joint is and how much it can move beyond its typical range. Findings have been mixed and inconsistent. Overall, there is no convincing evidence that cracking joints causes damage to joint structures, strength or long-term joint health.

Many people report that joint cracking feels satisfying or relieving. This makes sense. Stretching a joint to the point of cavitation can temporarily increase range of motion and reduce muscle tension. There is also a neurological effect: nerve endings are stimulated during the movement, sending a reflex signal to the brain that relaxes the muscles in the area. The audible pop itself can be calming and satisfying, which may turn cracking into a habitual self-soothing mechanism for tension – the kind that annoys your family members and friends.

The key point is that these effects are short lived. Joint cracking does not fix underlying mechanical issues or provide lasting improvements in mobility. If relief only comes from repeated cracking, the underlying cause has not been addressed.

Spinal manipulation

Spinal manipulation, whether performed by physiotherapists, chiropractors or other practitioners, relies on the same cavitation mechanism. There is evidence that it can provide short-term pain relief and reduce muscle tension for some people.

However, it is important to be cautious, particularly with the neck. The cervical spine protects the spinal cord and major blood vessels supplying the brain. Rare but serious complications, including stroke, have been reported following neck manipulation. Anyone considering this type of treatment should ensure it is carried out by a properly trained professional and understand that it targets symptoms rather than underlying causes.

Joint noises do tend to become more common with age. Cartilage changes over time, and muscles and ligaments may lose some of their strength and elasticity. These changes can increase the likelihood of noise during movement.

People who have joint conditions such as knee osteoarthritis and have noisy joints tend to report slightly more pain and reduced function compared to people with osteoarthritis and no crepitus. It may be reassuring to know that there is no difference in tests like walking speed or muscle strength between groups, pointing to a potential psychological impact of noisy knees.

Crucially, noise alone is not a reason to stop being active. Some people reduce their physical activity because they fear they are “wearing out” their joints. In fact, the opposite is true. Movement is essential for joint health. Cartilage relies on regular compression and release to receive nutrients, as it has very limited blood supply.

Exercise is a cornerstone of joint health and is recommended as the first treatment to try in national and international clinical guidelines for conditions such as osteoarthritis. Consistency matters more than the specific type of exercise. The best exercise is the one you will keep doing.

There is no evidence that supplements such as collagen or fish oils reduce joint noise. Large studies show limited effects on pain and function at a population level, although some people report benefits. These supplements are generally safe, but if they do not help, they are unlikely to be worth the cost.

Joint noises are usually harmless. They are worth assessing if they are accompanied by pain, swelling, locking, or reduced function, or if they are limiting your confidence to move. Staying active is one of the best things you can do for your joints, whether they crack, pop, crunch or stay silent.



The Conversation

Clodagh Toomey receives funding from the Health Research Board Ireland. She is affiliated with the non-profit Good Life with osteoArthritis Denmark (GLA:D).

ref. Why do our joints crack, pop and crunch and should we worry about it? – https://theconversation.com/why-do-our-joints-crack-pop-and-crunch-and-should-we-worry-about-it-274161