Universities survived Trump’s 2025 funding freeze, but the money still isn’t flowing to researchers

Source: The Conversation – USA (2) – By Brendan Cantwell, Professor of Higher, Adult, and Lifelong Education, Michigan State University

Columbia University, seen in June 2025, is one of the schools that made a deal with the Trump administration last year in order to avoid losing funding. Spencer Platt/Getty Images

Several prominent universities, including Columbia University and the University of Pennsylvania, made headlines in 2025 in a dizzying back-and-forth with the federal government. The Trump administration cut large amounts of research funding to universities. Some pushed back, and others hatched settlements to get the money restored.

So how have these confrontations between higher education and the White House played out over the past year, now that they have dropped out of the spotlight?

Amy Lieberman, education editor at The Conversation U.S., spoke with Brendan Cantwell, a scholar of higher education at Michigan State University, to understand how the Trump administration is adopting a more subtle tactic to block funding to universities.

Where does Trump’s attempt to withdraw funding from universities stand?

Several universities entered into settlements with the Trump administration in 2025 – including the University of Pennsylvania, Columbia University, Cornell University, Northwestern University and Brown University – to restore research funding the government pulled. We don’t really know how those deals are being enforced. They appear to be working, in the sense that the government has not complained and the schools have received the targeted funding that the government canceled.

In another case, Harvard University never entered into a deal with the Trump administration and instead sued the government in April 2025 to block a US$2.7 billion funding freeze. Federal courts restored Harvard’s funding, but we don’t have a lot of specific knowledge on how this funding was restored. The government appealed this ruling in December 2025.

In October, the administration also proposed an agreement, called the Compact for Academic Excellence in Higher Education, that would provide funding advantages for universities that agreed to change their admissions practices to cap the percentage of international students that they enroll, among other policy shifts.

There was almost universal skepticism and condemnation of this deal among schools, and it fell apart, aside from a few small schools not initially invited that said they would sign on.

A man walks and holds a sign that says 'Harvard thank you for your courage!!'
Cambridge, Mass., resident Casey Wenz stands outside Harvard Yard in April 2025 to express support for Harvard University in its legal battle against the Trump administration.
Sydney Roth/Anadolu via Getty Images

What is your research focused on right now?

I am thinking about how the administration is shifting from making targeted deals with universities and more toward using legislative and rule-making processes to achieve its goals.

These deals with universities in 2025 were really unusual. I think they are going to become less and less effective for the administration as it faces losses in court. Universities have also realized that they can decline a deal with the administration and still prevail.

Now, we are seeing the administration impose its priorities in other ways, in part through President Donald Trump’s 2025 big tax and spending cuts and new rules at the Department of Education. This approach preserves the Trump administration’s ideological preferences but pursues them through more conventional routes.

Are they placing more limits on research funding, or what is the goal?

The Trump administration in 2025 wanted to reduce funding dramatically to the National Institutes of Health, the National Science Foundation – and to NASA, in particular. Congress rejected those requests and instead produced what was essentially a level funding picture for university research.

What isn’t clear is how much of the money appropriated by Congress is going to make its way into new grants for research. Much of the funding that Congress appropriated, so far, has not been released.

We know that in 2025, federal agencies made fewer grants than in past years. The grants the government did make tended to be a bit larger, and winning a grant became more competitive. This approach gives the administration more flexibility in funding the kinds of projects that it prefers.

In my assessment, it seems likely that the government will do the same again this year. The administration may also attempt to withhold a portion of the money that Congress appropriated for scientific research.

Over the course of the year, we are going to see how this plays out. Is the administration just dragging its feet, using whatever administrative levers it has to slow-walk things? Or is it going to attempt to divert research funding to other priorities and spend it in ways that Congress did not intend? We don’t really know. I do know that universities and scientific research organizations are very concerned about this possibility.

If this money doesn’t start to flow, we probably will see legal challenges from universities and scientific organizations.

How long does it take for delayed funding to become evident in research?

The effects are almost immediate and then build over time.

Some of the grants we expected to be awarded in the first two months of the year have not been awarded. In 2025, thousands of grants were canceled and some agencies made up to 25% fewer grants than they had awarded in prior years.

As the year goes on, unless the pace of awards increases, we can expect the total amount of money that goes out to researchers to be even lower than it was in 2025.

This is the bottom line: Congress continues to fund research, but not all of that money is making its way to researchers.

What does it look like as the Trump administration shifts its tactics?

One way the administration seems likely to go after universities is by making it harder for students to qualify for student loans. The tax and spending cuts bill, for example, capped federal student loan borrowing at the graduate level.

This is a more conventional conservative idea: that the availability of student loans has encouraged universities to offer more low-quality programs at the undergraduate and graduate levels that don’t help students. I think these conservative ideas with some mainstream appeal may be the focus of the administration moving forward, in addition to administrative foot-dragging.

Overall, I think we may see fewer of these big, direct confrontations between the Trump administration and universities. The confrontational approach worked in the sense that the administration got some initial concessions from universities, but it is not really clear whether those concessions amounted to a major victory.

The political boundaries of research are also becoming narrower. You can’t do climate research and expect to get federal funding right now.

I think that the federal government is going to continue to restrict money from universities. There is going to be this persistent, progressive shrinking of research funding. But the administration has been either unwilling or unable to impose a sudden collapse of university funding and bring schools to their knees.

The Conversation

Brendan Cantwell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Universities survived Trump’s 2025 funding freeze, but the money still isn’t flowing to researchers – https://theconversation.com/universities-survived-trumps-2025-funding-freeze-but-the-money-still-isnt-flowing-to-researchers-277716

Indie coffee shops are meant to counter corporate behemoths like Starbucks – so why do they all look the same?

Source: The Conversation – USA (2) – By Conrad Kickert, Associate Professor of Architecture, University at Buffalo

Many coffee shops today seem to be aesthetically divorced from time and place. stomy/iStock via Getty Images

Like many young, urban professionals, we run on coffee. We especially enjoy frequenting independently owned cafes that pride themselves on ethically sourced beverages, strong local ties and a hip aesthetic.

They’re the kinds of places that sneer at the homogenization and predictability of Tim Hortons, Second Cup, Dunkin and Starbucks.

But as public space and consumer culture researchers, we began noticing a pattern: While the invention of new, nondairy milks to mix into lattes continues to amaze us, many U.S. coffee shops seemed to share a similar aesthetic.

What was up with all the exposed brick? Why did so many of the baristas look cooler than us, but also so similar to one another? And why did most menus appear on a chalkboard, as if we were still in kindergarten?

Weren’t we supposed to be in one-of-a-kind, authentic settings that make us feel unique and, let’s admit it, slightly elevated?

As it turns out, the visual patterns we noticed had never been backed up by research. So after a quick cortado, we set out to test our hunch that local coffee shops had adopted a uniform aesthetic.

Measuring homogeneity

We asked over 100 American and Canadian young professionals living in cities to share an interior image of their favorite independent coffee shop, describe why they liked the shop’s appearance, and document aspects of its interior design.

They could select these interior design features from a list of 23 common elements that we had identified in a pilot study – brick walls, marble counters, indoor plants, local art, vintage furniture and even the look of the baristas. Respondents could also write down other details they noticed.

The elements that they selected and wrote down showed a fascinating overlap.

Baristas led the pack: Two-thirds of the participants’ favorite local coffee shops had staff with tattoos or piercings. Over half had baristas with beards. Well over half of the respondents noted that their favorite shop had chalkboards, reclaimed wood features, local art, milk foam designs on beverages, local event posters and exposed brick. A large share of the shops had vintage furniture, community message boards and free books available to patrons to read. One-third of the images had indoor plants, trees or greenery.

Barista with a beard and tattooed hands pours boiling water over coffee grounds.
Chances are your favorite local coffee shop has a barista with a beard and tattoos.
Wera Rodsawang/Moment via Getty Images

Next up, we challenged the participants to identify the city where these coffee shops were located.

Using the images provided by the respondents from the initial survey, we asked 158 new and prior participants if they could match the location of the shops depicted in six photographs to Cincinnati, St. Louis or Toronto – cities chosen for their different architectural and aesthetic qualities.

Not a single participant was able to identify the correct city for all the photos.

We gave respondents another chance by showing two pictures of coffee shops, one at a time. This time, the two shops were located in Chicago and San Francisco – again, places that pride themselves on their unique and recognizable design culture. Participants chose from those two correct cities plus three decoy cities. Only 6% successfully located both coffee shops, and nearly 20% immediately gave up.

As one participant conceded: “Honestly, these aesthetics are very transferable now … they were random guesses and they could have been in any of the cities mentioned.”

In other words, independent coffee shops in North America have become so similar aesthetically that their location cannot be picked from a lineup. The purportedly unique and local feel of coffee shops has instead been homogenized into a singular, palatable, North American aesthetic.

Ironically, these shops have narrowed their aesthetics into a de facto brand franchise – exactly like the chain stores that their patrons ostensibly reject.

A young woman with dreadlocks pays for her coffee as a smiling young female barista with short hair holds out a card reader.
Exposed brick, check. Plants, check. Chalkboard, check.
Tara Moore/Digital Vision via Getty Images

Computers and capital

So why is this happening?

New Yorker cultural critic Kyle Chayka has attributed aesthetic homogenization to popular social media platforms like Instagram. He calls it the “tyranny of the algorithm”: Social media algorithms promote the visuals that users are most likely to engage with. This, in turn, causes the same types of visuals to be liked and shared, since users encounter them more often. Because the algorithm sees they’re popular, it continues to promote them, in a self-reinforcing cycle. In turn, coffee shop owners also see these online images and try to replicate them in their own establishments.

Artificial intelligence will likely accelerate the digital homogenization of visual culture, since AI models are trained on massive datasets that feature widely circulated images. Whether it’s popular fashion, architecture or interior design, idiosyncrasies are collapsing into a generic, hegemonic aesthetic – what scholars Roland Meyer and Jacob Birken call “platform realism.”

Finance plays a role as well. With the average cost of starting a new coffee shop between US$80,000 and $300,000, and with only a small share of coffee shops expected to stay open beyond five years, banks are keen to reduce their risk. Many of them will therefore ask aspiring coffee shop owners to opt for cheaper interior design choices that appeal to the broadest customer base.

The consumer also plays a role

But patrons of hip coffee shops may also be to blame.

Decades before the rise of social media, AI and financial risk management, scholars such as Sharon Zukin revealed how young urban professionals paradoxically embrace the homogenization of their environment in their quest for authenticity.

Those exposed brick walls? Zukin already described how Manhattan real estate brokers had marketed them to gentrifying SoHo yuppies in the early 1980s.

Like their predecessors, today’s hipsters, creative professionals and knowledge workers are essentially cultural and aesthetic consumers. Many of them crave visuals – from fashion to architecture – that are different enough to feel cool and authentic, yet safe enough to match their lifestyle and their social status. They want a tasty latte as much as a palatable interior to drink it in.

Businesses and developers are eager to appeal to these upwardly mobile consumers. At the same time, they want to reach the biggest number of customers. So they tend to create repeatable, homogenized environments in what Zukin describes as a “symbolic economy.”

In coffee shops, patrons want more than a good espresso. They want to immerse themselves in a “scene” that matches their lifestyle and aspirations. And the exposed brick and the vintage furniture do just that – even if they’ve been copy-and-pasted in cities, small and large, across the nation.

As we chase authenticity, we may just be finding comfort in carefully curated conformity.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Indie coffee shops are meant to counter corporate behemoths like Starbucks – so why do they all look the same? – https://theconversation.com/indie-coffee-shops-are-meant-to-counter-corporate-behemoths-like-starbucks-so-why-do-they-all-look-the-same-275746

Notions of ‘Christendom’ often miss the mark – medieval Europe’s ideas about faith and power were not so simple

Source: The Conversation – USA (3) – By Brett Whalen, Professor of History, University of North Carolina at Chapel Hill

A painting in Rome’s San Silvestro Chapel depicts Pope Sylvester I and Constantine the Great. Wikimedia Commons

During the National Prayer Breakfast on Feb. 5, 2026, Paula White-Cain, senior adviser to the White House Office of Faith, introduced President Donald Trump as “the greatest champion of faith that we have ever had in the executive branch.” Taking the podium after her, Trump declared, “I’ve done more for religion than any other president.”

Should an earthly leader be promoting a heavenly cause? Some of the Americans who say “yes” – by no means all – are likely sympathetic to the ideas and values of Christian nationalism. A blanket term, Christian nationalism ranges in meaning. Some citizens might see themselves as Christian nationalists simply because they are Christian and patriotic. Others, however, assert that the United States is rightfully a Christian nation that ought to be governed by Christian leaders, ethics and laws.

As a historian, I’m aware that Christian nationalism relies upon a selective and often distorted view of American history. As a historian of the European Middle Ages, in particular, I’m interested in another myth of a shared Christian past that seems to lie beneath the surface of some Christian nationalist claims. That’s the idea of the medieval Christian West, also known as “Christendom”: a time before the modern separation of church and state.

1,000 years

What was Christendom? Similar to Christian nationalism, the term can mean different things to different people.

It generally recalls a long period of time – 1,000 years, give or take – between the “fall” of Rome around 500 C.E. and the beginning of the modern era around 1500, when Christianity dominated European politics, society and culture. The Middle Ages really were an era when kings ruled in Christ’s name, when the popes of Rome commanded obedience from believers around Europe, and when monasteries played a crucial role in the shaping of values and education.

An illustration shows one man in a pointed hat putting a crown on a kneeling man's head, set against a red background.
Pope Leo III crowns Charlemagne as Holy Roman Emperor in 800.
Wikimedia Commons

In recent years, though, I’ve observed puzzling and ahistorical ways that the concept of Christendom has started to appear in certain corners of conservative political thought. That era of Christian dominion is sometimes remembered as a lost age of Christian unity, a time when religion and politics were “properly” aligned.

Such views don’t map neatly onto any partisan position or religious affiliation. The Catholic-inspired website The Josias, for example, a guide “for those who wish to bring their faith into the public square and resist the tides of liberalism, modernism, and ignorance of tradition,” is filled with works by medieval thinkers.

In some conservative Protestant circles, one finds yearnings for the creation of a “new Christendom,” an “American Christendom,” or, as pastor Doug Wilson calls it, “mere Christendom.”

Wilson is the founder of the Communion of Reformed Evangelical Churches – one of which Defense Secretary Pete Hegseth attends. Wilson says that his vision of “mere” Christendom does not entail a return to theocracy but “a network of nations bound together by a formal, public, civic acknowledgment of the Lordship of Jesus Christ, and the fundamental truth of the Apostles’ Creed.”

In his 2023 book “The Boniface Option,” minister Andrew Isker calls for Christians to fight for the creation of “new Christendom.” He also co-authored 2022’s “Christian Nationalism: A Biblical Guide for Taking Dominion and Disciplining Nations.”

From a historical perspective, there are numerous problems with such views of Christendom. For starters, they erase the reality that, while Christian authorities governed Christian-majority kingdoms during the Middle Ages, Europe was also home to Jewish and Muslim communities. They also paper over the fact that medieval Christians themselves never reached a consensus over the proper relationship between worldly and spiritual powers – or, as we might call them today, church and state.

Faith and empire

When I teach about religion and politics, I compare two late ancient thinkers whose works left profound legacies on the medieval world: the first historian of the church, Eusebius of Caesarea, and the immensely influential theologian Saint Augustine.

A brightly colored drawing of a man in a red cloak with a halo around him, set against a blue background.
An illustration of Eusebius of Caesarea in a 17th-century manuscript, created by Armenian artist Mesrop of Khizan.
J. Paul Getty Museum/Wikimedia Commons

Writing in the fourth century, Eusebius celebrated the reign of the first Christian Roman emperor, Constantine, who ruled from 306-337. The story of Constantine’s conversion is famous. As Eusebius told it, the emperor was marching toward Rome during a civil war when he saw a radiant “cross-shaped” vision in the sky, accompanied by the words “by this conquer.” That night, the “Christ of God” appeared to the emperor in a dream and told him to march to war under that sign, which he did, and he was victorious.

From Eusebius’ perspective, there was a lot to celebrate about Constantine’s reign. Constantine ended the persecution of Christians unleashed by his predecessors. Under his direction, imperial money flooded into clerical hands, followed by a wave of church building around the empire. The emperor granted bishops legal privileges and tax exemptions, and he called church councils to resolve disputes over Christian doctrine and organization.

In Eusebius’ eyes, this was all part of the divine plan. As he wrote, God had intended since the beginning for the “two shoots” of the “empire” and the “gospel of Christ” to intertwine, grafted together in harmony. Pagan Rome, Eusebius claimed, had subdued the peoples of the world. Under Constantine, its rule was bringing the “good news” of Christianity to all those conquered nations.

This kind of boosterism for Christian monarchs, hailed as “champions of the faith,” would endure throughout the Middle Ages and beyond. The Byzantine Empire, the Carolingian Empire, the Holy Roman Empire, Christian kingdoms from England to Armenia: Supporters saw their worldly power as representing the heavenly power of Christ, the “King of kings.” This was, in effect, a kind of Christian nationalism before the rise of modern nations.

‘Not of this world’

Yet medieval Christian thinkers also maintained skepticism about the ability of temporal princes to realize God’s kingdom here on Earth.

This is where Augustine, who wrote “The City of God” in the early fifth century, comes into the picture. Augustine was a prolific writer and immensely complicated thinker whose views changed across the course of his lifetime. Similar to Eusebius, he believed that God determined the fate of all empires and kingdoms, whether Christian or not.

A painting of a bearded man in a gold robe looking off to the left as he seems to receive a spiritual vision.
A painting of Saint Augustine by 17th-century artist Philippe de Champaigne.
Los Angeles County Museum of Art via Wikimedia Commons

Augustine supported the right of rulers to wage “just wars” and use force to maintain public order. Still, the bishop of Hippo hit the brakes on unbridled enthusiasm for the divinely appointed role of earthly empires and kingdoms, even if their rulers were Christian.

Living through the aftermath of Rome’s plundering in 410 by the Visigoths, Augustine keenly appreciated the fact that empires come and go. True happiness for Christian princes didn’t come from seeking their own personal ends: winning battles, gaining the most territory, leaving their thrones to their heirs, and conquering their enemies. It came from putting their “power at the service of God’s mercy” and the greater good. “Remove justice,” Augustine asked in “The City of God,” “and what are kingdoms but gangs of criminals on a large scale?”

In Augustine’s view, which profoundly influenced medieval theologians and political thinkers, this world was the transitory “City of Man,” filled with love of self and lust for domination. What really mattered was the eternal “City of God.” There was nothing wrong with Christian kingdoms, empires and nations, he thought, but there was nothing especially blessed about them, either. After all, hadn’t Jesus said in the Gospels, “My kingdom is not of this world”?

There has never been a singular Christian perspective on the relationship between faith, power and political identities. There certainly wasn’t in the world of medieval Christendom. To suggest otherwise is a fantasy that misrepresents the sophistication of Christian political thought during the Middle Ages – and in the present.

The Conversation

Brett Whalen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Notions of ‘Christendom’ often miss the mark – medieval Europe’s ideas about faith and power were not so simple – https://theconversation.com/notions-of-christendom-often-miss-the-mark-medieval-europes-ideas-about-faith-and-power-were-not-so-simple-275285

Bird losses are accelerating across North America, particularly in farming regions where agriculture is most intensive

Source: The Conversation – USA (2) – By François Leroy, Postdoctoral Researcher in Ecology, The Ohio State University

Eastern meadowlark populations across the U.S. grasslands have dropped by about three-quarters since 1970. lwolfartist via Wikimedia Commons, CC BY

Since the 1970s, the U.S. has lost billions of birds. We now know that those losses aren’t just growing – they are accelerating in places with intensive human activity, particularly where agriculture and expanding communities are changing the landscape.

Bird population declines have been closely linked to pollution, use of chemicals and physical changes to their habitats.

But human pressures on nature are not just continuing; they are increasing at an accelerating rate. Indicators of human activity, such as population growth, economic growth and transportation use, rose more rapidly after the 1950s, as did measures of environmental change, from atmospheric carbon dioxide levels to tropical forest loss.

In a new study published in the journal Science, my colleagues and I found that bird populations are responding in the same way: Their declines are speeding up, particularly in regions dominated by intensive agriculture.

It’s not just that there are fewer birds each year. In some places, each year brings larger losses than the one before.

Where bird populations are shrinking faster

Using data from the North American Breeding Bird Survey, we analyzed bird population changes for 261 species across the contiguous U.S. between 1987 and 2021.

We found that, on average, bird numbers declined by about 15% – for every six birds in 1987, there were only five three decades later. Nearly half of the species we examined showed significant population declines, with the strongest declines observed for the common grackle, the European starling and the red-winged blackbird.

A bird with bright red spots on its wings closest to its body takes off from a twig.
The red-winged blackbird showed one of the most pronounced declines, together with one of the strongest accelerations of that decline.
Walter Siegmund via Wikimedia Commons, CC BY

The North American Breeding Bird Survey is one of the longest-running wildlife monitoring programs in the world. Since 1969, trained volunteers have counted birds along thousands of fixed routes across the U.S. and Canada during the breeding season, when birds are reproducing, nesting, laying eggs or raising young.

Because the survey spans decades, a continent and hundreds of species, it provides an unparalleled window into how bird populations are changing over time.

Most studies using this data focus on whether populations are increasing or decreasing. In our study, we asked a different question: Are those trends themselves speeding up or slowing down?

When we examined how the decline of birds evolved over time, a striking pattern emerged.

Maps show greatest losses through the Great Plains and Florida, but fastest acceleration in the Midwest and Northeast.
Maps from a new study show changing bird population sizes and where those losses are accelerating.
François Leroy, Marta A. Jarzyna and Petr Keil, 2026

The losses were strongest in southern parts of the United States – a pattern consistent with previous research that linked bird declines to warm and warming regions. Many species have been found to struggle in hotter temperatures, or they shift their ranges toward cooler climates.

The Midwest, California and parts of the Mid-Atlantic region stood out as areas where bird declines are accelerating. Populations that were already shrinking in the late 1980s are now losing birds more rapidly than they did three decades ago.

These regions share a common feature: intensive agriculture. We measured agricultural intensity using indicators such as cropland area, fertilizer application and pesticide use around survey locations. Areas with higher agricultural intensity were more likely to have accelerating bird declines.

Why agriculture intensity can amplify decline

Modern agriculture transforms landscapes. Large cropland areas replace diverse habitats. Herbicides and pesticides used on farms reduce weeds and insects that many bird species depend on for food. Heavy machinery and reduced habitat diversity can limit nesting opportunities.

We cannot disentangle which agricultural practices are most responsible for the accelerating declines. Fertilizer use, pesticide application and land-use change often occur together. It is likely that multiple pressures interact to affect birds. However, studies have linked higher pesticide use to reductions in bird numbers, both directly through toxicity and indirectly through declines in insect prey. These findings suggest that chemicals may play an important role in amplifying population declines in agricultural regions.

A plane flies lower over a field spraying a liquid from a bar of sprayers.
A crop duster sprays chemicals on an alfalfa field in California in 2023. Pesticides kill the pests that eat crops, but they also take away a food supply for birds.
Bill & Brigitte Clough/Design Pics Editorial/Universal Images Group via Getty Images

We also found that agricultural intensity and temperature change may reinforce each other. Agricultural landscapes often lack shade trees, so they warm more than natural areas, potentially compounding climate-related stress on bird populations.

Why acceleration matters

Accelerating population declines are an early warning sign about birds’ well-being. A steady decline is concerning, but when losses grow larger year after year, it means the situation is getting worse faster.

Monitoring acceleration can help identify emerging hot spots before populations reach low levels, providing an early warning for conservation action.

A bird with a blue tail and iridescent purple feathers.
Grackles eat a lot of insects, from beetles to grasshoppers, and help control pest populations in agricultural fields. Their numbers are also falling in North America.
Rhododendrites via Wikimedia, CC BY-SA

Birds are more than just familiar backyard species. They help control insect pests, disperse seeds and regulate ecosystems. Because they are well monitored and sensitive to environmental change, they often provide an early indication of broader ecological shifts.

Nearly 40% of U.S. land is used for agriculture. How these landscapes are managed will shape the future for many birds, and farmers are thus at the forefront of addressing the biodiversity crisis. It’s also important to remember that agricultural workers themselves are the most exposed to the same chemicals that affect ecosystems, and a growing body of research has examined the health implications of pesticide exposure. Balancing food production, environmental sustainability and human health is a shared challenge.

Biodiversity responses to land management changes can occur quickly. So when habitats are restored or chemical pressures are reduced, birds and insects can return within years.

That potential for relatively rapid ecological recovery makes agricultural landscapes especially important. Our findings suggest that looking not only at how much biodiversity is changing, but also at how much those changes are speeding up, may offer a clearer picture of the pressures facing wildlife today.

The Conversation

François Leroy received funding from the National Science Foundation (OISE-2330423) and from the European Research Council (Grant No. 101044740). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.

ref. Bird losses are accelerating across North America, particularly in farming regions where agriculture is most intensive – https://theconversation.com/bird-losses-are-accelerating-across-north-america-particularly-in-farming-regions-where-agriculture-is-most-intensive-276740

Why do mountaintops stay snowy?

Source: The Conversation – USA (2) – By Allie Mazurek, Engagement Climatologist and Researcher, Colorado Climate Center, Colorado State University

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Why do we see snow on mountaintops that are closer to the Sun but not near the ground? – Ms. Drews’ third grade class, Beechview Elementary School, Farmington Hills, Michigan


There’s not much better than a bluebird day in the mountains – a crisp, sunny day accompanied by a fresh blanket of snow. But why doesn’t the Sun quickly melt all that high-altitude snow away?

It all boils down to our atmosphere, which is what I research as a scientist in Colorado. Let’s dive in!

Our atmosphere: Earth’s armor

Earth’s atmosphere begins right at its surface and extends to outer space, and it is filled with a mixture of many different gases. Gases in the atmosphere include the oxygen we breathe and the water vapor that makes it rain and snow. They are essential to supporting life on Earth in several ways.

One of the most important jobs those gases have is to protect us from harmful things in space, including our closest star: the Sun.

The Sun’s radiation provides heat to our planet, but too much of it can be a problem. If you’ve ever gotten a sunburn, then you’re already familiar with this idea.

Illustration shows how the greenhouse effect warms the Earth by trapping some gases close to the surface.
Gases in the atmosphere warm the Earth by trapping heat close to the planet’s surface. Too much of those greenhouse gases can cause global temperatures to rise beyond normal and stay high.
Climate Central, CC BY

Some of our atmospheric gases limit the amount of radiation from the Sun that can reach the Earth’s surface by absorbing some of it, which prevents temperatures from being way too warm in the daytime. At night, certain atmospheric gases also trap some of the heat that the Earth’s surface releases as it cools down, protecting us from unsurvivable cold.

The way the atmosphere regulates Earth’s temperatures is known as the greenhouse effect. You’ll often hear this term used alongside climate change or global warming. That is because global warming is caused by an enhanced greenhouse effect: As people burn fossil fuels in cars and factories, the amount of greenhouse gases in the atmosphere increases. These extra gases allow the Earth’s atmosphere to trap more heat, causing an increase in temperatures.

The atmosphere likes to stay grounded

If you were to compare the Earth’s atmosphere along a Caribbean beach to that surrounding the top of Mount Everest, it would look quite different.

That is because as you go higher up in the atmosphere, it gets “thinner,” meaning that there are fewer gas molecules present at higher elevations and altitudes.

There are more atmospheric gas molecules present at lower altitudes, closer to sea level. But as you go higher in the mountains, atmospheric pressure and the density of air molecules decrease. It’s why climbers on Mount Everest need oxygen tanks.

Why? Blame it on gravity.

In the same way that gravity keeps people and objects from flying away to outer space, Earth’s gravitational force pulls on the gases in our atmosphere, trying to keep them as close to Earth as possible.

As a result, there are fewer gas molecules in the atmosphere as you go higher up in altitude, making the air thinner, or less dense. Humans can sometimes experience altitude sickness at high elevations because there is less oxygen present in the air as a result of this phenomenon.

Closer to the Sun, but still cold and snowy?

Our high-elevation mountains protrude into higher altitudes of the atmosphere, where the air has fewer gas molecules. While this thinner air allows more of the Sun’s radiation to pass through than the denser air at sea level, it also tends to be colder, for two reasons:

First, collisions between gas molecules generate heat, and if you have fewer molecules available to run into each other, that heat generation is lower.

Second, a thinner atmosphere is less effective at maintaining heat because there are fewer molecules available to trap and hold on to heat.

Colder temperatures can create more opportunities for precipitation to fall in the form of snow rather than rain, which is why some mountains can be so snowy.

And if the ground is habitually covered in snow, as is the case in many mountain ranges, it can be even easier to maintain cooler temperatures. That’s because snow-covered surfaces are very reflective, making them highly effective at causing the Sun’s incoming rays to bounce back toward space instead of getting absorbed by the ground.

So if you visit the mountains to have fun in the snow, be sure to pack your jacket, but don’t forget that sunscreen too.

The Conversation

Allie Mazurek does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why do mountaintops stay snowy? – https://theconversation.com/why-do-mountaintops-stay-snowy-277560

Fat cells burn energy to make heat – making them the next frontier of weight loss therapies

Source: The Conversation – USA – By Claudio Villanueva, Professor of Integrative Biology and Physiology, University of California, Los Angeles

There is more to fat than meets the eye. Thom Leach/Science Photo Library via Getty Images

Over the past few years, a new class of medications has transformed the treatment of obesity. Drugs like Ozempic, Wegovy and Mounjaro work primarily by reducing appetite, helping people eat less and feel full sooner. Their success has demonstrated something important: Body weight is biologically regulated, and targeting the right biological pathways can lead to meaningful weight loss that can help transform lives.

But appetite is only half of the equation. Your weight reflects a balance between the calories you consume through your diet and the energy you expend through movement, exercise and maintaining basic cellular function. While recent therapies have focused on controlling energy intake, scientists are increasingly turning their attention to the other side of the ledger: the tissues that burn energy.

At the center of this conversation is an organ most people misunderstand: fat. For decades, fat – also known as adipose tissue – was thought of as passive storage: a biological pantry for excess calories. Scientists now know this view is incomplete.

Fat is not just storage

White adipose tissue, the most abundant type of fat in adults, does store energy in the form of triglycerides. But it also has several other functions.

For one, white fat is a powerful endocrine organ, releasing hormones like leptin that reduce appetite, as well as adiponectin, which regulates insulin and blood sugar levels. It also cushions organs, insulates against heat loss and acts as a metabolic buffer, safely storing excess lipids that would otherwise accumulate in the liver or muscle.

Microscopy image of oval-shaped white blobs packed together
White adipose cells provide several essential bodily functions.
Ed Reschke/Stone via Getty Images

When white adipose cells expand in a healthy, flexible way, they protect the body. When they become inflamed or dysfunctional, they contribute to insulin resistance, fatty liver disease and cardiovascular risk. Obesity arises from both the expansion of white adipose cells and an increase in their number.

In other words, fat is not inherently harmful. Its health impact depends on the size of adipose cells, and when they become too large, they are unable to function optimally. Increasing the number of new fat cells can sometimes improve metabolic function.

Moreover, there are additional types of fat, and they behave in different ways.

Brown fat: The cellular furnace

Unlike white fat, brown fat is specialized to burn energy. Brown adipose cells are packed with mitochondria – the tiny power plants inside cells – and contain a protein called UCP1 that allows them to convert chemical energy directly into heat. Instead of storing calories, brown fat dissipates them.

In infants, brown fat helps maintain body temperature. For years, scientists believed it largely disappeared in adulthood. But imaging studies in the late 2000s revealed that many adults retain metabolically active brown fat, particularly in the neck and upper chest.

Exposure to cold temperatures naturally triggers the brain to stimulate brown fat cells and generate heat. As energy use rises for this process, so does calorie-burning.

If activating brown fat increases energy expenditure, could it be harnessed to treat obesity?

The challenge is that human metabolism is tightly regulated. When energy expenditure increases, the body often compensates by stimulating hunger. Studies in animals – and observations in humans – show that cold exposure not only activates brown fat but also increases appetite. The brain detects the higher energy demand and signals for greater food intake.

From an evolutionary perspective, this makes sense. For our human ancestors, cold environments meant survival required more fuel. A system that failed to replace calories burned to keep you warm would have been dangerous. This homeostatic defense of body weight is powerful. It is one reason why weight loss is difficult to sustain and why increasing energy expenditure alone may not be sufficient to lose weight.

But when coupled with GLP-1 drugs that suppress appetite, promoting energy expenditure could lead to therapies that are even more powerful at promoting weight loss.

Diagram of white, beige and brown fat cells, progressively showing smaller amounts of lipids and larger numbers of mitochondria
As white fat cells turn brown, they acquire more mitochondria (blue ovals) and store fewer lipids (yellow spheres).
Vitalii Dumma/iStock via Getty Images Plus

Beige fat and metabolic plasticity

Adding further complexity to fat’s role in weight loss are beige fat cells. These cells arise within white fat depots under certain conditions – such as cold exposure or specific hormonal signals – and acquire some of the heat-producing properties of brown fat. This process, often called browning, reveals that adipose tissue is remarkably flexible.

Fat is not a static mass. It contains stem and progenitor cells capable of generating new adipocytes with distinct properties. That flexibility opens intriguing therapeutic possibilities: Instead of merely shrinking fat, could researchers reprogram it to become something else?

Researchers like me are exploring ways to safely enhance the heat-generating capacity of fat cells, potentially increasing energy expenditure without relying solely on environmental cold. Brown and beige fat are compelling targets because they are purpose-built for heat production, which is why my lab is focusing on harnessing them to treat metabolic disease.

But fat is not the only tissue in the body that consumes energy or can generate heat in the cold. Skeletal muscle accounts for a substantial portion of daily energy expenditure, particularly during activity. The liver continuously engages in metabolically expensive processes. Even subtle futile cycles – processes in which molecules are repeatedly built and broken down – consume energy and generate heat.

The future of metabolic therapeutics for weight loss may involve carefully increasing energy flux across multiple tissues. The challenge is doing so without triggering compensatory hunger or unintended side effects. Any intervention that dramatically raises metabolic demand risks being interpreted by the brain as a threat to survival.

Close-up of legs of three people running
Increasing energy expenditure can also increase appetite.
ljubaphoto/E+ via Getty Images

A two-sided strategy to maximize weight loss

The success of GLP-1–based medications has demonstrated that targeting appetite pathways can overcome some of the body’s resistance to weight loss. The next generation of therapies may build on that foundation.

One possibility is combining medications that modulate appetite with interventions that enhance energy expenditure. By influencing both sides of the energy balance equation – intake and output – it might be possible to achieve more durable metabolic improvements.

Equally important is shifting the public narrative. Fat is not merely an enemy to eliminate. It is a dynamic, multifunctional organ that protects, communicates, adapts and, under the right conditions, burns energy.

Understanding that complexity moves society beyond simplistic views of weight regulation. It also points toward a future in which therapies are not just about eating less, but about strategically harnessing the body’s own metabolic machinery.

The era of appetite control has begun. I believe the era of precision energy expenditure will be next.

The Conversation

Claudio Villanueva receives funding from the National Institutes of Health.

ref. Fat cells burn energy to make heat – making them the next frontier of weight loss therapies – https://theconversation.com/fat-cells-burn-energy-to-make-heat-making-them-the-next-frontier-of-weight-loss-therapies-277596

Oil isn’t just fuel: Iran conflict could disrupt markets for everything from plastics to fertilizers

Source: The Conversation – USA – By André O. Hudson, Dean of the College of Science, Professor of Biochemistry, Rochester Institute of Technology

Disruptions to crude oil transport could affect more than fuel supply chains. AP Photo/Hasan Jamali, File

Tensions in the Middle East often trigger concerns about rising gasoline prices. But disruptions to oil supplies could affect much more than the cost of filling up a car. That’s because crude oil is not only burned as fuel. It is also the raw material for thousands of products that modern societies depend on, including plastics, fertilizers, clothing fibers, medicines and electronics.

As a biochemist, I’m interested in how certain chemicals can shape society, and oil is a prime example.

The stakes become clearer when looking at the Strait of Hormuz, a narrow waterway between Iran and Oman. About one-fifth of the world’s petroleum liquids consumption passes through the strait each day, making it one of the most important oil shipping routes on Earth. If conflict significantly disrupts traffic there, the effects could ripple far beyond energy markets.

A map of the Strait of Hormuz, which is a narrow body of water between Iran and Oman.
The Strait of Hormuz.
Goran_tek-en/Wikimedia Commons, CC BY-SA

Oil is a chemical starting point

Crude oil is a complex mixture of hydrocarbons – molecules made mainly of carbon and hydrogen. Refineries and chemical plants separate and transform these molecules into smaller chemical building blocks known as petrochemicals.

Some of the most important petrochemical building blocks include chemicals such as ethylene, propylene and benzene. Manufacturers can then convert these building blocks into more complex forms, which make up plastics, solvents, synthetic rubber and other industrial materials.

While fuel is the best-known product, it actually represents only a portion of what is produced from crude oil. The refining process generates a wide range of petroleum-based materials used to manufacture everyday items, such as plastics, medicines, electronics, cosmetics, clothing fibers and household goods.

A diagram showing a bunch of different types of hydrocarbon molecules derived from petroleum
Hydrocarbons are molecules made predominantly from hydrogen and carbon. Different forms, derived from crude oil, are used in many types of manufacturing.
André O. Hudson/Patel & Shah, 2013

Plastics that shape modern life

One of the most visible uses of oil is the production of plastics. Scientists can link individual petrochemical molecules to form polymers, which are long chains of repeating units that create materials such as polyethylene, polypropylene and polystyrene.

Because plastics are lightweight, durable and relatively inexpensive, they have become essential to global manufacturing.

These plastics appear in countless products, including food packaging and water bottles; medical equipment, such as syringes and IV bags; electronics casings and appliances; automotive parts; and construction materials, such as pipes and insulation.

Even technologies designed to reduce carbon emissions depend on them. Wind turbines, solar panels and electric vehicles all contain plastic components derived from petrochemicals.

Fertilizer that feeds billions

Oil and natural gas also play a critical role in agriculture. Modern fertilizers rely on nitrogen compounds, such as ammonia. Ammonia is produced through the Haber-Bosch process, which uses hydrogen typically derived from natural gas or other fossil fuels.

These fertilizers replenish nutrients in soil and dramatically increase crop yields. Without them, global food production would be far lower. Petrochemicals are also used to produce pesticides, herbicides and plastics used in irrigation systems and agricultural equipment.

Clothing, cosmetics and medicines

Petrochemicals also appear in many everyday consumer goods. Synthetic fabrics, such as polyester, nylon and acrylic, are made from petrochemical feedstocks. These feedstocks are the basic chemicals, made from crude oil or natural gas, that serve as the starting ingredients for products widely used in clothing, carpets and furniture.

Petroleum-derived ingredients are also common in cosmetics and personal care products. Certain lotions, shampoos and lipsticks rely on these compounds because they help stabilize formulas and extend shelf life.

Petrochemicals are also important in medicine. Petroleum-derived chemical intermediates − compounds made during the process of turning raw materials into a final product − are used to manufacture pharmaceuticals, medical tubing, sterile packaging and disposable gloves.

These materials help hospitals maintain sterility and safety in health care environments.

Crude oil is far more than just a source of gasoline.

Why the Strait of Hormuz matters

Because oil and petrochemical feedstocks move through global shipping routes, disruptions in one region will affect supply chains worldwide. The Strait of Hormuz is particularly important. If conflict or political tensions continue to interrupt shipping through the strait, oil prices will rise quickly. Energy analysts have long warned that disruptions to the strait could send shock waves through global markets. The impact would not be limited to transportation fuels.

Petrochemical industries depend on steady supplies of crude oil and natural gas liquids as raw materials. If those supplies become more expensive or harder to obtain, manufacturers could face higher production costs.

The proportion of crude oil used for petrochemical feedstocks to create plastics, fertilizers and other materials represents around 10% to 20% of oil consumption. Most crude oil is refined for fuel production, including gasoline, diesel and jet fuel, so these fuel supply chains would likely be the first to take a hit. But over time, disruptions could affect the availability and price of products ranging from plastics and packaging to fertilizers, synthetic clothing fibers and even food.

A hidden foundation of modern economies

Because petrochemicals are often used behind the scenes as ingredients rather than finished products, the connection many agricultural, medical and consumer goods have to oil is easy to overlook. Yet, petrochemicals form a hidden foundation for modern economies. They enable large-scale agriculture, advanced health care systems and global manufacturing supply chains.

At the same time, concerns about climate change and plastic pollution are driving research into alternatives. Scientists are developing bio-based plastics made from plant materials, improving recycling technologies and exploring new ways to produce fertilizers with lower carbon emissions.

For now, the modern world remains deeply dependent on oil, not only for energy but also for the materials that shape everyday life. When news headlines focus on disruptions to oil supply, the consequences may extend far beyond the gas pump, affecting the products that underpin modern society.

The Conversation

André O. Hudson receives funding from the National Science Foundation and the National Institutes of Health.

ref. Oil isn’t just fuel: Iran conflict could disrupt markets for everything from plastics to fertilizers – https://theconversation.com/oil-isnt-just-fuel-iran-conflict-could-disrupt-markets-for-everything-from-plastics-to-fertilizers-277946

AI doesn’t ‘see’ the way that you do, and that could be a problem when it categorizes objects and scenes

Source: The Conversation – USA – By Arryn Robbins, Assistant Professor of Psychology, University of Richmond

An AI and a human might classify this mammal with gray, wrinkled skin as very different animals. Richard Bailey/Corbis via Getty Images

Even with no fur in frame, you can easily see that a photo of a hairless Sphynx cat depicts a cat. You wouldn’t mistake it for an elephant.

But many artificial intelligence vision systems would. Why? Because when AI systems learn to categorize objects, they often rely on visual cues – like surface texture or simple patterns in pixels. This tendency makes them vulnerable to getting confused by small changes that have little effect on human perception.

A vision system aligned more closely with human perception – one that emphasizes shape, for instance – might still mistake the cat for another similarly shaped mammal, like a tiger, but it is unlikely to label it an elephant.

The kinds of mistakes an AI makes reveal how it organizes visual information, with potential limitations that become concerning in higher-stakes settings.

red stop sign with stickers and graffiti
Stickers and graffiti on a stop sign could serve as an adversarial attack, confusing AI in autonomous vehicles.
rick/Flickr, CC BY

Imagine an autonomous vehicle approaching a vandalized stop sign. While a human driver recognizes the sign from its shape and context, an AI that relies on pixel patterns may misclassify it, pushing the altered sign out of the category “sign” altogether and into a different group of images that it identifies as similar, such as a billboard, advertisement or other roadside object.

Together, these problems point to a misalignment between how humans perceive the visual world and how AI represents it.

We are experts in visual perception, and we work at the intersection of human and machine perception. People organize visual input into objects, meaning and relationships shaped by experience and context. AI models don’t organize visual information the same way. This key difference explains why AI sometimes fails in surprising ways.

Seeing objects, not features

Imagine that in front of you is a small, opaque object with both straight and curved edges. But you don’t see those features; you just see your coffee mug.

Vision isn’t a camera, passively recording the world. Instead, your brain rapidly turns the light your eyes absorb into objects you recognize and understand, organizing experience into structured mental representations.

Researchers can understand how these representations are structured by examining how people judge similarity. Your coffee mug is not like your computer, but it’s similar to a glass of water despite differences in appearance. That judgment reflects how the mug is mentally represented: not just in terms of appearance, but also what the mug is used for and how it fits into everyday activities.

clear glass of water next to white ceramic mug in saucer on table
Very alike in how you use them; less similar in looks.
Oscar Wong/Moment via Getty Images

Importantly, the mental organization of representations is flexible. Which aspects of an object stand out change with context and goals. If packing a moving box, shape and size matter most, so your mug might be placed anywhere it fits. But when putting it away in a cupboard, it goes next to other drinkware. The mug hasn’t changed, only the way it is organized in your mind.

Human visual perception is adaptive, driven by meaning and tied to how we interact with the world.

Aligning AI with humans

AI systems, however, organize visual input in fundamentally different ways than people – not because they are machines, but because of how narrowly they are trained. When an AI is trained to categorize a cat or an elephant, it only needs to learn which visual patterns lead to the correct label, not how those animals relate to each other or fit into the broader world.

In contrast, humans learn within a broader context. When we learn what an elephant is, we weave that representation into the tapestry of everything else we have learned: animals, size, habitats and more. Because AI is graded only on label accuracy, it can rely on shortcuts that work in training but sometimes fail in the real world.

The issue of representational alignment refers to whether AI organizes information in ways that resemble how people do. It’s not to be confused with value alignment, which refers to the challenge of making sure AI systems pursue outcomes and goals that humans intend.

Because human learning embeds new information into a web of prior knowledge, the relationships between new and existing concepts can be studied and measured. This means that representational alignment may be a solvable problem and a step toward addressing broader alignment challenges.

One approach to representational alignment focuses on building AI systems that behave like humans on psychological tasks, allowing researchers to compare representations directly. For example, if people judge a cat as more similar to a dog than to an elephant, the goal is to build AI models that arrive at those same judgments.

One promising technique involves training AI on human similarity judgments collected in the lab. In these studies, human participants might be shown three images and asked which two objects are more similar; for example, whether a mug is more like a glass or a bowl. Including this data during training encourages AI systems to learn how objects relate to one another, producing representations that better reflect how people understand the world.
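As a sketch of what that objective looks like in practice (a generic hinge-style triplet loss, not any specific lab's implementation; the embeddings below are made up), the model is penalized whenever the pair that people judged more similar does not end up closer in its representation space:

```python
import numpy as np

def triplet_alignment_loss(anchor, chosen, other, margin=1.0):
    """Hinge-style triplet loss over embedding vectors: people judged
    `chosen` as more similar to `anchor` than `other` is, so `chosen`
    should sit closer to `anchor` by at least `margin`."""
    d_pos = np.linalg.norm(anchor - chosen)  # e.g., mug vs. glass
    d_neg = np.linalg.norm(anchor - other)   # e.g., mug vs. bowl
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical 2-D embeddings for a mug, a glass and a bowl.
mug = np.array([0.0, 0.0])
glass = np.array([0.5, 0.0])
bowl = np.array([3.0, 0.0])

# Zero loss here: the human-preferred pair (mug, glass) is already
# sufficiently closer than (mug, bowl) in this embedding.
loss = triplet_alignment_loss(mug, glass, bowl)
```

Averaged over many human judgments and minimized during training, a loss of this shape nudges the model's embedding space toward the similarity structure people report.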

view from behind of man looking at X-rays of chest and other body parts
Health care providers want AI systems that flag real issues, without a lot of misses or false positives.
REB Images/Connect Images via Getty Images

Alignment beyond vision

Representational alignment matters beyond vision systems, and AI researchers are taking notice. As AI increasingly supports high-stakes decisions, differences between how machines and humans represent the world will have real consequences, even when an AI system appears highly accurate. For example, if an AI analyzing medical images learns to associate the source of an image or repeated image artifacts with disease rather than the real visual signs of the disease itself, that is obviously problematic.

AI doesn’t necessarily need to process information exactly the way people think, but training AI using principles drawn from human perception and cognition – such as similarity, context and relational structure – can lead to safer, more accurate and more ethical systems.

The Conversation

Eben W. Daggett receives funding from the NMSU Institute for Applied Practice in AI and Machine Learning (IAAM). He is currently employed by Medtronic PLC.

Michael Hout has received funding from the New Mexico State University Institute for Applied Practice in Artificial Intelligence and Machine Learning.

Arryn Robbins does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI doesn’t ‘see’ the way that you do, and that could be a problem when it categorizes objects and scenes – https://theconversation.com/ai-doesnt-see-the-way-that-you-do-and-that-could-be-a-problem-when-it-categorizes-objects-and-scenes-271481

US military leans into AI for attack on Iran, but the tech doesn’t lessen the need for human judgment in war

Source: The Conversation – USA – By Jon R. Lindsay, Associate Professor of Cybersecurity and Privacy and of International Affairs, Georgia Institute of Technology

AI is helping U.S. forces find and choose targets in Iran, like this airfield. U.S. Central Command via AP

The U.S. military was able “to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran” thanks in part to its use of artificial intelligence, according to The Washington Post. The military has used Claude, the AI tool from Anthropic, combined with Palantir’s Maven system, for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela.

While Claude is only a few years old, the U.S. military’s ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the U.S. can use AI in war today.

In my experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as an intelligence officer in the U.S. Navy, I find that digital systems are only as good as the organizations that use them. Some organizations squander the potential of advanced technologies, while others can compensate for technological weaknesses.

Myth and reality in military AI

Science fiction tales of military AI are often misleading. Popular ideas of killer robots and drone swarms tend to overstate the autonomy of AI systems and understate the role of human beings. Success, or failure, in war usually depends not on machines but the people who use them.

In the real world, military AI refers to a huge collection of different systems and tasks. The two main categories are automated weapons and decision support systems. Automated weapon systems have some ability to select or engage targets by themselves. These weapons are more often the subject of science fiction and the focus of considerable debate.

Decision support systems, in contrast, are now at the heart of most modern militaries. These are software applications that provide intelligence and planning information to human personnel. Many military applications of AI, including in current and recent wars in the Middle East, are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration and cybersecurity.

Claude is an example of a decision support system, not a weapon. Claude is embedded in the Maven Smart System, used widely by military, intelligence and law enforcement organizations. Maven uses AI algorithms to identify potential targets from satellite and other intelligence data, and Claude helps military planners sort the information and decide on targets and priorities.

The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision support systems. These AI applications provide analytical and planning support, but human beings ultimately make the decisions.

Researcher Craig Jones explains how the U.S. military is using artificial intelligence in its attack on Iran, and some of the issues that arise from its use.

The long history of military AI

Weapons with some degree of autonomy have been used in war for well over a century. Nineteenth-century naval mines exploded on contact. German buzz bombs in World War II were gyroscopically guided. Homing torpedoes and heat-seeking missiles alter their trajectory to intercept maneuvering targets. Many air defense systems, such as Israel’s Iron Dome and the U.S. Patriot system, have long offered fully automatic modes.

Robotic drones became prevalent in the wars of the 21st century. Uncrewed systems now perform a variety of “dull, dirty and dangerous” tasks on land, at sea, in the air and in orbit. Remotely piloted vehicles like the U.S. MQ-9 Reaper or Israeli Hermes 900, which can loiter autonomously for many hours, provide a platform for reconnaissance and strikes. Combatants in the Russia-Ukraine war have pioneered the use of first-person view drones as kamikaze munitions. Some drones rely on AI to acquire targets because electronic jamming precludes remote control by human operators.

But systems that automate reconnaissance and strikes are merely the most visible parts of the automation revolution. The ability to see farther and hit faster dramatically increases the information processing burden on military organizations. This is where decision support systems come in. If automated weapons improve the eyes and arms of a military, decision support systems augment the brain.

Cold War-era command and control systems anticipated modern decision support systems such as Israel’s AI-enabled Tzayad for battle management. Automation research projects like the United States’ Semi-Automatic Ground Environment, or SAGE, in the 1950s produced important innovations in computer memory and interfaces. In the U.S. war in Vietnam, Igloo White gathered intelligence data into a centralized computer for coordinating U.S. airstrikes on North Vietnamese supply lines. The U.S. Defense Advanced Research Projects Agency’s strategic computing program in the 1980s spurred advances in semiconductors and expert systems. Indeed, defense funding originally enabled the rise of AI.

Organizations enable automated warfare

Automated weapons and decision support systems rely on complementary organizational innovation. From the Electronic Battlefield of Vietnam to the AirLand Battle doctrine of the late Cold War and later concepts of network-centric warfare, the U.S. military has developed new ideas and organizational concepts.

Particularly noteworthy is the emergence of a new style of special operations during the U.S. global war on terrorism. AI-enabled decision support systems became invaluable for finding terrorist operatives, planning raids to kill or capture them, and analyzing intelligence collected in the process. Systems like Maven became essential for this style of counterterrorism.

The impressive American way of war on display in Venezuela and Iran is the fruition of decades of trial and error. The U.S. military has honed complex processes for gathering intelligence from many sources, analyzing target systems, evaluating options for attacking them, coordinating joint operations and assessing bomb damage. The only reason AI can be used throughout the targeting cycle is that countless human personnel everywhere work to keep it running.

AI gives rise to important concerns about automation bias, or the tendency for people to give excessive weight to automated decisions, in military targeting. But these are not new concerns. Igloo White was often misled by Vietnamese decoys. A state-of-the-art U.S. Aegis cruiser accidentally shot down an Iranian airliner in 1988. Intelligence mistakes led U.S. stealth bombers to accidentally strike the Chinese embassy in Belgrade, Serbia, in 1999.

Many Iraqi and Afghan civilians died due to analytical mistakes and cultural biases within the U.S. military. Most recently, evidence suggests that a Tomahawk cruise missile struck a girls’ school adjacent to an Iranian naval base, killing about 175 people, mostly students. This targeting could have resulted from a U.S. intelligence failure.

Automated prediction needs human judgment

The successes and failures of decision support systems in war are due more to organizational factors than technology. AI can help organizations improve their efficiency, but AI can also amplify organizational biases. While it may be tempting to blame Lavender for excessive civilian deaths in the Gaza Strip, lax Israeli rules of engagement likely matter more than automation bias.

As the name implies, decision support systems support human decision-making; AI does not replace people. Human personnel still play important roles in designing, managing, interpreting, validating, evaluating, repairing and protecting their systems and data flows. Commanders still command.

In economic terms, AI improves prediction: the generation of new information from existing data. But prediction is only one part of decision-making. People ultimately make the judgments that matter about what to predict and how to use predictions. People have preferences, values and commitments regarding real-world outcomes, but AI systems intrinsically do not.

In my view, this means that increasing military use of AI is actually making humans more important in war, not less.

The Conversation

Jon R. Lindsay does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. US military leans into AI for attack on Iran, but the tech doesn’t lessen the need for human judgment in war – https://theconversation.com/us-military-leans-into-ai-for-attack-on-iran-but-the-tech-doesnt-lessen-the-need-for-human-judgment-in-war-277831

Why do mountaintops stay snowy even when it’s dry at the base?

Source: The Conversation – USA (2) – By Allie Mazurek, Engagement Climatologist and Researcher, Colorado Climate Center, Colorado State University

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Why do we see snow on mountaintops that are closer to the Sun but not near the ground? – Ms. Drews’ third grade class, Beechview Elementary School, Farmington Hills, Michigan


There’s not much better than a bluebird day in the mountains – a crisp, sunny day accompanied by a fresh blanket of snow. But why doesn’t the Sun quickly melt all that high-altitude snow away?

It all boils down to our atmosphere, which is what I research as a scientist in Colorado. Let’s dive in!

Our atmosphere: Earth’s armor

Earth’s atmosphere begins right at its surface and extends to outer space, and it is filled with a mixture of many different gases. Gases in the atmosphere include the oxygen we breathe and the water vapor that makes it rain and snow. They are essential to supporting life on Earth in several ways.

One of the most important jobs those gases have is to protect us from harmful things in space, including our closest star: the Sun.

The Sun’s radiation provides heat to our planet, but too much of it can be a problem. If you’ve ever gotten a sunburn, then you’re already familiar with this idea.

Illustration shows how the greenhouse effect warms the Earth by trapping some gases close to the surface.
Gases in the atmosphere warm the Earth by trapping heat close to the planet’s surface. Too many of those greenhouse gases can cause global temperatures to rise beyond normal and stay high.
Climate Central, CC BY

Some of our atmospheric gases limit the amount of radiation from the Sun that can reach the Earth’s surface by absorbing some of it, which prevents temperatures from being way too warm in the daytime. At night, certain atmospheric gases also trap some of the heat that the Earth’s surface releases as it cools down, protecting us from unsurvivable cold.

The way the atmosphere regulates Earth’s temperatures is known as the greenhouse effect. You’ll often hear this term used alongside climate change or global warming. That is because global warming is caused by an enhanced greenhouse effect: As people burn fossil fuels in cars and factories, the amount of greenhouse gases in the atmosphere increases. These extra gases allow the Earth’s atmosphere to trap more heat, causing an increase in temperatures.

The atmosphere likes to stay grounded

If you were to compare the Earth’s atmosphere along a Caribbean beach to that surrounding the top of Mount Everest, it would look quite different.

That is because as you go higher up in the atmosphere, it gets “thinner,” meaning that there are fewer gas molecules present at higher elevations and altitudes.

There are more atmospheric gas molecules present at lower altitudes, closer to sea level. But as you go higher in the mountains, atmospheric pressure and the density of air molecules decrease. It’s why climbers on Mount Everest need oxygen tanks.

Why? Blame it on gravity.

In the same way that gravity keeps people and objects from flying away to outer space, Earth’s gravitational force pulls on the gases in our atmosphere, trying to keep them as close to Earth as possible.

As a result, there are fewer gas molecules in the atmosphere as you go higher up in altitude, making the air thinner, or less dense. Humans can sometimes experience altitude sickness at high elevations because there is less oxygen present in the air as a result of this phenomenon.

Closer to the Sun, but still cold and snowy?

Our high-elevation mountains protrude into higher altitudes of the atmosphere, where the air has fewer gas molecules. While this thinner air allows more of the Sun’s radiation to pass through compared with the atmosphere at sea level, thinner air tends to be colder for two reasons:

First, collisions between gas molecules generate heat, and if you have fewer molecules available to run into each other, that heat generation is lower.

Second, a thinner atmosphere is less effective at maintaining heat because there are fewer molecules available to trap and hold on to heat.

Colder temperatures can create more opportunities for precipitation to fall in the form of snow rather than rain, which is why some mountains can be so snowy.

And if the ground is habitually covered in snow, as is the case in many mountain ranges, it can be even easier to maintain cooler temperatures. That’s because snow-covered surfaces are very reflective, making them highly effective at causing the Sun’s incoming rays to bounce back toward space instead of getting absorbed by the ground.

So if you visit the mountains to have fun in the snow, be sure to pack your jacket, but don’t forget that sunscreen too.

The Conversation

Allie Mazurek does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why do mountaintops stay snowy even when it’s dry at the base? – https://theconversation.com/why-do-mountaintops-stay-snowy-even-when-its-dry-at-the-base-277560