How Frank Rizzo, a high school dropout, became Philadelphia’s toughest cop and a harbinger of MAGA politics

Source: The Conversation – USA – By Timothy J Lombardo, Associate Professor of History, University of South Alabama

Mayor Frank Rizzo poses for a portrait on Jan. 3, 1977. Santi Visalli via Getty Images

In August 2025, the city of Philadelphia agreed to return a statue of Frank Rizzo to the supporters who commissioned the memorial in 1992.

The 2,000-pound bronze tribute to the former police commissioner-turned-mayor had stood in front of the city’s Municipal Services Building from 1998 until 2020, when then-Mayor Jim Kenney ordered it removed days after demonstrators attempted to topple it during the protests that followed the murder of George Floyd.

While the agreement states that the statue cannot be placed in public view, conservatives have still hailed its return as a triumph for Rizzo’s legacy. In the ongoing culture wars over historical memory and memorialization, Rizzo’s supporters have declared their repossession of the statue a victory over the “woke mayor” who unlawfully removed it.

As a historian and native Philadelphian, I have written extensively about the city. My first book, which will be rereleased with a new preface in February 2026, traces the rise of Rizzo’s political appeal and contextualizes his supporters’ politics in the broader history of the rise of the right.

My work recognizes Rizzo not only as the quintessential backlash politician of the 1960s and 1970s, but also as a harbinger of today’s identity-based populism that favors social and cultural victories over economic redistribution.

As police commissioner from 1967 to 1971 and mayor from 1972 to 1979, Rizzo became a hero to the white, blue-collar Philadelphians who clamored for “law and order” and railed against liberal policymaking. Until he died in 1991, while running a third campaign to retake the mayor’s office, Rizzo was an avatar of what I call “blue-collar conservatism.”

Understanding Rizzo’s career and political popularity can help explain the persistent appeal of this identity-based populism in the 21st century.

Police officers guard the Frank Rizzo statue as protesters clash with police near City Hall in May 2020 after the murder of George Floyd.
Bastiaan Slabbers/NurPhoto via Getty Images

Rizzo, from cop to mayor

Francis Lazzaro Rizzo was born in South Philadelphia, in the mostly Italian-American neighborhood his parents settled in after immigrating from Calabria, Italy.

In a city where police work was often a family affair, Rizzo followed his father’s footsteps into the Philadelphia Police Department a few years after dropping out of high school.

Early on, he drew praise from superiors for his clean-cut image and aggressive policing. In the 1950s, Rizzo fortified that reputation while patrolling predominantly Black neighborhoods in West Philadelphia and leading raids on gay meeting places in Center City.

As deputy commissioner in the 1960s, Rizzo directly confronted the city’s civil rights movement. Among other exploits, he commanded the response to the Columbia Avenue Uprising in 1964, when North Philadelphia residents responded to an all-too-common act of police brutality with three days of urban disorder.

He also faced down protesters seeking to integrate Girard College, an all-white city-operated boarding school for orphaned boys in the heart of predominantly Black North Philadelphia.

While serving as acting commissioner in 1967, Rizzo led a throng of baton-wielding police into a crowd of high schoolers demanding education reform. The scene ended with police chasing down and beating mostly Black youngsters in front of the Board of Education headquarters.

Rizzo was promoted to commissioner later that year.

While African Americans and white liberals decried his “Gestapo tactics,” Rizzo grew increasingly popular among the city’s white, blue-collar residents.

Members of the Philadelphia Black Panther Party are handcuffed and stripped by Philadelphia police after Frank Rizzo ordered an early morning raid of their Columbia Avenue headquarters on Aug. 31, 1970.
Courtesy of the Special Collections Research Center, Temple University Libraries, Philadelphia, PA.

He capitalized on their enthusiasm in 1971, when he campaigned and won his first election for mayor as both a Democrat and the self-proclaimed “toughest cop in America.”

For two terms he rewarded his supporters by opposing and limiting liberal programs they had fought, like public housing, school desegregation and affirmative action. When dissatisfied Democrats challenged his reelection in 1975, Rizzo vowed revenge by saying he would “make Attila the Hun look like a fa—t.”

Finally, while campaigning for an amendment to Philadelphia’s Home Rule Charter to allow him to run for a third consecutive mayoral term, Rizzo told an all-white audience of public housing opponents to “vote white” for charter change.

Populism then and now

Rizzo’s record makes clear why protesters targeted his statue in 2020. When Mayor Kenney ordered it removed, he called it “a deplorable monument to racism, bigotry and police brutality for members of the Black community, the LGBTQ community and many others.”

While Rizzo and his supporters were certainly part of the late 1960s backlash against civil rights and liberalism generally, his populism was more complex and durable than that narrative suggests.

He also offered affirmation to a beleaguered white, blue-collar identity. His supporters raved about his forceful policing and cheered his anti-liberalism as a last line of defense against policies they considered threats to their livelihoods. Just as important, they saw themselves reflected in the rough-talking high school dropout who worked his way up to the most powerful position in Philadelphia.

When Rizzo first ran for mayor, one of his supporters told a reporter: “He’ll win because he isn’t a Ph.D. He’s one of us. Rizzo came up the hard way.”

That kind of identity-based populism offered social and cultural victories even when it did little to address the declining economy that struck urban America in the 1970s. So while Rizzo’s populism had few answers for deindustrialization, in 1972 he was able to temporarily halt construction on a public housing project in an all-white section of his native South Philadelphia.

Mayoral candidate Frank Rizzo campaigns in a Philadelphia factory.
Dick Swanson/The Chronicle Collection via Getty Images

Trump’s similar appeal

Donald Trump offers a similar populist appeal in the 21st century. In fact, he has drawn comparisons to Rizzo since his first presidential campaign.

Like Rizzo’s, Trump’s appeal is more social and cultural than economic. Critics have argued that Trump’s promotion of traditional Republican economic policies belies the notion that he is a populist. Trump’s populism, however, lies not in his ability to deliver working-class prosperity, but in conservative victories in the nation’s long-standing culture wars.

Trump’s policies may not fulfill his promise to lower the cost of groceries or health care, but mass deportations reward those who fear a changing American identity.

Sending troops into cities may not address the cost-of-living crisis, but it delights those who see disorder in urban society.

Trump’s attempt to recast national history museums in a patriotic mold may not usher in a new “Golden Age of America,” but it promises a victory to opponents of “woke” history.

A large mural in South Philadelphia that paid tribute to Frank Rizzo was painted over in June 2020.
AP Photo/Matt Rourke

Redistributive vs. identity populism

Despite the lopsided attention Trump’s social and cultural populism receives, a kind of progressive, redistributive populism persists in many American cities. This populism promises a redirection of resources from elites and toward working people.

In Philadelphia in 2023, the multicultural, left-populist Working Families Party won the two at-large seats reserved for minority-party representation in the city’s legislature. Currently, Zohran Mamdani’s upstart campaign for mayor of New York seems to be reviving a long tradition of progressive urban populism.

Redistributive populism, however, remains at odds with the identity populism once championed by Rizzo and now by Trump. While the Trump administration’s policies may promise social and cultural victories, they have done little to affect the economic prospects of working-class Americans.


The Conversation

Timothy J Lombardo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How Frank Rizzo, a high school dropout, became Philadelphia’s toughest cop and a harbinger of MAGA politics – https://theconversation.com/how-frank-rizzo-a-high-school-dropout-became-philadelphias-toughest-cop-and-a-harbinger-of-maga-politics-263229

You can be exposed to PFAS through food, water, even swimming in lakes – new maps show how risk from ‘forever chemicals’ varies

Source: The Conversation – USA (2) – By Ruohao Zhang, Assistant Professor of Agricultural Economics, Penn State

Since the 1940s, companies have been using PFAS – perfluoroalkyl and polyfluoroalkyl substances – to make products easier to use, from Teflon nonstick pots to waterproof rain gear, stain-resistant carpet and firefighting foam.

The chemicals’ resistance to heat, oils, stains, grease and water makes them useful. However, that same chemical stability also makes them linger in the environment – and in the human body. Studies have suggested that some types of PFAS can contribute to health harms, including thyroid disease, liver damage and kidney and testicular cancer.

The U.S. Environmental Protection Agency has concluded that there is no safe level of human exposure for two of the most common PFAS compounds: perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS). It set drinking water standards limiting their acceptable levels in water systems in 2024.

However, drinking water isn’t the only way people are exposed to PFAS today.

Cattle have been found with high levels of PFAS, including at this farm in Maine. Sludge used on crops has been tied to the spread of PFAS.
Adam Glanzman/Bloomberg via Getty Images

To better understand the ways people are being exposed to PFAS, we and a team of colleagues examined four exposure pathways – drinking water contamination, food contamination, recreational exposure and industrial emissions, such as from Superfund sites, airports, military bases and manufacturing plants – across three Great Lakes states: Michigan, New York and Pennsylvania.

An interactive map and online dashboard we created lets residents look up their communities’ known PFAS exposure risks. The results offer insights for people across the United States who share similar living environments, dietary choices and lifestyles.

PFAS exposure patterns

The extensive use and improper disposal of products containing these “forever chemicals” have led to their widespread presence in the environment. They have made their way into farm fields, drinking water and water bodies, where fish and shellfish can ingest the chemicals and humans can swim in PFAS-contaminated water.

In an analysis of the three states, we found that the average person consumes about three times more PFAS through food than through drinking water.

We looked at 19 food items in which PFAS have been detected, including seafood such as clams, crab and shrimp, which carry the highest PFAS levels. Foods beyond these 19 may also expose people to PFAS, so our totals may underestimate actual intake.

For water contamination, we found that PFAS had been detected in 1,272 out of 2,593 tested public water facilities in Michigan, New York and Pennsylvania, collectively serving a population of about 23 million people.

We also found distinct patterns of PFAS exposure.

Among the three states, Pennsylvania has the highest risk of PFAS exposure from food and water, while Michigan has the lowest. Michigan’s lower risk likely reflects its significantly lower PFAS contamination in drinking water, which may be due to its PFAS water-testing and regulation.

Areas of Michigan believed to have higher PFAS risk through food are in dark red, according to the PFAS Exposure Risk Dashboard. Overall, Michigan’s PFAS exposure from food is believed to be low compared with other Great Lakes states.
PFAS Exposure Risk Dashboard

Notably, our analysis found that most dietary PFAS risk comes from butter, olive oil and shrimp. Seafood typically contains much higher PFAS concentrations than butter or olive oil – polluted rivers carry these chemicals into marine environments, where fish and shellfish gradually accumulate and magnify them through the food chain. However, substantially greater consumption of butter and olive oil makes those products potentially large dietary sources of PFAS.
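The arithmetic behind this pattern is simple: the estimated intake from a food is its PFAS concentration multiplied by how much of it a person eats, summed across foods. Below is a minimal sketch of that calculation in Python. Every number in it is a hypothetical placeholder chosen for illustration, not a value from the study.

```python
# Back-of-envelope illustration: daily PFAS intake per food equals
# concentration times amount eaten. All numbers below are hypothetical
# placeholders, not measurements from the study.

# Hypothetical PFAS concentrations (nanograms of PFAS per gram of food)
concentration_ng_per_g = {"shrimp": 1.0, "butter": 0.2, "olive oil": 0.2}

# Hypothetical average consumption (grams per day)
consumption_g_per_day = {"shrimp": 4, "butter": 25, "olive oil": 30}

# Intake per food, and the total across all foods
intake_ng_per_day = {
    food: concentration_ng_per_g[food] * consumption_g_per_day[food]
    for food in concentration_ng_per_g
}
total = sum(intake_ng_per_day.values())

for food, ng in sorted(intake_ng_per_day.items(), key=lambda kv: -kv[1]):
    print(f"{food:>9}: {ng:4.1f} ng/day ({100 * ng / total:.0f}% of total)")
```

At these made-up numbers, butter and olive oil together account for most of the estimated intake even though shrimp has five times their concentration, which mirrors the pattern described above.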

It’s important to note that not all sources of the foods we examined have the same PFAS risk, and the analysis did not assess the health effects from the PFAS exposure levels detected.

We found that intake of foods that can contain PFAS is higher in Pennsylvania and New York than in Michigan, driven largely by greater seafood and olive oil consumption, likely reflecting the influence of Mediterranean cuisines. Higher seafood consumption there is consistent with proximity to the coast.

Comparing Detroit, Philadelphia and NYC

Zooming in on individual cities offers more insight:

Detroit has an above-average risk of PFAS exposure through food compared with other locations in Michigan, and the highest risk among the three major urban centers we looked at. Ground beef and baked tilapia, two products in which PFAS have been detected in the North Central regional market, contribute to Detroit’s relatively high food-related PFAS exposure risk, along with high consumption of bacon, sausage and crab.

Detroit’s public drinking water hasn’t been tested for PFAS, so residents’ risk level from water is unknown.

New York City has minimal PFAS exposure risk from its public drinking water and much lower PFAS levels than surrounding suburban areas. Its risk of dietary intake of PFAS comes primarily from consumption of butter and olive oil.

Areas of the Philadelphia region with high PFAS readings are in dark blue. Gray areas lack data.
PFAS Exposure Risk Dashboard

Philadelphia’s public drinking water has also been found to pose minimal risk, with significantly lower PFAS contamination than in surrounding suburban areas. However, the city has relatively high consumption of shrimp, bacon and sausage. We found that the city and its region also have a high risk of exposure to PFAS from recreation on water bodies compared with other regions. Researchers are only beginning to understand the risks from PFAS exposure through skin.

Among smaller cities, Rochester, New York, and its surrounding area, particularly along Lake Ontario, also stands out for its higher risks from recreational exposure to PFAS compared with other regions. A 2024 study of PFAS in the Great Lakes found that airborne pollution was contributing to contamination in the five lakes, particularly Lake Ontario, along with PFAS from industry-lined rivers.

Areas of western New York, particularly along Lake Ontario, show some of the highest recreational PFAS risks in New York, according to the map.
PFAS Exposure Risk Dashboard

How to reduce your PFAS exposure

In general, we recommend several actions to help mitigate PFAS exposure risk.

Households served by public water systems with high levels of PFAS may want to use drinking water filtration systems.

People can also reduce their exposure by adjusting their diets, eating less of the foods most likely to carry PFAS contamination.

Our dashboard also includes a map of recreational sites near PFAS-contaminated water bodies.

The dashboard reflects the goal of our study – not only to inform, but also to empower individuals and communities to make healthier choices. Local governments and advocacy groups can also use the data to prioritize policies to reduce exposure.

Where to learn more

Several official and unofficial resources are also available to help the public understand PFAS contamination across the U.S.

The EPA created an online PFAS Analytic Tool that shows locations of PFAS contamination in natural water, drinking water systems, and industrial emissions through interactive maps. The Environmental Working Group, a science and advocacy group, provides a map highlighting PFAS-contaminated sites and affected public water systems.

These resources offer valuable insights into contamination locations, but they do not directly assess human exposure or individual risk.

As the research on PFAS continues to develop and policies evolve, the need for information becomes increasingly important for public understanding and prevention. We hope our study inspires people to become more informed and more engaged in protecting themselves and their families from environmental pollution exposure.

Jiahui Guo, a Ph.D. student at Penn State, and Yongwang Ren, a postdoctoral researcher at Kansas State University, contributed to this article.

The Conversation

This project is funded by Illinois-Indiana Sea Grant, Grant Number: NA22OAR4170654-T1-01.

ref. You can be exposed to PFAS through food, water, even swimming in lakes – new maps show how risk from ‘forever chemicals’ varies – https://theconversation.com/you-can-be-exposed-to-pfas-through-food-water-even-swimming-in-lakes-new-maps-show-how-risk-from-forever-chemicals-varies-261632

Hidden treasures of America’s national parks are closer than you might think

Source: The Conversation – USA (2) – By Jeffrey C. Hallo, Professor of Parks, Recreation and Tourism Management, Clemson University

When people think about national parks, they often think about the most famous ones – places like Yellowstone, Yosemite, Denali, Acadia, Glacier, Everglades and the Great Smoky Mountains. These are among the nation’s most sought-after destinations, with awe-inspiring scenery, abundant wildlife and places for adventure and recreation.

Admission is free at most of them, and at the rest, it’s competitive with the cost of a family meal deal at a fast-food joint.

But there is much more to the nation’s park system than just the 63 places formally designated as national parks. The National Park Service also manages nearly 400 other areas designated for their national significance as battlefields, military or historic sites, lakeshores, seashores, monuments, parkways, recreation areas, trails, rivers and preserves.

As a scholar of parks, recreation and tourism who has also published a children’s book about the wonders of the National Park System, I have seen how important these places are to Americans. And when the nation grapples with political divisions, civil unrest, social change or pandemics, these public lands – whether technically national parks or other elements in the wider system – are debated and fought over, protested in and used as an example. But they also provide places to find peace and restoration.

These sites of national significance are in every state in the U.S. – and hold surprising treasures no less wondrous than the big-name destinations, potentially right around the corner from your home.

Sea caves on Lake Superior provide stunning natural beauty at a national park that’s less well-known than some others.
Royalbroil via Wikimedia Commons, CC BY-SA

Enjoyment at the waterfront

America’s coastlines, shorelines, lakes and rivers are often prime destinations for vacationers, but access to them can be limited by private development, and parking and admission fees can be costly.

National parks help protect wide swaths of public access to these popular destinations and the affordability of visiting them for generations to come. Almost all of these water-focused parks allow swimming, beach or shore access, boating and fishing.

For example, Cumberland Island National Seashore in Georgia is an idyllic island with wild horses, historic mansions, uncrowded beaches and a maritime forest where you can hunt for fossilized shark teeth and camp among the Spanish moss-covered oak trees.

Point Reyes National Seashore in California has tule elk and elephant seal herds, a picturesque red-roofed lighthouse and fog-swept cliffs along the Pacific Ocean. Apostle Islands National Lakeshore in Wisconsin has sea caves to explore on kayaks.

Backcountry exploration

When people seek a break from the pace of modern life and the demands of being digitally connected, national parks contain expanses of backcountry, where signs of civilization are sparse, and where profound natural beauty, adventure and solitude are still available.

In Michigan’s Isle Royale National Park, you can see moose and hear wolves howl in the island’s wilderness. In South Carolina’s Congaree National Park, you can canoe or kayak on backwater creeks among some of the largest and tallest trees in eastern North America.

In Idaho, Craters of the Moon National Monument & Preserve allows visitors to explore an otherworldly volcanic landscape of lava flows, cinder cones and lava tubes. Primitive roads there allow people to drive into the backcountry to experience solitude without hiking.

National parks also offer a break from looking at this world entirely: 44 properties in the National Park Service system are certified as International Dark Sky Parks, where the nighttime environment is protected from invasive light pollution by laws and local regulations.

At Hot Springs National Park in Arkansas, visitors can walk right from a city center into the park.
Ron Buskirk/UCG/Universal Images Group via Getty Images

A break from urban life

In America’s suburbs, and even in the heart of major cities, national park lands bring history, nature, leisure and urban life together. These parks reinforce the idea that national parks aren’t just for long-distance vacations but rather for daily life, enjoyment and reflection not far from home.

For example, the Mississippi National River & Recreation Area in Minnesota offers roughly 4 million residents of the Minneapolis-St. Paul metropolitan area mostly free access to over 70 miles of the river for all manner of waterborne and shoreline recreation. And just outside of New York City, off Long Island’s south shore, Fire Island National Seashore provides an easy escape to a rare coastal wilderness for undisturbed hikes through dunes and salt marshes.

Hot Springs National Park in Arkansas is one of the only national parks fully integrated into a small city. An area first preserved by Congress for public recreation in 1832 – 40 years before Yellowstone became the first official national park – it offers miles of trails that feel wild, despite their proximity to the downtown area. Its historic Bathhouse Row provides opportunities for bathing in thermal waters, and the park encourages visitors to drink the natural waters at the numerous spring-fed fountains in the town.

If a stronger drink is needed, Hot Springs is the only national park that has a brewery within its boundaries, using the park’s thermal spring water in its beers.

The Stonewall National Monument in New York City is one of many locations that recognize efforts to improve equality and social justice throughout U.S. history.
AP Photo/Pamela Smith

Lessons from history and culture

The National Park System also preserves America’s history and culture – and reminds people of the country’s collective mistakes and triumphs. The parks help Americans apply the many lessons of history to current issues. Americans can learn what we as a nation and as a collective of people have done – and what we have always yearned to do.

Independence National Historical Park in Philadelphia showcases the birthplace of American democracy, where the Declaration of Independence and Constitution were debated and signed, establishing a new democracy with sweeping goals of equality and opportunity for everyone.

Manzanar National Historic Site in California and Kalaupapa National Historical Park in Hawaii keep alive the stories of forced internments of people who were deemed dangerous or undesirable, reminding Americans that there have been times the nation did not live up to its ideals.

Minuteman Missile National Historic Site in South Dakota and Manhattan Project National Historical Park, with sites in Tennessee, New Mexico and Washington, shed light on the technology and politics of warfare.

And Belmont-Paul Women’s Equality National Monument in Washington, D.C., César E. Chávez National Monument in California and Stonewall National Monument in the heart of New York City – along with many other similar national parks – teach Americans about the generations-long ongoing struggles for civil rights and social justice.

U.S. national parks are more numerous, complex and full of wonder and opportunities for discovery than any one person could fully grasp – whether a self-proclaimed superfan or a credentialed expert. There is always more to discover, with more stories to hear and more places to see and explore.

There are likely lesser-known gems very close by for you to visit. Take a friend, a child or someone who has never been there before. People who use parks love them, and parks supported by love are protected – by all of us.

The Conversation

Jeffrey C. Hallo receives funding from the National Park Service.

ref. Hidden treasures of America’s national parks are closer than you might think – https://theconversation.com/hidden-treasures-of-americas-national-parks-are-closer-than-you-might-think-262585

A first connection can make a big difference when it comes to sticking with a career

Source: The Conversation – USA (2) – By Soon Hyeok Choi, Assistant Professor of Real Estate Finance, Rochester Institute of Technology

People often say that a single spark can light a fire.

In careers, that spark is often a person. It might be someone early in life who cracks open a door, offers encouragement, or quietly shows what success can look like. What’s less obvious is how profoundly that very first connection can shape everything that comes afterward.

Consider 23-time Grand Slam tennis champion Serena Williams, who has often spoken about the crucial role played by her first coach – and father – Richard Williams. His belief in her abilities and his willingness to expose her to competitive tennis from an early age ensured she gained experience long before most of her peers. In this, she’s not alone – in sports, a first coach can recognize potential before anyone else does.

Or consider Misty Copeland, the first Black female principal dancer at American Ballet Theatre. At 13, a Boys & Girls Club teacher, Cynthia Bradley, recognized her potential and brought her into formal ballet training; within four years Copeland earned a spot in ABT’s Studio Company. In 2015, she became ABT’s first Black female principal, a milestone built on that early mentorship. Those first advocates opened doors to elite training, scholarships and professional networks that sustained a long, barrier-breaking career.

Anecdotes like these are powerful, but they also raise questions. Do early connections cause long-term success, or do they simply come more easily to people already positioned to succeed? After all, a young athlete with supportive and affluent parents might have access to better training and competition regardless of who their first coach is. This chicken-and-egg problem is hard to untangle – unless you look at a setting where chance plays a role. That’s where my research comes in.

Real estate as a natural laboratory

I’m a professor of real estate finance, and I noticed that the residential real estate brokerage industry can mimic a random experimental setting. Since only a small number of people are active in housing markets at any given time, agents can’t choose exactly who they work with. That means a new agent’s first counterparty broker – that is, the agent on the other side of the deal – depends on who happens to be representing clients at the same time and place. In many cases, that first connection is essentially a matter of luck.

So my colleagues and I analyzed more than 20 years of home sales data from Charlotte, North Carolina, covering more than 40,000 unique real estate agents and 417,000 home sales between 2001 and 2023. We found that new agents who land their first deal with a well-connected power broker are about 25% more likely to still be in the business a year later. Since many agents struggle to close a second deal within a year of their first, this significantly boosts their chances of building a lasting career.

The first handshake and lasting spark

What makes these first encounters so powerful is not only the transfer of skills but also the shaping of confidence and identity. A young musician invited to join an orchestra by a respected conductor begins to see himself as part of that world. A student encouraged by a scientist to enter a national competition begins to imagine a place for herself in research. An athlete who trains with an Olympic medalist begins to visualize competing at the highest levels. In each case, the first connection changes the sense of what is possible.

Our study also found that new agents at the greatest risk of leaving the field – those with fewer early sales – benefit the most from starting out with a well-connected partner. The same dynamic appears in sports, where struggling athletes often flourish under coaches with deep relationships and credibility, and in education, where students on the verge of disengaging can be reenergized by respected teachers who open doors to programs, competitions and networks. These mentors do more than teach. They change trajectories.

The lesson for those just beginning their careers: Seek out people who are respected and generous with their experience. Observing how they work, think and solve problems can shape your own professional identity.

For those who are more established, the takeaway is equally important: Offering a hand to someone new, making an introduction or simply offering encouragement can set in motion a sequence of events that shape a life.

The Conversation

Soon Hyeok Choi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A first connection can make a big difference when it comes to sticking with a career – https://theconversation.com/a-first-connection-can-make-a-big-difference-when-it-comes-to-sticking-with-a-career-263892

Scientific objectivity is a myth – cultural values and beliefs always influence science and the people who do it

Source: The Conversation – USA – By Sara Giordano, Associate Professor of Interdisciplinary Studies, Kennesaw State University

People are at the heart of the scientific enterprise. Matteo Farinella, CC BY-NC

Even if you don’t recall many facts from high school biology, you likely remember the cells required for making babies: egg and sperm. Maybe you can picture a swarm of sperm cells battling each other in a race to be the first to penetrate the egg.

For decades, scientific literature described human conception this way, with the cells mirroring the perceived roles of women and men in society. The egg was thought to be passive while the sperm was active.

The opening credits of the 1989 movie ‘Look Who’s Talking’ animated this popular narrative, with speaking sperm rushing toward the nonverbal egg to be the first to fertilize it.

Over time, scientists realized that sperm are too weak to penetrate the egg and that the union is more mutual, with the two cells working together. It’s no coincidence that these findings were made in the same era when new cultural ideas of more egalitarian gender roles were taking hold.

Scientist Ludwik Fleck is credited with first describing science as a cultural practice in the 1930s. Since then, understanding has continued to build that scientific knowledge is always consistent with the cultural norms of its time.

Despite these insights, across political differences, people strive for and continue to demand scientific objectivity: the idea that science should be unbiased, rational and separable from cultural values and beliefs.

When I entered my Ph.D. program in neuroscience in 2001, I felt the same way. But reading a book by biologist Anne Fausto-Sterling called “Sexing the Body” set me down a different path. It systematically debunked the idea of scientific objectivity, showing how cultural ideas about sex, gender and sexuality were inseparable from the scientific findings. By the time I earned my Ph.D., I began to look more holistically at my research, integrating the social, historical and political context.

From the questions scientists begin with, to the beliefs of the people who conduct the research, to choices in research design, to interpretation of the final results, cultural ideas constantly inform “the science.” What if an unbiased science is impossible?

Emergence of idea of scientific objectivity

Science grew to be synonymous with objectivity in the Western university system only over the past few hundred years.

In the 15th and 16th centuries, some Europeans gained traction in challenging the religiously ordained royal order. Consolidation of the university system led to shifts from trust in religious leaders interpreting the word of “god,” to trust in “man” making one’s own rational decisions, to trust in scientists interpreting “nature.” The university system became an important site for legitimizing claims through theories and studies.

Previously, people created knowledge about their world, but there were not strict boundaries between what are now called the humanities, such as history, English and philosophy, and the sciences, including biology, chemistry and physics. Over time, as questions arose about how to trust political decisions, people split the disciplines into categories: subjective versus objective. The splitting came with the creation of other binary oppositions, including the closely related emotionality/rationality divide. These categories were not simply seen as opposite, but in a hierarchy with objectivity and rationality as superior.

A closer look shows that these binary systems are arbitrary and self-reinforcing.

Science is a human endeavor

The sciences are fields of study conducted by humans. These people, called scientists, are part of cultural systems just like everyone else. We scientists are part of families and have political viewpoints. We watch the same movies and TV shows and listen to the same music as nonscientists. We read the same newspapers, cheer for the same sports teams and enjoy the same hobbies as others.

All of these obviously “cultural” parts of our lives are going to affect how scientists approach our jobs and what we consider “common sense” that does not get questioned when we do our experiments.

Beyond individual scientists, the kinds of studies that get conducted are based on what questions are deemed relevant or not by dominant societal norms.

For example, in my Ph.D. work in neuroscience, I saw how different assumptions about hierarchy could influence specific experiments and even the entire field. Neuroscience focuses on what is called the central nervous system. The name itself describes a hierarchical model, with one part of the body “in charge” of the rest. Even within the central nervous system, there was a conceptual hierarchy with the brain controlling the spinal cord.

My research looked more at what happened peripherally in muscles, but the predominant model had the brain at the top. The taken-for-granted idea that a system needs a boss mirrors cultural assumptions. But I realized we could have analyzed the system differently and asked different questions. Instead of putting the brain at the top, a different model could focus on how the entire system communicates and coordinates as a whole.

Every experiment also has assumptions baked in – things that are taken for granted, including definitions. Scientific experiments can become self-fulfilling prophecies.

For example, billions of dollars have been spent on trying to delineate sex differences. However, the definition of male and female is almost never stated in these research papers. At the same time, evidence mounts that these binary categories are a modern invention not based on clear physical differences.

But the categories are tested so many times that some differences are eventually discovered by chance, since the separate results are never combined into a single statistical model. Oftentimes, so-called negative findings that don’t identify a significant difference are not even reported. Sometimes, meta-analyses based on multiple studies that investigated the same question reveal these statistical errors, as in the search for sex-related brain differences. Similar patterns of slippery definitions that end up reinforcing taken-for-granted assumptions happen with race, sexuality and other socially created categories of difference.

Finally, the end results of experiments can be interpreted in many different ways, adding another point where cultural values are injected into the final scientific conclusions.

Settling on science when there’s no objectivity

Vaccines. Abortion. Climate change. Sex categories. Science is at the center of most of today’s hottest political debates. While there is much disagreement, the desire to separate politics and science seems to be shared. On both sides of the political divide, there are accusations that the other side’s scientists cannot be trusted because of political bias.

It can be easier to spot built-in bias in scientific perspectives that conflict with your own values.
Jim Watson/AFP via Getty Images

Consider the recent controversy over the U.S. Centers for Disease Control and Prevention’s vaccine advisory panel. Secretary of Health and Human Services Robert F. Kennedy Jr. fired all members of the Advisory Committee on Immunization Practices, saying they were biased, while some Democratic lawmakers argued back that his move put in place those who would be biased in pushing his vaccine-skeptical agenda.

If removing all bias is impossible, then, how do people create knowledge that can be trusted?

The understanding that all knowledge is created through cultural processes does allow for two or more differing truths to coexist. You see this reality in action around many of today’s most controversial subjects. However, this does not mean you must believe all truths equally – that’s called total cultural relativism. This perspective ignores the need for people to come to decisions together about truth and reality.

Instead, critical scholars offer democratic processes for people to determine which values are important and for what purposes knowledge should be developed. For example, some of my work has focused on expanding a 1970s Dutch model of the science shop, where community groups come to university settings to share their concerns and needs to help determine research agendas. Other researchers have documented other collaborative practices between scientists and marginalized communities or policy changes, including processes for more interdisciplinary or democratic input, or both.

I argue a more accurate view of science is that pure objectivity is impossible. Once you leave the myth of objectivity behind, though, the way forward is not simple. Instead of a belief in an all-knowing science, we are faced with the reality that humans are responsible for what is researched, how it is researched and what conclusions are drawn from such research.

With this knowledge, we have the opportunity to intentionally set societal values that inform scientific investigations. This requires decisions about how people come to agreements about these values. These agreements need not always be universal but instead can be dependent on the context of who and what a given study might affect. While not simple, using these insights, gained over decades of studying science from both within and outside, may force a more honest conversation between political positions.

The Conversation

Sara Giordano does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Scientific objectivity is a myth – cultural values and beliefs always influence science and the people who do it – https://theconversation.com/scientific-objectivity-is-a-myth-cultural-values-and-beliefs-always-influence-science-and-the-people-who-do-it-259137

How RFK Jr.’s misguided science on mRNA vaccines is shaping policy − a vaccine expert examines the false claims

Source: The Conversation – USA (3) – By Deborah Fuller, Professor of Microbiology, School of Medicine, University of Washington

RFK Jr. canceled $500 million of funding for research on mRNA vaccine technology. Anadolu/Getty Images

On Sept. 4, 2025, Health and Human Services Secretary Robert F. Kennedy Jr. is scheduled to testify before the Senate Finance Committee, where he is expected to face questions about his vaccine policies.

A few days prior, on Sept. 1, 2025, President Donald Trump demanded that pharmaceutical companies prove that COVID-19 mRNA vaccines work, saying that the CDC was “being ripped apart over this question.” It was his first public acknowledgment of the chaos roiling the Centers for Disease Control and Prevention amid the firing of CDC Director Susan Monarez and the subsequent resignations of four high-level agency officials.

Meanwhile, public health experts and HHS staffers are calling for Kennedy to be fired.

The turmoil comes about a month after HHS announced US$500 million in funding cuts for 22 research contracts on mRNA vaccine technology. The agency said it will instead pour these funds into research on a traditional approach to designing vaccines that was first used more than 200 years ago. With such vaccines, called whole-virus vaccines, a person’s immune system is presented with the whole virus, often in weakened or inactivated form. This switcheroo has puzzled many scientists.

As a vaccinologist who has studied and developed vaccines for over 35 years, I see that the science behind mRNA vaccine technology is being widely misstated. This incorrect information is shaping long-term health policy in the U.S. – which makes it urgent to correct the record.

Are mRNA vaccines less safe than whole-virus vaccines?

HHS defended its cancellation of mRNA vaccine research based, in part, on a non-peer-reviewed compilation of selected publications called the COVID-19 mRNA “vaccine” harms research collection. This document lists about 750 articles claimed to describe harms caused by mRNA vaccines against COVID-19. However, the vast majority of these articles aren’t about vaccines but about the harms of getting infected with SARS-CoV-2, the virus that causes COVID-19. And notably absent from it is the huge body of data showing mRNA vaccines actually prevent these harms.

Spike proteins on SARS-CoV-2 can cause tissue damage – and although mRNA vaccines produce them in small amounts, they prevent the virus from replicating to produce them in large amounts.
https://www.scientificanimations.com/wiki-images/, CC BY-SA

For example, the document being used to justify RFK Jr.’s claims about mRNA vaccines highlights 375 studies reporting that the virus’s spike protein alone, which is produced when the virus replicates, can cause excessive inflammation and tissue damage. This is true. But the document marshals this evidence to support the claim that mRNA vaccines, which are designed to produce spike proteins, cause the same harm – which is not accurate.

While viral replication results in uncontrolled production of large amounts of the protein, the way it’s produced by the mRNA vaccine is very different. The vaccine produces a small, controlled amount of spike protein inside a few cells – just enough to induce an immune response without causing damage. And by blocking the virus’s replication, it reduces the amount of spike protein in circulation, actually having the opposite effect.

What about side effects like myocarditis?

Early reports flagged a type of heart swelling called myocarditis as a rare side effect of the mRNA vaccine, particularly for young men ages 18 to 25 after a booster dose. A 2024 review identified about 20 cases out of 1 million people who received the vaccine. However, that same study found that unvaccinated people had an elevenfold higher risk of getting myocarditis after a COVID-19 infection than vaccinated people.

What’s more, another 2024 study showed that people who developed myocarditis after vaccination had fewer complications than those who developed the condition after getting infected with COVID-19.

Do mRNA vaccines make the SARS-CoV-2 virus resistant?

Another claim from the compilation of supposed mRNA vaccine harms that was cited as a reason for cutting funding for mRNA technology is that mRNA vaccines cause mutations in the SARS-CoV-2 virus that make them resistant or less susceptible to the vaccine.

When a virus replicates in its host, it produces millions of copies of its genetic material. Mutations are copying errors that occur naturally during the replication process. These acquired mutations produce new variants, which is why both the COVID-19 mRNA vaccine and the whole-virus flu vaccine get updated annually – to keep up with natural changes in the virus.

Slowing down viral replication decreases the rate at which a virus can acquire new mutations. Since both mRNA and whole-virus vaccines stop or slow the virus from replicating, both types of vaccines help reduce the emergence of resistant viruses.

Viruses can mutate to escape from antibodies, but the mRNA vaccines are not causing the emergence of more virulent strains, likely for at least two reasons. First, mRNA vaccines induce immune responses that can attack the virus at multiple spots, so it would have to come up with many mutations at once to escape the vaccine’s defenses. Second, even if the virus could acquire all these mutations, they would likely weaken it, making it unable to cause or even transmit disease.

mRNA vaccines versus new SARS-CoV-2 variants

Kennedy, in announcing cuts to mRNA vaccine research on Aug. 5, 2025, claimed that mRNA vaccines don’t work against respiratory viruses and that HHS was moving toward “safer, broader vaccine platforms that remain effective even as viruses mutate.”

Both whole-virus vaccines and mRNA vaccines protected against COVID-19 and prevented hospitalization and death for millions of people worldwide between 2020 and 2024, but there’s clear evidence that the mRNA-based vaccines provided significantly better protection than whole-virus vaccines. And for COVID-19, mRNA vaccines are more effective against new variants, which emerge as viruses mutate, than whole-virus vaccines.


The COVID-19 mRNA vaccines started with exceptionally high efficacy, exceeding 94%. When the SARS-CoV-2 delta and omicron variants emerged in the spring and fall of 2021, mRNA vaccines became less effective in preventing infections. However, they remained highly effective in preventing severe illness, whereas in unvaccinated people the rates of severe illness and hospitalization remained high.

This is because mRNA vaccines induce the immune system to make both antibodies and specialized immune cells called T cells. These elements can recognize multiple parts of the virus, including ones that don’t change, enabling significant protection against new variants.

What’s more, the mRNA vaccines have a superpower that no other type of vaccine can currently match: They can be quickly updated and manufactured within two to three months. To develop a whole-virus vaccine, researchers must first spend months isolating and propagating the virus. Conversely, making an mRNA vaccine requires just sequencing the virus’s genetic code – a process that today takes just hours.

If a new pandemic began today, mRNA vaccines are currently the only type of vaccine that could be developed quickly enough to disrupt its spread.

The future of mRNA vaccine technologies

Thirty years ago, when scientists first started developing mRNA vaccine technology, they recognized its potential to overcome major limitations of whole-virus vaccines – namely, slow production time and more limited ability to protect from new viral variants. Today, mRNA vaccines are also being developed to prevent or treat diseases including HIV and cancer, as well as autoimmune and genetic diseases.

Of course, this technology can be further improved. New mRNA vaccine technologies are aimed, among other things, at making mRNA vaccines easier to store to allow for faster distribution and reduce their short-term side effects, eliminate the rare risk of myocarditis and more quickly block a respiratory infection.

The National Institutes of Health is funneling money away from new mRNA technologies toward a single project developing universal vaccines based on traditional whole-virus vaccine technology. Universal vaccines are urgently needed to provide broader protection against ever-changing respiratory viruses, such as influenza, that are major pandemic threats.

A 2022 study in mice and ferrets showed that a universal flu vaccine NIH plans to support has promise. However, multiple studies of potential universal flu vaccines based on mRNA technology show even more potential. Such vaccines could induce broader immunity than whole-virus vaccines by eliciting antibody and T-cell responses that target an even wider range of flu viruses.

It’s hard to square those benefits with the fact that HHS and NIH have named the planned new universal vaccine platform “Generation Gold Standard,” insisting that it represents a new standard in science and transparency. The effort seems more akin to eliminating all e-bike technology and telling everyone who seeks one to get by with a single brand of a 10-speed bike: Getting to the intended destination may still be possible, but it will be slower and harder.

And in the case of abandoning mRNA vaccine research, it may lead to lives needlessly lost, whether due to potential medicines untapped or to pandemic unpreparedness.

The Conversation

Deborah Fuller receives funding from the National Institutes of Health. She is co-founder and a scientific advisor for two biotech companies developing nucleic acid vaccine technologies that are not based on mRNA.

ref. How RFK Jr.’s misguided science on mRNA vaccines is shaping policy − a vaccine expert examines the false claims – https://theconversation.com/how-rfk-jr-s-misguided-science-on-mrna-vaccines-is-shaping-policy-a-vaccine-expert-examines-the-false-claims-263027

China’s electric vehicle influence expands nearly everywhere – except the US and Canada

Source: The Conversation – USA (2) – By Jack Barkenbus, Visiting Scholar, Vanderbilt University

BYD electric cars wait at a Chinese port to be loaded onto the automobile carrier BYD Shenzhen, which was slated to sail to Brazil. STR/AFP via Getty Images

In 2025, 1 in 4 new vehicles sold globally is expected to be an electric vehicle – either fully electric or a plug-in hybrid.

That is a significant rise from just five years ago, when EV sales amounted to fewer than 1 in 20 new car sales, according to the International Energy Agency, an intergovernmental organization examining energy use around the world.

In the U.S., however, EV sales have lagged, only reaching 1 in 10 in 2024. By contrast, in China, the world’s largest car market, more than half of all new vehicle sales are electric.

The International Energy Agency has reported that two-thirds of fully electric cars in China are now cheaper to buy than their gasoline equivalents. With operating and maintenance costs already cheaper than gasoline models, EVs are attractive purchases.

Most EVs purchased in China are made there as well, by a range of different companies. NIO, Xpeng, Xiaomi, Zeekr, Geely, Chery, Great Wall Motor, Leapmotor and especially BYD are household names in China. As someone who has followed and published on the topic of EVs for over 15 years, I expect they will soon become as widely known in the rest of the world.

What kinds of EVs is China producing?

China’s automakers are producing a full range of electric vehicles, from the subcompact, like the BYD Seagull, to full-size SUVs, like the Xpeng G9, and luxury cars, like the Zeekr 009.

Recent European crash-test evaluations have given top safety ratings to Chinese EVs, and many of them cost less than similar models made by other companies in other countries.

A Wall Street Journal video explores a Chinese ‘dark factory’ – one so automated that it doesn’t need lights inside.

What’s behind Chinese EV success?

There are several factors behind Chinese companies’ success in producing and selling EVs. To be sure, relatively low labor costs are part of the explanation. So are generous government subsidies, as EVs were one of several advanced technologies selected by the Chinese government to propel the nation’s global technological profile.

But Chinese EV makers are also making other advances. They make significant use of industrial robotics, even to the point of building so-called “dark factories” that can operate with minimal human intervention. For passengers, they have reimagined vehicles’ interiors, with large touchscreens for information and entertainment, and even added a refrigerator, bed or karaoke system.

Competition among Chinese EV makers is fierce, which drives additional innovation. BYD is the largest seller of EVs, both domestically and globally. Yet the company says it employs over 100,000 scientists and engineers seeking continual improvement.

From initial concept models to actual rollout of factory-made cars, BYD takes 18 months – half as long as U.S. and other global automakers take for their product development processes, Reuters reported.

BYD is also the world’s second-largest EV battery seller and has developed a new battery that can recharge in just five minutes, roughly the same time it takes to fill a gas-powered car’s tank.

An Xpeng M03, whose base model costs about US$17,000, is displayed at a car show in Shanghai in April 2025.
VCG/VCG via Getty Images

Exports

The real test of how well Chinese vehicles appeal to consumers will come from export sales. Chinese EV manufacturers are eager to sell abroad because their factories can produce far more than the 25 million vehicles they can sell within China each year – perhaps twice as much.

China already exports more cars than any other nation, though primarily gas-powered ones at the moment. Export markets for Chinese EVs are developing in Western Europe, Southeast Asia, Latin America, Australia and elsewhere.

The largest market where Chinese vehicles, whether gasoline or electric, are not being sold is North America. Both the U.S. and Canadian governments have created what some have called a “tariff fortress” protecting their domestic automakers, by imposing tariffs of 100% on the import of Chinese EVs – literally doubling their cost to consumers.

Customers’ budgets matter too. The average price of a new electric vehicle in the U.S. is approximately $55,000. Less expensive models pull that average down, but without tax credits – which the Trump administration is eliminating after September 2025 – no U.S. offering comes close to $25,000. By contrast, Chinese companies produce several sub-$25,000 EVs, no tax credits required, including the Xpeng M03, the BYD Dolphin and the MG4. If sold in the U.S., however, the 100% tariffs would erase that price advantage.
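The arithmetic is stark. Taking the Xpeng M03’s roughly $17,000 base price from the caption above:

    \text{U.S. price} = \text{base price} \times (1 + \text{tariff rate}) = \$17{,}000 \times (1 + 1.00) = \$34{,}000.

A 100% tariff pushes even the cheapest Chinese EVs well past the $25,000 mark.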

Tesla, Ford and General Motors all claim they are working on inexpensive EVs. More expensive vehicles, however, generate higher profits, and with the protection of the “tariff fortress,” their incentive to develop cheaper EVs is not as high as it might be.

In the 1970s and 1980s, there was considerable U.S. opposition to importing Japanese vehicles. But ultimately, a combination of consumer sentiment and the willingness of Japanese companies to open factories in the U.S. overcame that opposition, and Japanese brands like Toyota, Honda and Nissan are common on North American roads. The same process may play out for Chinese automakers, though it’s not clear how long that might take.

The Conversation

Jack Barkenbus does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. China’s electric vehicle influence expands nearly everywhere – except the US and Canada – https://theconversation.com/chinas-electric-vehicle-influence-expands-nearly-everywhere-except-the-us-and-canada-262459

AI’s ballooning energy consumption puts spotlight on data center efficiency

Source: The Conversation – USA (2) – By Divya Mahajan, Assistant Professor of Computer Engineering, Georgia Institute of Technology

These ‘chillers’ on the roof of a data center in Germany, seen from above, work to cool the equipment inside the building. AP Photo/Michael Probst

Artificial intelligence is growing fast, and so is the number of computers that power it. Behind the scenes, this rapid growth is putting a huge strain on the data centers that run AI models. These facilities are using more energy than ever.

AI models are getting larger and more complex. Today’s most advanced systems have billions of parameters, the numerical values derived from training data, and run across thousands of computer chips. To keep up, companies have responded by adding more hardware – more chips, more memory and more powerful networks. This brute-force approach has helped AI make big leaps, but it has also created a new challenge: Data centers are becoming energy-hungry giants.

Some tech companies are responding by looking to power data centers on their own with fossil fuel and nuclear power plants. AI energy demand has also spurred efforts to make more efficient computer chips.

I’m a computer engineer and a professor at Georgia Tech who specializes in high-performance computing. I see another path to curbing AI’s energy appetite: Make data centers more resource aware and efficient.

Energy and heat

Modern AI data centers can use as much electricity as a small city. And it’s not just the computing that eats up power. Memory and cooling systems are major contributors, too. As AI models grow, they need more storage and faster access to data, which generates more heat. Also, as the chips become more powerful, removing heat becomes a central challenge.

Small blue and green lights arranged in columns glow behind black mesh screens
Data centers house thousands of interconnected computers.
Alberto Ortega/Europa Press via Getty Images

Cooling isn’t just a technical detail; it’s a major part of the energy bill. Traditional cooling is done with specialized air conditioning systems that remove heat from server racks. New methods like liquid cooling are helping, but they also require careful planning and water management. Without smarter solutions, the energy requirements and costs of AI could become unsustainable.

Even with all this advanced equipment, many data centers aren’t running efficiently. That’s because different parts of the system don’t always talk to each other. For example, scheduling software might not know that a chip is overheating or that a network connection is clogged. As a result, some servers sit idle while others struggle to keep up. This lack of coordination can lead to wasted energy and underused resources.
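To see what that coordination gap means in practice, consider a minimal sketch, in Python, of a scheduler that checks live telemetry before placing work. The telemetry fields, thresholds and device names here are invented for illustration; they are not any real data center’s API:

    # Hypothetical sketch: place jobs using live telemetry instead of
    # assuming every accelerator is identical, cool and uncongested.
    from dataclasses import dataclass

    @dataclass
    class Accelerator:
        name: str
        temp_c: float      # current die temperature
        queue_depth: int   # jobs already waiting on this chip
        link_busy: float   # fraction of network link in use, 0.0-1.0

    def pick_accelerator(chips, max_temp_c=85.0, max_link_busy=0.9):
        """Prefer cool, lightly loaded chips with free network capacity."""
        healthy = [c for c in chips
                   if c.temp_c < max_temp_c and c.link_busy < max_link_busy]
        if not healthy:
            return None  # back off rather than pile onto a throttling chip
        # Fewest queued jobs first; break ties by coolest chip.
        return min(healthy, key=lambda c: (c.queue_depth, c.temp_c))

    chips = [
        Accelerator("gpu0", temp_c=88.0, queue_depth=1, link_busy=0.40),  # too hot
        Accelerator("gpu1", temp_c=62.0, queue_depth=5, link_busy=0.20),
        Accelerator("gpu2", temp_c=60.0, queue_depth=2, link_busy=0.95),  # congested
        Accelerator("gpu3", temp_c=65.0, queue_depth=2, link_busy=0.30),
    ]
    print(pick_accelerator(chips).name)  # gpu3: cool, short queue, free link

A scheduler that cannot see temperature or network load would happily send the next job to gpu0 or gpu2; one that can see them avoids the waste.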

A smarter way forward

Addressing this challenge requires rethinking how to design and manage the systems that support AI. That means moving away from brute-force scaling and toward smarter, more specialized infrastructure.

Here are three key ideas:

Address variability in hardware. Not all chips are the same. Even within a single generation, chips vary in how fast they run, how much heat they can tolerate and how much energy they draw. Data center systems should recognize those differences and schedule work accordingly.

Adapt to changing conditions. AI workloads vary over time. For instance, thermal hotspots can force chips to slow down, a fluctuating grid supply can cap the peak power a center can draw, and bursts of data between chips can congest the network that connects them. Systems should be designed to respond in real time to things like temperature, power availability and data traffic, as the sketch after this list illustrates.

How data center cooling works.

Break down silos. Engineers who design chips, software and data centers should work together. When these teams collaborate, they can find new ways to save energy and improve performance. To that end, my colleagues, students and I at Georgia Tech’s AI Makerspace, a high-performance AI data center, are exploring these challenges hands-on. We’re working across disciplines, from hardware to software to energy systems, to build and test AI systems that are efficient, scalable and sustainable.
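As a minimal sketch of that kind of real-time adaptation, again in Python, here is a feedback rule that adjusts a chip’s power cap as temperature and grid limits change. The targets and step sizes are illustrative assumptions, not measurements from any production system:

    # Hypothetical sketch: adapt a chip's power cap to changing conditions.
    def next_power_cap(cap_w, temp_c, grid_limit_w,
                       temp_target_c=80.0, step_w=25.0):
        """Simple feedback: cool hot chips, reclaim headroom, honor the grid."""
        if temp_c > temp_target_c:
            cap_w -= step_w              # hotspot: shed power to cool down
        elif temp_c < temp_target_c - 10.0:
            cap_w += step_w              # headroom: claw performance back
        return max(100.0, min(cap_w, grid_limit_w))

    cap = 400.0
    for temp, grid in [(85.0, 450.0), (82.0, 450.0), (76.0, 300.0), (65.0, 450.0)]:
        cap = next_power_cap(cap, temp, grid)
        print(f"temp={temp}C grid={grid}W -> cap={cap}W")

A production controller would weigh many more signals, but the structure is the same: observe, adjust, clamp to the current limit.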

Scaling with intelligence

AI has the potential to transform science, medicine, education and more, but risks hitting limits on performance, energy and cost. The future of AI depends not only on better models, but also on better infrastructure.

To keep AI growing in a way that benefits society, I believe it’s important to shift from scaling by force to scaling with intelligence.

The Conversation

Divya Mahajan owns shares in Google, AMD, Microsoft, and Nvidia. She receives funding from Google and AMD.

ref. AI’s ballooning energy consumption puts spotlight on data center efficiency – https://theconversation.com/ais-ballooning-energy-consumption-puts-spotlight-on-data-center-efficiency-254192

AI is transforming weather forecasting − and that could be a game changer for farmers around the world

Source: The Conversation – USA (2) – By Paul Winters, Professor of Sustainable Development, University of Notre Dame

Weather forecasts help farmers figure out when to plant, where to use fertilizer and much more. Maitreya Shah/Studio India

For farmers, every planting decision carries risks, and many of those risks are increasing with climate change. One of the most consequential is weather, which can damage crop yields and livelihoods. A delayed monsoon, for example, can force a rice farmer in South Asia to replant or switch crops altogether, losing both time and income.

Access to reliable, timely weather forecasts can help farmers prepare for the weeks ahead, find the best time to plant or determine how much fertilizer will be needed, resulting in better crop yields and lower costs.

Yet, in many low- and middle-income countries, accurate weather forecasts remain out of reach, limited by the high technology costs and infrastructure demands of traditional forecasting models.

A new wave of AI-powered weather forecasting models has the potential to change that.

A farmer in a field holds a dried out corn stalk.
A farmer holds dried-up maize stalks in his field in Zimbabwe on March 22, 2024. A drought had caused widespread water shortages and crop failures.
AP Photo/Tsvangirayi Mukwazhi

By using artificial intelligence, these models can deliver accurate, localized predictions at a fraction of the computational cost of conventional physics-based models. This makes it possible for national meteorological agencies in developing countries to provide farmers with the timely, localized information about changing rainfall patterns that the farmers need.

The challenge is getting this technology where it’s needed.

Why AI forecasting matters now

The physics-based weather prediction models used by major meteorological centers around the world are powerful but costly. They simulate atmospheric physics to forecast coming weather conditions, but they require expensive computing infrastructure. The cost puts them out of reach for most developing countries.

Moreover, these models have mainly been developed by and optimized for northern countries. They tend to focus on temperate, high-income regions and pay less attention to the tropics, where many low- and middle-income countries are located.

A major shift in weather models began in 2022 as industry and university researchers developed deep learning models that could generate accurate short- and medium-range forecasts for locations around the globe up to two weeks ahead.

These models worked at speeds several orders of magnitude faster than physics-based models, and they could run on laptops instead of supercomputers. Newer models, such as Pangu-Weather and GraphCast, have matched or even outperformed leading physics-based systems for some predictions, such as temperature.

A woman in a red sari tosses pellets into a rice field.
A farmer distributes fertilizer in India.
EqualStock IN from Pexels

AI-driven models require dramatically less computing power than the traditional systems.

While physics-based systems may need thousands of CPU hours to run a single forecast cycle, a trained AI model can produce one on a single GPU in minutes. The reason is that the computationally intensive work happens once, up front: during training, the model learns relationships in the climate from data. Afterward, it simply applies those learned relationships to produce a forecast without further heavy computation – a major shortcut. Physics-based models, in contrast, must calculate the physics for each variable at each place and time for every forecast they produce.

While training these models from physics-based model data does require significant upfront investment, once the AI is trained, the model can generate large ensemble forecasts — sets of multiple forecast runs — at a fraction of the computational cost of physics-based models.
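As a rough illustration of why forecasts become so cheap once training is done, here is a toy sketch in Python. It is not the interface of Pangu-Weather, GraphCast or any real model: the “weather state” is a plain vector, the learned update rule is a random stand-in for trained network weights, and an ensemble is just the same cheap loop rerun from slightly perturbed starting states:

    # Toy sketch: after training, forecasting is repeated application of a
    # learned update rule - no physics is solved at forecast time.
    import numpy as np

    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((100, 100))  # stand-in for trained weights

    def learned_step(state):
        """One 6-hour update; a real model would be a deep neural network."""
        return state + np.tanh(W @ state)

    def forecast(initial_state, steps):
        state = initial_state
        for _ in range(steps):       # autoregressive rollout
            state = learned_step(state)
        return state

    initial = rng.standard_normal(100)  # today's analyzed weather state
    # A 10-day forecast at 6-hour steps is just 40 cheap function calls.
    single_run = forecast(initial, steps=40)

    # An ensemble reruns the same loop from slightly perturbed states.
    ensemble = [forecast(initial + 0.01 * rng.standard_normal(100), steps=40)
                for _ in range(50)]
    spread = np.std(ensemble, axis=0)   # a cheap estimate of uncertainty

In a physics-based system, each of those 50 ensemble members would be a separate supercomputer run; here they are 50 passes through the same trained function.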

Even the expensive step of training an AI weather model shows considerable computational savings. One study found that the early model FourCastNet could be trained in about an hour on a supercomputer, making it thousands of times faster to produce a finished forecast than state-of-the-art physics-based models.

The result of all these advances: high-resolution forecasts globally within seconds on a single laptop or desktop computer.

Research is also rapidly advancing to expand the use of AI for forecasts weeks to months ahead, which would help farmers make planting choices. AI models are already being tested for improving extreme weather prediction, such as for extratropical cyclones and abnormal rainfall.

Tailoring forecasts for real-world decisions

While AI weather models offer impressive technical capabilities, they are not plug-and-play solutions. Their impact depends on how well they are calibrated to local weather, benchmarked against real-world agricultural conditions, and aligned with the actual decisions farmers need to make, such as what and when to plant, or when drought is likely.

To unlock its full potential, AI forecasting must be connected to the people whose decisions it’s meant to guide.

That’s why groups such as AIM for Scale, a collaboration we work with as researchers in public policy and sustainability, are helping governments to develop AI tools that meet real-world needs, including training users and tailoring forecasts to farmers’ needs. International development institutions and the World Meteorological Organization are also working to expand access to AI forecasting models in low- and middle-income countries.

A man sells grain in Dawanau International Market in Kano, Nigeria on July 14, 2023.
Many low-income countries in Africa face harsh effects from climate change, from severe droughts to unpredictable rain and flooding. The shocks worsen conflict and upend livelihoods.
AP Photo/Sunday Alamba

AI forecasts can be tailored to context-specific agricultural needs, such as identifying optimal planting windows, predicting dry spells or planning pest management. Disseminating those forecasts through text messages, radio, extension agents or mobile apps can then help reach farmers who can benefit. This is especially true when the messages themselves are constantly tested and improved to ensure they meet the farmers’ needs.

A recent study in India found that when farmers there received more accurate monsoon forecasts, they made more informed decisions about what and how much to plant – or whether to plant at all – resulting in better investment outcomes and reduced risk.

A new era in climate adaptation

AI weather forecasting has reached a pivotal moment. Tools that were experimental just five years ago are now being integrated into government weather forecasting systems. But technology alone won’t change lives.

With support, low- and middle-income countries can build the capacity to generate, evaluate and act on their own forecasts, providing valuable information to farmers that has long been missing in weather services.

The Conversation

Paul Winters receives funding from the Gates Foundation. He is the Executive Director of AIM for Scale.

Amir Jina receives funding from AIM for Scale.

ref. AI is transforming weather forecasting − and that could be a game changer for farmers around the world – https://theconversation.com/ai-is-transforming-weather-forecasting-and-that-could-be-a-game-changer-for-farmers-around-the-world-263030

5 forecasts early climate models got right – the evidence is all around you

Source: The Conversation – USA (2) – By Nadir Jeevanjee, Research Physical Scientist, National Oceanic and Atmospheric Administration

The island nation of Tuvalu is losing land to sea-level rise, and its farms and water supplies are under threat from salt water. Mario Tama/Getty Images

Climate models are complex, just like the world they mirror. They simultaneously simulate the interacting, chaotic flow of Earth’s atmosphere and oceans, and they run on the world’s largest supercomputers.

Critiques of climate science, such as the report written for the Department of Energy by a panel in 2025, often point to this complexity to argue that these models are too uncertain to help us understand present-day warming or tell us anything useful about the future.

But the history of climate science tells a different story.

The earliest climate models made specific forecasts about global warming decades before those forecasts could be proved or disproved. And when the observations came in, the models were right. The forecasts weren’t just predictions of global average warming – they also predicted geographical patterns of warming that we see today.

An older man smiles at the camera with an impish grin.
Syukuro Manabe was awarded the Nobel Prize in physics in 2021.
Johan Nilsson/TT News Agency/AFP

These early predictions starting in the 1960s emanated largely out of a single, somewhat obscure government laboratory outside Princeton, New Jersey: the Geophysical Fluid Dynamics Laboratory. And many of the discoveries bear the fingerprints of one particularly prescient and persistent climate modeler, Syukuro Manabe, who was awarded the 2021 Nobel Prize in physics for his work.

Manabe’s models, based in the physics of the atmosphere and ocean, forecast the world we now see while also drawing a blueprint for today’s climate models and their ability to simulate our large-scale climate. While models have limitations, it is this track record of success that gives us confidence in interpreting the changes we’re seeing now, as well as predicting changes to come.

Forecast No. 1: Global warming from CO2

Manabe’s first assignment in the 1960s at the U.S. Weather Bureau, in a lab that would become the Geophysical Fluid Dynamics Laboratory, was to accurately model the greenhouse effect – to show how greenhouse gases trap radiant heat in Earth’s atmosphere. Since the oceans would freeze over without the greenhouse effect, this was a key first step in building any kind of credible climate model.

To test his calculations, Manabe created a very simple climate model. It represented the global atmosphere as a single column of air and included key components of climate, such as incoming sunlight, convection from thunderstorms, and his greenhouse effect model.

Chart showing temperatures warming at ground level and in the atmosphere as carbon dioxide concentrations rise.
Results from Manabe’s 1967 single-column global warming simulations show that as carbon dioxide (CO2) increases, the surface and lower atmosphere warm, while the stratosphere cools.
Syukuro Manabe and Richard Wetherald, 1967

Despite its simplicity, the model reproduced Earth’s overall climate quite well. Moreover, it showed that doubling carbon dioxide concentrations in the atmosphere would cause the planet to warm by about 5.4 degrees Fahrenheit (3 degrees Celsius).

This estimate of Earth’s climate sensitivity, published in 1967, has remained essentially unchanged in the many decades since and captures the overall magnitude of observed global warming. Right now the world is about halfway to doubling atmospheric carbon dioxide, and the global temperature has warmed by about 2.2 F (1.2 C) – right in the ballpark of what Manabe predicted.
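That ballpark can be checked with a standard back-of-the-envelope relation in which warming scales with the logarithm of the CO2 ratio. Using commonly cited round numbers – a preindustrial concentration near 280 parts per million and roughly 425 ppm today – Manabe’s sensitivity of 3 C implies an eventual warming of about

    \Delta T \approx S \log_2\!\left(\frac{C}{C_0}\right) = 3.0\,^{\circ}\mathrm{C} \times \log_2\!\left(\frac{425}{280}\right) \approx 1.8\,^{\circ}\mathrm{C},

with the observed 1.2 C sitting below that equilibrium figure, as expected for a climate still catching up to its forcing.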

Other greenhouse gases such as methane, as well as the ocean’s delayed response to global warming, also affect temperature rise, but the overall conclusion is unchanged: Manabe got Earth’s climate sensitivity about right.

Forecast No. 2: Stratospheric cooling

The surface and lower atmosphere in Manabe’s single-column model warmed as carbon dioxide concentrations rose, but in what was a surprise at the time, the model’s stratosphere actually cooled.

Temperatures in this upper region of the atmosphere, between roughly 7.5 and 31 miles (12 and 50 km) in altitude, are governed by a delicate balance between the absorption of ultraviolet sunlight by ozone and release of radiant heat by carbon dioxide. Increase the carbon dioxide, and the atmosphere traps more radiant heat near the surface but actually releases more radiant heat from the stratosphere, causing it to cool.

Heat map shows cooling in the stratosphere.
The stratosphere, starting 10 to 15 kilometers above the surface and extending to an altitude of 50 kilometers, has been cooling over the past 20 years at all latitudes while the atmosphere beneath it has warmed.
IPCC 6th Assessment Report

This cooling of the stratosphere has been detected over decades of satellite measurements and is a distinctive fingerprint of carbon dioxide-driven warming, as warming from other causes such as changes in sunlight or El Niño cycles do not yield stratospheric cooling.

Forecast No. 3: Arctic amplification

Manabe used his single-column model as the basis for a prototype quasi-global model, which simulated only a fraction of the globe. It also simulated only the upper 100 meters or so of the ocean and neglected the effects of ocean currents.

In 1975, Manabe published global warming simulations with this quasi-global model and again found stratospheric cooling. But he also made a new discovery – that the Arctic warms significantly more than the rest of the globe, by a factor of two to three times.

Map shows the Arctic warming much faster than the rest of the planet.

Map from IPCC 6th Assessment Report

This “Arctic amplification” turns out to be a robust feature of global warming, occurring in present-day observations and subsequent simulations. A warming Arctic furthermore means a decline in Arctic sea ice, which has become one of the most visible and dramatic indicators of a changing climate.

Forecast No. 4: Land-ocean contrast

In the early 1970s, Manabe was also working to couple his atmospheric model to a first-of-its-kind dynamical model of the full world ocean built by oceanographer Kirk Bryan.

Around 1990, Manabe and Bryan used this coupled atmosphere-ocean model to simulate global warming over realistic continental geography, including the effects of the full ocean circulation. This led to a slew of insights, including the observation that land generally warms more than ocean, by a factor of about 1.5.

As with Arctic amplification, this land-ocean contrast can be seen in observed warming. It can also be explained from basic scientific principles and is roughly analogous to the way a dry surface, such as pavement, warms more than a moist surface, such as soil, on a hot, sunny day.

The contrast has consequences for land-dwellers like ourselves, as every degree of global warming will be amplified over land.

Forecast No. 5: Delayed Southern Ocean warming

Perhaps the biggest surprise from Manabe’s models came from a region most of us rarely think about: the Southern Ocean.

This vast, remote body of water encircles Antarctica and has strong eastward winds whipping across it unimpeded, due to the absence of land masses in the southern midlatitudes. These winds continually draw up deep ocean waters to the surface.

An illustration shows how ocean upwelling works
Winds around Antarctica contribute to upwelling of cold deep water that keeps the Southern Ocean cool while also raising nutrients to the surface waters.
NOAA

Manabe and colleagues found that the Southern Ocean warmed very slowly when atmospheric carbon dioxide concentrations increased because the surface waters were continually being replenished by these upwelling abyssal waters, which hadn’t yet warmed.

This delayed Southern Ocean warming is also visible in the temperature observations.

What does all this add up to?

Looking back on Manabe’s work more than half a century later, it’s clear that even early climate models captured the broad strokes of global warming.

Manabe’s models simulated these patterns decades before they were observed: Arctic amplification was simulated in 1975 but only observed with confidence in 2009, while stratospheric cooling was simulated in 1967 but definitively observed only recently.

Climate models have their limitations, of course. For instance, they cannot predict regional climate change as well as people would like. But the fact that climate science, like any field, has significant unknowns should not blind us to what we do know.

The Conversation

Nadir Jeevanjee works for NOAA’s Geophysical Fluid Dynamics Laboratory, which is discussed in this article. The views expressed herein are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.

ref. 5 forecasts early climate models got right – the evidence is all around you – https://theconversation.com/5-forecasts-early-climate-models-got-right-the-evidence-is-all-around-you-263248