A more complete Latin American history, including centuries of US influence, helps students understand the complexities surrounding Nicolás Maduro’s arrest

Source: The Conversation – USA (2) – By Lightning Jay, Assistant Professor of Teaching, Learning and Educational Leadership, Binghamton University, State University of New York

A woman shows a portrait of ousted Venezuelan President Nicolás Maduro during a demonstration in Caracas on Jan. 21, 2026. Pedro Mattey/AFP via Getty Images

Many of our college freshmen will have seen and read about the Jan. 3, 2026, U.S. military operation in Venezuela that culminated in the arrest of its leader, Nicolás Maduro, and his wife, Cilia Flores. The U.S. has charged Maduro and Flores with conspiracy and drug trafficking. The two are imprisoned in New York City, awaiting trial.

Some freshmen this semester will likely say Maduro’s unusual arrest violates international law. Others may view it as a decisive step in the U.S.’s fight against narco-terrorism.

That’s in part because the U.S. has no national curriculum, and high school history courses often rely on teachers’ discretion, even more so than other content areas. As a result, history is taught in many different ways across schools.

As scholars of Latin American history and history education in the U.S., we know that most American high school students learn about the ancient civilizations in Latin America and a few other key flash points in history.

But few, we suspect, will understand Maduro’s arrest as part of a long history of the U.S.’s interventions in Latin America, stretching back to the Monroe Doctrine in the 1800s. President James Monroe introduced this foreign policy in an 1823 speech, saying that the U.S. would not allow European colonization or interference in the Western Hemisphere.

Nicolás Maduro and his wife, Cilia Flores, are seen in handcuffs after landing at a Manhattan helipad on Jan. 5, 2026.
XNY/Star Max/Contributor via Getty Images

A partial, skewed history

In high school world history courses, teachers in the U.S. often rely on case studies and examples to indicate historical trends.

High school students are likely to learn about the Inca, Maya and Aztec civilizations as representatives of pre-Columbian Latin America. They read about Spanish conquistadors such as Hernán Cortés, who overthrew the Aztec empire, and Francisco Pizarro, who conquered the Incas in the early 1500s.

They will learn about how most Latin American countries, including Mexico, Argentina, Colombia and Guatemala, gained independence in the early 1800s.

Often, students learn about these countries’ fights for independence through the case example of the Haitian Revolution. They may learn about Simón Bolívar, the celebrated Venezuelan military officer and liberator who played a decisive role in the independence movements of countries including Venezuela, Colombia and Bolivia.

Students also often learn about more recent eras, including the Cuban missile crisis, a dangerous tipping point between the U.S. and the Soviet Union that brought the world close to nuclear war in 1962.

But overall, in U.S. history courses the U.S. is typically the main character and Latin America is treated as a place where the U.S. exerts power.

One example of this narrative is the U.S.’s failed attempt to overthrow the Cuban government in 1961, during the Bay of Pigs invasion.

What US high school students miss

It is no surprise that students who learned this version of Latin American history in high school would have many questions about Maduro’s recent arrest – including who the longtime leader is.

A fuller exposure to Latin American history would include, among other things, lessons about neoliberal capitalism, which has long shaped the politics, economies and societies of Latin America. This is a U.S. government-supported policy that promotes less internal government intervention and more free-market capitalism.

Even though most Latin American countries achieved independence just 30 to 40 years after the U.S., not all presidential administrations in the U.S. fully accepted these nations’ freedom.

In 1904, Theodore Roosevelt added a supplement, known as a corollary, to the Monroe Doctrine, stating that the U.S. could intervene in the internal affairs of any Latin American country in cases of wrongdoing.

By the late 1800s, the U.S. had conquered more than half of Mexico’s territory and annexed Puerto Rico. It also began occupying Cuba in 1898, after Spain lost the Spanish-American War and control over the island.

The U.S. then backed, militarily and politically, a 1903 revolution that gave Panama independence from Colombia. Panama’s independence led to a treaty that let the U.S. build and control the Panama Canal for nearly a century.

A political cartoon from 1898 criticizing American foreign policy shows Uncle Sam riding a bicycle with globes of the western and eastern hemispheres for wheels.
Bettmann/Contributor via Getty Images

A strong influence

Overall, the U.S. intervened in Latin America more than 40 times from 1898 to the mid-1990s.

Some of these interventions involved coups against democratically elected officials – including Jacobo Árbenz Guzmán in Guatemala in 1954 and Salvador Allende in Chile in 1973. These coups often led to civil wars or enduring military regimes that the U.S. claimed were necessary to fight the spread of communism.

Chile was then among the countries – including Argentina and Uruguay – that implemented economic policies in the 1970s that kept markets open to foreign businesses and governments, fostering dependence on wealthier nations.

Some Latin American countries, including Mexico and Brazil, struggled financially in the 1990s.

The U.S. and international financial institutions gave conditional loans that promoted austerity – meaning raising taxes and cutting public spending – and market liberalization, which reduces governmental restrictions over an economy. These loans stabilized some economies in the short term, but also made other problems, such as inequality and debt, worse.

In the early 2000s, several countries, including Brazil, Ecuador and Bolivia, elected left-leaning leaders who advocated for alternatives to this U.S.-backed economic policy. Ultimately, though, their reforms were often limited and not politically stable.

A more complete history

During a Jan. 4, 2026, press conference, President Donald Trump used a new term, the “Donroe Doctrine,” to describe his administration’s plans to claim dominance in the Western Hemisphere.

One day later, Vice President JD Vance doubled down: “This is in our neighborhood,” he said in an interview about Maduro’s capture. “In our neighborhood, the United States calls the shots. That’s the way it has always been. That’s the way it is again under the president’s leadership.”

Learning a more complete version of Latin American history in high school won’t prevent our college students from bringing questions to class about the U.S.’s capture of Maduro, and why Trump has said the U.S. will “run” Venezuela.

But this knowledge might help our students ask more complex, nuanced questions, such as whom national security strategies actually benefit the most.

Understanding Latin America is not merely a requirement for interpreting headlines about Venezuela but a prerequisite for Americans to understand themselves and their place in the world.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. A more complete Latin American history, including centuries of US influence, helps students understand the complexities surrounding Nicolás Maduro’s arrest – https://theconversation.com/a-more-complete-latin-american-history-including-centuries-of-us-influence-helps-students-understand-the-complexities-surrounding-nicolas-maduros-arrest-272984

US hospitality and tourism professors don’t reflect the diversity of the industry they serve

Source: The Conversation – USA (2) – By Michael D. Caligiuri, Assistant Professor of Organizational Behavior, California State Polytechnic University, Pomona

Tourists are diverse. Are tourism professors? Grant Baldwin/Getty Images

White and male professors continue to dominate U.S. hospitality and tourism education programs, our new research has found, even as the industry is growing increasingly diverse. This imbalance raises questions about who shapes the future of hospitality and whose voices are left out of the conversation.

Our analysis of 862 faculty members across 57 of the top U.S. college hospitality programs found that nearly three-quarters of these professors were white, and more than half were male. White men alone represented 43.5% of all faculty, showing persistent overrepresentation.

By comparison, only 3.7% of faculty identified as Black, far below the 14.4% share of the U.S. population that identifies as Black. Asian faculty accounted for 22.5% – significantly more than the Asian share of the U.S. population, with slightly more Asian women than men represented.

Because publicly available data did not allow us to reliably identify faculty from Hispanic or Indigenous backgrounds, our analysis focuses on representation among Black and Asian professors.

Our findings are based on a review of online faculty directories for every U.S. hospitality and tourism program included in the Academic Ranking of World Universities for 2020. We coded each faculty member by gender, race and academic rank using publicly available information gathered through university websites, LinkedIn and other professional profiles.

While this approach cannot capture the full complexity of individual identity, it reflects how representation is typically perceived by students and prospective faculty. For example, when a student browses a university’s website or sits in a classroom, they notice who looks like them and who does not.

Our results point to a stark imbalance. The people teaching, researching and preparing the next generation of hospitality leaders do not mirror the demographics of either the workforce or the student population.

Despite growing institutional attention to fairness and belonging across higher education, the tourism and hospitality field has been slow to evolve.

Why it matters

Representation in higher education isn’t just a matter of fairness. It affects student outcomes and the long-term sustainability of the field. Researchers have found that when students see role models who share their racial or ethnic identity, they report stronger connections to their academic community, higher retention rates and greater academic confidence.

For hospitality programs, which emphasize service, empathy and cultural understanding, these effects are especially meaningful. The hospitality workforce is one of the most diverse in the United States, spanning global hotels, restaurants, events and tourism operations. Yet the lack of variety among those teaching hospitality sends a conflicting message. Diversity is valued in the workforce, but it remains underrepresented in the classrooms training future leaders.

Major employers such as Marriott, Hyatt and IHG have invested heavily in programs that promote access and belonging, creating leadership pipelines for underrepresented groups. Meanwhile, academic programs that prepare these future leaders have not made comparable progress.

The lack of representation in hospitality and tourism academia also shapes the kinds of research questions that get asked. When faculty from underrepresented backgrounds are missing, issues such as racialized guest experiences, workplace bias and equitable career advancement may be overlooked.

What still isn’t known

Our study provides a snapshot, rather than a complete picture, of faculty representation in U.S. hospitality and tourism programs. Because the sample focused on research-intensive universities, it excluded many historically Black colleges and universities, as well as teaching-focused institutions, which may have more professors of color.

The research also relied on publicly available photographs and institutional profiles to identify race and gender. While this method mirrors how students visually perceive representation, it cannot fully capture multiethnic or intersectional identities.

We believe that future studies should track how faculty composition evolves over time and explore the lived experiences of educators from underrepresented backgrounds. Understanding the barriers that prevent these scholars from entering or staying in academia is essential for creating environments where all faculty can thrive.

The Research Brief is a short take on interesting academic work. Abigail Foster, admissions specialist at the University of the District of Columbia’s David A. Clarke School of Law, contributed to this article.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. US hospitality and tourism professors don’t reflect the diversity of the industry they serve – https://theconversation.com/us-hospitality-and-tourism-professors-dont-reflect-the-diversity-of-the-industry-they-serve-273345

Ending tax refunds by check will speed payments, but risks sidelining people who don’t have bank accounts

Source: The Conversation – USA (2) – By Beverly Moran, Professor Emerita of Law, Vanderbilt University

More than 6 million Americans receive paper tax refund checks annually. Often, those refunds go to purchase groceries or pay the bills. But this year, those taxpayers may be surprised to learn that the paper check they’re waiting for no longer exists.

That’s because of executive order 14247, which President Donald Trump signed in 2025. It directed the Treasury Department to stop issuing paper checks for tax refunds.

The executive order has its fans. Nacha, the organization that runs the network that electronically moves money between financial institutions, says the new rules could save the government US$68 million each year. The American Bankers Association is also excited, predicting the move will help people save on check-cashing fees. Other supporters argue the change will prevent mail theft and check fraud.

But what about the 6 million Americans without bank accounts – the so-called “unbanked”? Watchdogs warn that they will suffer if exceptions and outreach fall short.

As a professor who specializes in tax law, I think those concerns are valid.

Reform could leave the unbanked behind

Shifting to electronic payments is a classic modernization effort. So how could that be bad?

The problem is that a sizable number of Americans have no bank account. In 2023, 23% of people earning under $25,000 were unbanked, compared with just 1% of those earning over $100,000.

Black and Hispanic Americans, young adults, and people with disabilities are more likely to be unbanked than other people, and 1 in 5 unbanked households include someone with a disability.

Low-income families often use their refunds to pay for basics such as food and rent. And under the status quo, unbanked people already lose a large slice of those refunds to fees. Check cashers, for example, can charge up to 1.5% for government checks in New York, up to 3% in California, and even more in other states.

But the unbanked might find that they’re paying even higher fees in a post-check world. They might, for example, turn to paid tax preparation services to access refund loans. Federal courts and investigative journalists have documented ways that paid tax preparers engage in false advertising and sell overpriced services.

Or they might forgo their tax refunds entirely.

Geography, race and the digital-banking divide

Where people live affects their access to banking.

Gaps in broadband coverage and lack of public transportation to reach libraries make computer access a problem for poor and rural people.

In so-called “banking deserts” – communities with few or no bank branches – people are more likely to use costly alternatives such as payday lenders and check-cashing services. Black-majority communities face distinct banking desert challenges, for both poor and middle-class Black families. That’s because a middle-income Black family is more likely to live in a low-income neighborhood than a low-income white family.

Taken together, these barriers mean that many Americans who are legally entitled to tax refunds could soon struggle to receive them.

What should government do now?

The government is aware of the problem. The IRS promises that “limited exceptions” will be available to people who don’t have bank accounts, and that more guidance is on the way.

In the meantime, on the day after Thanksgiving, the agency urged people without bank accounts to open them, or to check whether their digital wallets can accept direct deposits. The Bureau of the Fiscal Service, for its part, has provided a website with information for people who need to get up to speed on electronic payments.

For the moment, it’s unclear just how effective these efforts will be. Perhaps this is why the American Bar Association is urging Treasury to keep issuing paper refund checks unless Congress passes a law rather than relying on an executive order.

Consumer groups have urged the Treasury Department to fund robust exceptions, plain-language help lines and no-fee default payment options while also banning junk fees on refund-related cards and mandating easy access to cash-out at banks or retailers.

The problem is that the Treasury Department has lost over 30,000 employees and $20.2 billion in funding since January 2025. Add in the lingering effects of the last government shutdown, and adopting a new system for tax filing and refunds might be too much to expect for the 2026 tax season.

The Conversation

Beverly Moran does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Ending tax refunds by check will speed payments, but risks sidelining people who don’t have bank accounts – https://theconversation.com/ending-tax-refunds-by-check-will-speed-payments-but-risks-sidelining-people-who-dont-have-bank-accounts-266562

Political polarization in Pittsburgh communities is rooted in economic neglect − not extremism

Source: The Conversation – USA (2) – By Ilia Murtazashvili, Professor of Public Policy, University of Pittsburgh

Pittsburgh is a city where your politics often depend on how your community and neighborhood are doing. Rebecca Droke/AFP Collection via Getty Images

When it comes to political polarization in the United States, the Pittsburgh region offers a useful window into what communities can do about it.

Pittsburgh is a “comeback city.” The once-prosperous steel industry may have declined, but universities, hospitals and technology are driving reinvention and a new emphasis on manufacturing.

It’s also a city where people’s economic situation and political orientation often depend on where they live and how their community and neighborhood are doing. Different neighborhoods experience different levels of safety, school quality, housing stability and responsiveness from public services. In the region’s hardest-hit communities, this shows up not only in frustration with local institutions, but in shifting voting patterns and growing openness to populist messages of renewal.

Pittsburgh’s political polarization is often less about ideology and more about whether people think local institutions still work for them.
Gene J. Puskar/AP

Our research at the University of Pittsburgh’s Center for Governance and Markets examines Rust Belt revitalization and how economic decline reshapes civic life and political conflict in communities such as Pittsburgh and its surrounding mill towns.

It also shows how local government performance shapes trust and political conflict in distressed communities across the Pittsburgh region.

We’ve found that the region’s polarization is often less about culture war debates and political ideology and more about whether people think local institutions still work for them. It also grows out of economic despair, eroding trust and the feeling that the rules of the game no longer produce a future worth believing in.

This polarization plays out most visibly in practical disputes about safety, housing, schools and basic public services. Residents split between calls for tougher law enforcement and demands for alternative approaches to criminal justice; between building more housing and regulating affordability; between consolidating schools and maintaining neighborhood anchors; and between spending more on basic services – even as construction costs rise – and frustration over government’s ability to deliver.

National politics do matter here, but local conflicts are where politics become tangible and where trust rises or falls based on performance. Those decisions happen locally through city departments, school boards, neighborhood meetings and county agencies.

Not just ‘blue city, red suburbs’

At its core, Pittsburgh’s story is about differences among its neighborhoods and communities. These differences shape how communities perceive fairness and whether they trust that the government is capable of solving problems.

In some neighborhoods, civic institutions are strong and residents feel empowered in public life. In others, decades of disinvestment have weakened the foundations of everyday governance.

A sign reads ‘No Place For Hate’ at a vigil held for the Tree of Life synagogue victims.
SOPA Images/Contributor/Getty Images

Squirrel Hill is one of Pittsburgh’s most civically vibrant neighborhoods. It is affluent and educated, and it has a number of synagogues, bookstores, immigrant service organizations and active civic groups. When political conflict emerged in the aftermath of the Tree of Life synagogue mass shooting, residents had networks to absorb disagreement rather than let it spiral into hostility.

Now shift to the South Side, where gentrification shapes politics differently. The South Side Flats evolved from a blue-collar neighborhood into a place with many renters and younger residents. People are civic-minded, though local debates often revolve around nightlife, public safety, rising costs and development.

Carrick remains politically mixed, reflecting tensions in a working- and middle-class community navigating demographic change and uncertainty about the future. Local concerns include schools, traffic, infrastructure and neighborhood stability, but national polarization shapes how issues are interpreted. Potholes become a service complaint and a symbol of being left behind. Housing projects become flash points for who belongs.

Homewood is a historically Black neighborhood shaped by decades of disinvestment. Deep challenges include poverty, blight and long-standing concerns about safety. Yet it also shows civic resilience through churches, nonprofits, health centers and grassroots leaders who have kept public life intact even when government capacity falls short. Even in heavily Democratic neighborhoods like Homewood, citizens feel a sense of being overlooked.

Different neighborhoods experience “Pittsburgh” through different governing realities. The suburbs and mill towns are part of the story, too.

Braddock has suffered economically after the collapse of the steel economy.
Jeff Swensen/Getty Images

In Braddock, where U.S. Sen. John Fetterman once served as mayor, the collapse of the steel economy severely damaged the tax base and weakened local capacity to provide reliable services. When municipal governments are forced to govern with fewer resources, politics become a battle over shortages of basic services, such as trash collection. Civic participation declines, and frustration is unabating.

In Aliquippa, the closure of major steel employers contributed to long-term economic contraction and political realignment. Communities once firmly Democratic have become more open to conservative populism, including among working-class and minority voters attracted to messages of economic renewal. This shift often involves less a dramatic ideological conversion than a search for a political language that takes economic loss seriously.

Young supporters of U.S. Vice President Kamala Harris celebrate as her motorcade departs from Aliquippa High School during her 2024 presidential campaign.
Anna Moneymaker/Getty Images

And in McKeesport, a former manufacturing hub, economic distress combines with infrastructure decay and opioid addiction. Yet McKeesport also shows that polarization does not erase cooperation. Community organizations build partnerships around practical concerns, such as youth programming, small-business support and downtown development.

The Pittsburgh region is not “blue city, red suburbs.” Deindustrialization did more than eliminate jobs: It reduced mobility, strained families, shrank tax bases, weakened local civic institutions and made daily life feel less stable.

A lesson from Pittsburgh’s new mayor

Corey O’Connor, Pittsburgh’s new mayor, has emphasized economic revitalization, but he has also argued something many officials forget or ignore: Residents judge government first by whether it delivers basic competence.

For many Pittsburghers, a government that cannot clear streets after a storm, fill potholes or maintain a functional snow removal fleet does not feel capable of managing large-scale economic revitalization or building civic trust. Snow removal and pothole filling aren’t trivial issues but a test of whether public authority is reliable and fair.

When basic services cannot be provided in real time, mistrust becomes almost inevitable.

Rebuilding legitimacy from the bottom up

Escaping polarization requires a long-term strategy to rebuild opportunity, restore institutional credibility and strengthen civic infrastructure.

For Pittsburgh and its region, this depends on fostering frameworks for civic participation, expanding job training programs and delivering public services effectively – including through municipalities cooperating to provide them.

Research shows that competence in the everyday work of government is a significant way to rebuild trust in public institutions. Starting with the basics in local government demonstrates that cooperation is possible and institutions can solve problems.

The lesson of Pittsburgh is that economic stability is civic stability. When it collapses, politics become less about disagreement than about respect and recognition. Polarization is a consequence of people not feeling seen, heard or treated fairly by the institutions that govern them. Communities cannot wait for Washington to solve problems that are experienced – and addressed – locally.

The Conversation

Ilia Murtazashvili does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Political polarization in Pittsburgh communities is rooted in economic neglect − not extremism – https://theconversation.com/political-polarization-in-pittsburgh-communities-is-rooted-in-economic-neglect-not-extremism-273175

Malaria researchers are getting closer to outsmarting the world’s deadliest parasite

Source: The Conversation – USA (3) – By Kwesi Akonu Adom Mensah Forson, PhD. Candidate in Biology, University of Virginia

Malaria is transmitted to people by mosquitoes infected with a parasite from the Plasmodium family. Jim Gathany via CDC/Dr. William Collins

Every year, malaria kills more than 600,000 people worldwide. Most of them are children under 5 in sub-Saharan Africa. But the disease isn’t confined to poor, rural areas – it’s a global threat that travels with people across borders.

For decades, the fight against malaria has felt like running in place. Bed nets and drugs save lives, but the family of parasites that cause malaria, called Plasmodium, keeps evolving new ways to survive. These parasites are transmitted to humans through the bites of infected mosquitoes.

But something is shifting. As a malaria researcher working on my Ph.D., I study how the malaria parasite develops resistance to drugs. I know what malaria feels like. I’ve had it, and I’ve lost a family member to it. That experience drove me into this field.

When I started this work in 2023, few good options existed for protecting the youngest children – the group most likely to die from malaria. Now, for the first time in my career, I’m watching real breakthroughs happen simultaneously: new vaccines, powerful antibodies and genetic surveillance tools that can predict resistance before it spreads.

2 new vaccines for children

By late 2023, the World Health Organization had approved two malaria vaccines for children: one called RTS,S/AS01, also known as Mosquirix, and another referred to as R21/Matrix-M. Given in four doses starting around 5 months of age, they’re the first vaccines ever shown to prevent severe malaria.

These vaccines don’t provide perfect protection. They reduce the incidence of clinical malaria cases in vaccinated children by about 75% in the first year after receiving the primary dose, and the protection they offer fades over time. But when combined with bed nets and preventive drugs, they’re already preventing thousands of deaths. As of late 2025, about 20 countries, primarily in Africa, where the malaria burden is highest, have introduced these vaccines into childhood immunization programs.

In the past two years, two malaria vaccines have become available for babies starting at 5 months of age.
ER Productions Limited/DigitalVision via Getty Images

This matters enormously because children under 5 years old do not have fully developed immune systems and haven’t built up any natural resistance to malaria. A single infection can turn deadly within hours.

The vaccines are effective because they contain a molecule that mimics a key protein on the parasite’s surface, called circumsporozoite protein. This molecule trains the immune system to recognize the parasite upon infection after a mosquito bite, before the parasite can hide inside human cells.

Discovering a parasite’s hidden weak spot

In January 2025, researchers found something surprising about how the malaria parasite invades cells.

To invade liver cells, the parasite must shed a dense surface protein that acts as a protective shield. This briefly exposes specific hidden spots of proteins, called epitopes, that were previously invisible. That momentary unmasking could give the immune system a chance to recognize the parasite and stop the invasion.

Because this vulnerability is exposed only for a split second, most immune responses miss it. However, scientists discovered an antibody called MAD21-101 that is precise enough to catch it.

An antibody is essentially a microscopic security tag produced by the immune system that can stick to invaders. While standard antibodies fail to latch because of the parasite’s protein shield, MAD21-101 waits for the unmasking moment and locks directly onto the exposed spot.

In lab tests, this action blocked the parasite from entering liver cells, stopping the infection completely. Scientists envision turning this antibody into a treatment that prevents infections in high-risk infants, potentially to be used alongside existing vaccines to strengthen protection against malaria.

By exploiting vulnerabilities in the malaria parasite’s defense system, researchers hope to develop a treatment that blocks the parasite from entering cells.
wilpunt/E+ via Getty Images

Protecting and treating the youngest patients

Because of their undeveloped immune systems, infants have historically faced a double gap: limited ways to prevent malaria, and almost no safe treatments formulated for their tiny bodies when they inevitably got sick.

In 2022, the WHO began recommending a malaria prevention strategy called perennial malaria chemoprevention for babies starting at 2 months. Infants receive a full dose of a standard antimalarial medication, such as sulfadoxine-pyrimethamine, during their routine vaccination checkups. The treatment clears out parasites and provides temporary prevention, regardless of whether the child has a fever or other symptoms.

A new treatment has recently become available. Coartem Baby, approved by Swiss regulators in 2025, is the first malaria treatment designed specifically for infants weighing as little as 4.4 pounds. Unlike older drugs, this formula safely accounts for a baby’s immature metabolism. It contains one ingredient, artemether, which acts fast to reduce the parasite count immediately, and a second ingredient, lumefantrine, which stays in the blood longer to mop up any survivors.

Tracking parasite evolution around the globe

The malaria parasite has an uncanny ability to rewrite its genetic code under pressure, allowing it to adapt and withstand the very medicines designed to destroy it. This adaptability is now threatening the drug artemisinin, the backbone of global malaria treatment, which is starting to fail in parts of Africa and Southeast Asia. But researchers like me are getting a clearer picture of how resistance develops and how it might be interrupted.

One of the parasite’s tricks is to make extra copies of the genes that help it survive antimalarial drug treatment. In my research, I use a high-precision technique that counts the number of copies of these genes to estimate a sort of resistance score: A parasite with more copies is far better equipped to survive treatment than a parasite with only one.
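The article doesn’t spell out the assay, but copy-number scores of this kind are often derived from quantitative PCR (qPCR) cycle-threshold values. Below is a minimal Python sketch of that general approach, using the standard 2^-ΔΔCt calculation; the function name, the sample values and the choice of method are illustrative assumptions, not the author’s exact technique.

    # Sketch: estimate a gene's copy number from qPCR cycle-threshold (Ct)
    # values via the widely used 2^-ddCt method. Illustrative only --
    # names and numbers here are hypothetical.

    def copy_number_estimate(ct_target: float, ct_reference: float,
                             ct_target_control: float,
                             ct_reference_control: float) -> float:
        """Copy number of a target gene relative to a single-copy reference
        gene, normalized against a known single-copy control parasite."""
        delta_sample = ct_target - ct_reference                    # sample: target vs. reference gene
        delta_control = ct_target_control - ct_reference_control   # control parasite: same comparison
        ddct = delta_sample - delta_control
        return 2 ** -ddct                                          # each PCR cycle is roughly a doubling

    # Hypothetical values: the target gene crosses the detection threshold
    # about two cycles earlier than in the single-copy control strain.
    score = copy_number_estimate(ct_target=22.1, ct_reference=24.0,
                                 ct_target_control=24.1,
                                 ct_reference_control=24.0)
    print(f"Estimated copy number: {score:.1f}")  # 4.0

In this made-up example, a score near 4 would flag a parasite carrying several copies of the survival gene, and therefore one far better equipped to withstand drug treatment than a single-copy parasite.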

Scientists around the world are using molecular scanning tools to hunt for specific mutations – single-letter changes in the parasite’s DNA – that make the parasite more resistant to the drug. For example, researchers in my lab are working to pin down the parasite’s genetic code as it’s in the act of changing, in order to catch dangerous mutations while they’re still rare. That would give researchers time to deploy alternative treatments before children start dying from drug-resistant infections.

These tracking tools allow epidemiologists to create early warning systems that can identify where drug resistance is emerging and predict where it might spread next, as the pathogen hitchhikes across continents in travelers’ bloodstreams. Based on those warnings, health officials can switch treatment strategies before a drug fails completely. What’s more, knowing exactly which genes the parasite modifies may enable researchers to block those changes to prevent resistance from emerging.

Malaria research is entering a new era where, although the parasite adapts, scientists like me can now adapt faster. A malaria-free childhood isn’t guaranteed yet, but for the first time in my career, it feels like a realistic goal rather than a distant dream.

The Conversation

Kwesi Akonu Adom Mensah Forson receives no funding, compensation or financial support from any companies or organizations related to malaria vaccines, drugs or diagnostic technologies. His research on malaria parasite genetics is conducted as part of his Ph.D. project at the University of Virginia, supported by university and academic research funds.

ref. Malaria researchers are getting closer to outsmarting the world’s deadliest parasite – https://theconversation.com/malaria-researchers-are-getting-closer-to-outsmarting-the-worlds-deadliest-parasite-268316

How Trump’s Greenland threats amount to an implicit rejection of the legal principles of Nuremberg

Source: The Conversation – USA (3) – By Michael Blake, Professor of Philosophy, Public Policy and Governance, University of Washington

Daily life on a street at sunset in Nuuk, Greenland, on Jan. 21, 2026. AP Photo/Evgeniy Maloletka

U.S. President Donald Trump has, for the moment, indicated a willingness to abandon his threat to take over Greenland through military force – saying that he prefers negotiation to invasion. He is, however, continuing to assert that the United States ought to acquire ownership of the self-governing territory.

Trump has repeatedly raised the possibility of using military action, against both Greenland and Canada.

These threats were often taken as fanciful. But the fact that he has successfully used military force to remove Venezuelan President Nicolás Maduro from power has lent them some plausibility.

Crucially, these military possibilities have been justified almost exclusively with reference to what Trump’s administration sees as America’s national interests. Anything short of ownership in the case of Greenland, the president has emphasized, would fail to adequately protect American interests.

As a political philosopher concerned with the moral analysis of international relations, I am deeply troubled by this vision of warfare – and by the moral justifications used to legitimize the making of war.

This view of warfare is radically different from the one championed by the U.S. for much of the 20th century. Most notably, it repudiates the legal principle that informed the Nuremberg trials: that military force cannot be justified on the basis of national self-interest alone.

Those trials, set up after World War II to prosecute the leaders of the Nazi regime, were foundational for modern international law; Trump, however, seems to disregard or reject the legal ideas the Nuremberg tribunal sought to establish.

Aggressive war as international crime

The use of warfare as a means by which states might seek political and economic advantage was declared illegal by 1928’s Kellogg-Briand Pact – an international instrument by which many nations, including both Germany and the U.S., agreed to abandon warfare as a tool of national self-interest.

After 1928, invading another country in the name of advancing national interests was formally defined as a crime, rather than a legitimate policy option.

The existence of this pact did not prevent the German military actions that led to World War II. The prosecution for the International Military Tribunal at Nuremberg, accordingly, took two aims as central: reaffirming that aggressive warfare was illegal, and imposing punishment on those who had chosen to use military force against neighboring states.

The first charge laid against the Nazi leadership at Nuremberg was therefore the initiation of a “war of aggression” – a war chosen by a state for its own national interests.

The chief prosecutor in Nuremberg was Robert H. Jackson, who at the time also served as a justice on the U.S. Supreme Court. Jackson began his description of the crime by saying that Germany, in concert with other nations, had bound itself in 1928 to “seek the settlement of disputes only by pacific means.”

More particularly, Jackson noted, Germany had justified its invasion of neighboring countries with reference to “Lebensraum” – living room, or, more generally, space for German citizens – which marked those invasions out as illegal.

Nuremberg trial, Dec. 4, 1945.
Sepia Times/ Universal Images Group via Getty Images

Germany used its own national interests as sufficient reason to initiate deadly force against other nations. In so doing, said Jackson, it engaged in a crime for which individual criminal punishment was an appropriate response.

In the course of this crime, Jackson noted, Germany had shown a willingness to ignore both international law and its own previous commitments – and had given itself “a reputation for duplicity that will handicap it for years.”

Jackson asserted, further, that the extraordinary violence of the 20th century required the building of some legal tools, by which the plague of warfare and violence might be constrained.

If such principles were not codified in law, and respected by nations, then the world might well see, in Jackson’s phrase, the “doom of civilization.” Nuremberg’s task, for Jackson, was nothing less than ensuring that aggressive war was forever to be understood as a criminal act – a proposition backed, crucially, by the U.S. as party to the Nuremberg trials.

The morality of warfare

It is fair to say that the U.S., like other nations, has had a mixed record of living up to the legal principles articulated at Nuremberg, given its record of military intervention in places like Vietnam and Iraq.

President Donald Trump at the World Economic Forum in Davos, Switzerland, on Jan. 21, 2026.
AP Photo/Evan Vucci

Trump’s prior statements about Greenland, however, hint at something more extreme: They represent an abandonment of the principle that aggressive war is a criminal act, in favor of the idea that the U.S. can use its military as it wishes, to advance its own national interests.

Previous presidents have perhaps been guilty of paying too little attention to the moral importance of such international principles. Trump, in contrast, has announced that such principles do not bind him in the least.

In a recent interview with The New York Times, Trump asserted that he did not “need international law” to know what to do. He would, instead, be limited only by “his own morality” and “his own mind.”

European leaders, for their part, have increasingly decried Trump’s willingness to go back on his word, or abandon previously insisted-upon principles, if such revisions seem to provide him with some particular advantage.

Trump’s statements, however, imply that his administration has adopted a position strikingly similar to that decried by Justice Jackson: The U.S., on this vision, can simply decide that its own moral interests are more important than those of other countries, and can initiate violence against those countries on its own discretion. It can do this, moreover, regardless of either the content of international law or of previously undertaken political commitments.

This vision, finally, is being undertaken in a world in which the available tools of destruction are even more complex – and more deadly – than those available during the Second World War.

It is, indeed, a historic irony that the U.S. of today has so roundly repudiated the moral values it both helped develop and championed globally during the 20th century.

The Conversation

Michael Blake receives funding from the National Endowment for the Humanities.

ref. How Trump’s Greenland threats amount to an implicit rejection of the legal principles of Nuremberg – https://theconversation.com/how-trumps-greenland-threats-amount-to-an-implicit-rejection-of-the-legal-principles-of-nuremberg-274018

What we get wrong about forgiveness – a counseling professor unpacks the difference between letting go and making up

Source: The Conversation – USA (3) – By Richard Balkin, Distinguished Professor of Counselor Education, University of Mississippi

Take stock of your feelings, and the other person’s, before you decide what kind of forgiveness to offer. Jacob Wackerhausen/iStock via Getty Images Plus

Two in five Americans have fought with a family member about politics, according to a 2024 study by the American Psychiatric Association. One in five have become estranged over controversial issues, and the same percentage has “blocked a family member on social media or skipped a family event” due to disagreements.

Difficulty working through conflict with those close to us can cause irreparable harm to families and relationships. What’s more, an inability to heal these relationships can be detrimental to physical and emotional well-being, and even longevity.

Healing relationships often involves forgiveness – and sometimes we have the ability to truly reconcile. But as a professor and licensed professional counselor who researches forgiveness, I believe the process is often misunderstood.

In my 2021 book, “Practicing Forgiveness: A Path Toward Healing,” I talk about how we often feel pressure to forgive and that forgiveness can feel like a moral mandate. Consider 18th-century poet Alexander Pope’s famous phrase: “To err is human; to forgive, divine” – as though doing so makes us better people. The reality is that reconciling a relationship is not just difficult, but sometimes inadvisable or dangerous, especially in cases involving harm or trauma.

I often remind people that forgiveness does not have to mean reconciliation. At its core, forgiveness is internal: a way of laying down ill will and our emotional burden so we can heal. It should be seen as a process separate from reconciliation, which is the decision about whether to renegotiate a relationship.

But either form of forgiveness is difficult – and here are some insights as to why:

Forgiveness, karma and revenge

In 2025, I conducted a study with my colleagues Alex Hodges and Jason Vannest to explore emotions people may experience around forgiveness, and how those emotions differ from when they experience karma or revenge.

We defined forgiveness as relinquishing feelings of ill will toward someone who engaged in a harmful action or behavior toward you. “Karma” refers to a situation where someone who wronged you got what they deserved without any action from you. “Revenge,” on the other hand, happens when you retaliate.

First, we prompted participants to share memories of three events related to offering forgiveness, witnessing karma and taking revenge. After sharing each event, they completed a questionnaire indicating what emotions they experienced as they retold their story.

Revenge can feel easier than forgiveness, which often brings sadness or anxiety.
nattul/iStock via Getty Images Plus

We found that most people say they aspire to forgive the person who hurt them. To be specific, participants were about 1.5 times more likely to desire forgiveness than karma or revenge.

Most admitted, though, that karma made them happier than offering forgiveness.

Working toward forgiveness tended to make people sad and anxious. In fact, participants were about 1.5 times more likely to experience sadness during forgiveness than during karma or revenge. Pursuing forgiveness was more stressful, and harder work, because it forced people to confront feelings often perceived as negative, such as stress, anger or sadness.

Two different processes

Forgiveness is also confusing, thanks to the way it is typically conflated with reconciliation.

Forgiveness researchers tie reconciliation to “interpersonal forgiveness,” in which the relationship is renegotiated or even healed. However, at times, reconciliation should not occur – perhaps due to a toxic or unsafe relationship. Other times, it simply cannot occur, such as when the offender has died, or is a stranger.

But not all forgiveness depends on whether a broken relationship has been repaired. Even when reconciliation is impossible, we can still relinquish feelings of ill will toward an offender, engaging in “intrapersonal forgiveness.”

Not all forgiveness has to involve renegotiating a relationship with the person who hurt you.

I used to practice counseling in a hospital’s adolescent unit, in which all the teens I worked with were considered a danger to themselves or others. Many of them had suffered abuse. When I pictured what “success” could look like for them, I hoped that, in adulthood, my clients would not be focused on their past trauma – that they could experience safety, health, belonging and peace.

Most often, such an outcome was not dependent upon reconciling with the offender. In fact, reconciliation was often ill-advised, especially if offenders had not expressed remorse or commitment to any type of meaningful change. Even if they had, there are times when the victim chooses not to renegotiate the relationship, especially when working through trauma.

Still, working toward intrapersonal forgiveness could help some of these young people begin each day without the burden of trauma, anger and fear. In effect, the client could say, “What I wanted from this person I did not get, and I no longer expect it.” Removing expectations from people by identifying that we are not likely to get what we want can ease the burden of past transgressions. Eventually, you decide whether to continue to expend the emotional energy it takes to stay angry with someone.

Relinquishing feelings of ill will toward someone who has caused you harm can be difficult. It may require patience, time and hard work. When we recognize that we are not going to get what we wanted from someone – trust, safety, love – it can feel a lot like grief. Someone may pass through the same stages, including denial, anger, bargaining and depression, before they can accept and forgive within themselves, without the burden of reconciliation.

Taking stock

With this in mind, I offer four steps to evaluate where you are on your forgiveness journey. A simple tool I developed, the Forgiveness Reconciliation Inventory, looks at each of these steps in more depth.

  1. Talk to someone. You can talk to a friend, mentor, counselor, grandma – someone you trust. Talking makes the unmentionable mentionable. It can reduce pain and help you gain perspective on the person or event that left you hurt.

  2. Examine if reconciliation is beneficial. Sometimes there are benefits to reconciliation. Broken relationships can be healed, and even strengthened. This is especially likely when the offender expresses remorse and changes behavior – something the victim has no control over.

  3. In some cases, however, there are no benefits, or the benefits are outweighed by the offender’s lack of remorse and change. In this case, you might have to come to terms with processing an emotional – or even tangible – debt that will not be repaid.

  4. Consider your feelings toward the offender, the benefits and consequences of reconciliation, and whether they’ve shown any remorse and change. If you want to forgive them, determine whether it will be interpersonal – talking to them and trying to renegotiate the relationship – or intrapersonal, in which you reconcile your feelings and expectations within yourself.

Either way, forgiveness comes when we relinquish feelings of ill will toward another.

The Conversation

Richard Balkin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What we get wrong about forgiveness – a counseling professor unpacks the difference between letting go and making up – https://theconversation.com/what-we-get-wrong-about-forgiveness-a-counseling-professor-unpacks-the-difference-between-letting-go-and-making-up-273317

A brief history of sugar

Source: The Conversation – UK – By Seamus Higgins, Associate Professor Food Process Engineering, Chemical & Environmental Engineering, University of Nottingham

Still Life by Edward Hartley Mooney (1918). Manchester Art Gallery, CC BY

A few thousand years ago, sugar was unknown in the western world. Sugarcane, a tall grass first domesticated in New Guinea around 6000BC, was initially chewed for its sweet juice rather than crystallised. By around 500BC, methods for boiling sugarcane juice into crystals had been developed in India.

One of the earliest references to sugar we have dates to 510BC, when Emperor Darius I of what was then Persia invaded India. There he found “the reed which gives honey without bees”.

Knowledge of sugar-making spread west to Persia, then across the Islamic world after the 7th century AD. Sugar reached medieval Europe only via trade routes. It was extremely expensive and used more like a spice. Indeed, in the 11th century Crusaders returning home talked of how pleasant this “new spice” was.

It was the supply potential of this “new spice” in the early 16th century that encouraged Portuguese entrepreneurs to export enslaved people to newly discovered Brazil. There, they rapidly started growing highly profitable sugar cane crops. By the 1680s, the Dutch, English and French all had their own sugar plantations in the Caribbean, worked by enslaved people.

In the 18th century, the increasing popularity of tea and coffee led to the widespread adoption of sugar as a sweetener. In 1874, Prime Minister William Gladstone abolished a 34% tax on sugar to ease the costs of basic food for workers. Cheap jam (one-third fruit pulp to two-thirds sugar) began to appear on the table of every working-class household. The growing demand for sugar in Britain and Europe encouraged further growth and profit, earning it the name “white gold”.

Getting in the Sugar Cane, River Nile by Frederick Trevelyan Goodall (1875).
Grundy Art Gallery, CC BY

Britain’s per capita sugar consumption skyrocketed from four pounds in 1704 to 90 pounds by 1901. While slavery was eventually abolished, the supply of cheap labour was sustained by new flows of indentured workers from India, Africa and China.

Britain’s naval blockade of Napoleonic France at the start of the 19th century prodded the French to seek an alternative to Caribbean sugar supplies. It gave birth to the European sugar beet industry.

Sugar beet is a biennial root crop grown for its high sucrose content, which is extracted to produce table sugar. Over the 20th century, this traditionally heavily subsidised and tariff-protected industry grew to produce approximately 50% of Europe’s sugar. That includes the UK’s consumption, now around 2 million tons annually, split between beet (60%) and cane sugar (40%).

Delights and dangers

In 1886, Atlanta’s prohibition laws forced the businessman and chemist John Pemberton to reformulate his popular drink, Pemberton’s Tonic French Wine Coca. He replaced the alcohol with a 15% sugar syrup and added citric acid. His bookkeeper, Frank Robinson, chose a new name for the drink after its main ingredients – coca leaves and kola nuts – and created the Coca-Cola trademark in the flowing script we know today.

In 1879, Swiss chocolatier Daniel Peter invented the world’s first commercial milk chocolate using sweetened condensed milk developed by his neighbour, Henri Nestlé. Milk chocolate, which contains about 50-52 grams of sugar per 100 grams, has now become a global favourite for its sweet taste and creamy texture.

Chocolate and cola have since solidified their status as global staples in the realm of fizzy drinks and sweet treats and have become essential indulgences for people worldwide.

In 1961, the American epidemiologist Ancel Keys appeared on the cover of Time magazine for his “diet-heart hypothesis”. Through his “seven countries” study, he found an association between saturated fat intake, blood cholesterol and heart disease. Keys remarked: “People should know the facts. Then, if they want to eat themselves to death, let them.”

An advert for Coca-Cola from 1961.

Amid competing scientific advice, John Yudkin, founder of the nutrition department at Queen Elizabeth College, published an article in the Lancet. He argued that international comparisons did not support the claim that total or animal fat is the main cause of coronary thrombosis, highlighting that sugar intake has a stronger correlation with heart disease.

He published his book, Pure, White and Deadly, in 1972. It highlighted the evidence linking sugar consumption to increased coronary thrombosis and its involvement in dental caries, obesity, diabetes and liver disease. He ominously noted: “If only a small fraction of what is already known about the effects of sugar were to be revealed about any other material used as a food additive, that material would promptly be banned.”

The British Sugar Bureau dismissed Yudkin’s claims about sugar as “emotional assertions”, and the World Sugar Research Organisation called his book “science fiction”. In the 1960s and 1970s, the sugar industry promoted sugar as an appetite suppressant and funded research that downplayed the risks of sucrose, while emphasising dietary fat as the primary driver of coronary heart disease.

Scientific debate over the relative health effects of sugar and fat continued for decades. In the meantime, governments began publishing dietary guidelines advising people to eat less saturated fats and high-cholesterol foods. An unavoidable consequence of this was that people began eating more carbohydrates and sugar instead.

Official dietary guidelines did not begin to clearly acknowledge the health risks of excessive sugar consumption until much later, as evidence accumulated toward the end of the 20th century.

In my new book, Food and Us: the Incredible Story of How Food Shapes Humanity, I explore the fact that sugar is a relatively new addition to our diet. In just a short period of 300 years, or 0.0001% of our food evolution, sugar has become ubiquitous in our food supply. It has even given us terms of endearment and affection, such as “sugar”, “honey” and “sweetheart”.

However, the global addiction to sugar poses significant and interconnected challenges for public health, the economy, society and the environment. The pervasive nature of sugar in processed foods, combined with its effects on the brain’s reward system, creates a cycle of dependency that is driving a worldwide crisis of diet-related diseases and straining health systems.


This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

The Conversation

Seamus Higgins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A brief history of sugar – https://theconversation.com/a-brief-history-of-sugar-266189

Iran’s biggest centres of protest are also experiencing extreme pollution and water shortages

Source: The Conversation – UK – By Nima Shokri, Professor, Applied Engineering, United Nations University

Iran’s current wave of protests is often interpreted as having been sparked by inflation, currency collapse, corruption and repression. These explanations are not wrong, but they are incomplete.

Beneath the country’s political and economic crisis lies a more destabilising force that is still largely missing from international analysis: environmental breakdown.

Iran is experiencing not one environmental crisis but the convergence of several: water shortages, land subsidence, air pollution and energy failure. Taken together, they make daily life a struggle for survival.

So when citizens protest today, they are not only resisting authoritarian governance. They are responding to a state that can no longer reliably provide the most basic forms of security: water to drink, air to breathe, land to stand on, and electricity to power daily life.

Between 2003 and 2019, Iran lost an estimated 211 cubic kilometres of groundwater, or twice its annual water consumption, leaving the country facing water bankruptcy. Excessive pumping – driven by agricultural expansion, energy subsidies and weak regulation – has caused land subsidence rates of up to 30cm per year, affecting areas where around 14 million people, more than one-fifth of the population, live.

Provinces such as Kerman, Alborz, Khorasan Razavi, Isfahan and the capital Tehran now have more than a quarter of their populations living at risk of subsidence. Large sections of the country – particularly around Tehran, the agricultural centre Rafsanjan and the city of Mashhad – are subsiding at alarming rates of close to 10cm per year.

Read more: Iran’s record drought and cheap fuel have sparked an air pollution crisis – but the real causes run much deeper

Subsidence has cracked homes, damaged railways, destabilised highways, and threatened airports as well as Unesco-listed heritage sites.

Iran’s lack of water has become politically explosive. When reservoirs fall to extremely low levels, when taps run dry at night in major cities, or when farmers watch rivers and lakes disappear, grievances turn into protest.

As wetlands, lakes and riverbeds dry up, their exposed surfaces generate dust and salt storms that can blanket cities hundreds of kilometres away.

The aftermath of recent protests in Tehran.

At the same time, chronic electricity shortages – caused by underinvestment, inefficiency and poor infrastructure – have forced power plants and industries to burn heavy fuels. The result is extreme concentrations of sulfur dioxide, nitrogen oxides and fine particulate matter.

Ignoring environmental problems

The World Health Organization notes that Iran faces severe air quality problems. Around 11% of deaths and 52% of the burden of disease across the country are attributable to environmental risk factors.

In recent months, major cities have repeatedly closed schools and offices due to hazardous air quality, while hospitals report surges in respiratory and cardiovascular emergencies.

These environmental failures do not exist in isolation. They are the predictable outcome of decades of distorted national priorities.

Since the 1980s, Iran has channelled vast financial, institutional and political resources into ideological expansion and regional disputes – supporting groups in Lebanon, Syria, Iraq and Yemen – while systematically underinvesting in domestic environmental governance, infrastructure renewal and job creation.

Meanwhile, Iran’s political economy has been structured around energy subsidies and megaprojects that reward short-term extraction over long-term sustainability. Cheap fuel has encouraged water-intensive agriculture and inefficient industry.

Environmental agencies have remained fragmented and politically weak, unable to restrain more powerful ministries or government-linked economic actors. International isolation has compounded these failures.

Sanctions deepened the environmental crisis by restricting access to modern monitoring technologies, clean-energy systems, efficient irrigation and external finance.

While much of the world invested in technology and regulation to curb pollution and stabilise water systems, Iran doubled down on emergency fixes that deepened ecological damage rather than containing it. Sanctions and climate stress amplified the problems, but the root cause lay in state priorities that have consistently ignored environmental security.

The political consequences are now unmistakable. Environmental stress reshapes not only why people protest, but where and how. Maps of unrest in 92 Iranian cities reveal a clear pattern. Protests increasingly erupt in areas where there is groundwater collapse, land subsidence and water rationing.

Water shortages and protest

In provinces such as Tehran, Khuzestan in the south-west and Isfahan in central Iran – all areas with high levels of protest – there are acute water shortages, subsidence causing damage to roads and pipelines, and disputes over access to water.

In other cities such as Kermanshah and Ilam, intensifying unrest reflects the interaction of major environmental problems of drought, rainfall decline and groundwater depletion with severe economic problems and poverty.

But Iran is not unique in this regard. Similar conflicts over water and economic pressures have played a destabilising role in neighbouring Syria. There, prolonged drought, disputes over access to water and declining rainfall have damaged crop yields and livestock. Hundreds of thousands of people from agricultural communities have been driven to nearby cities and camps in a desperate attempt to survive.

Water mismanagement and poor access to clean drinking water have also fuelled unrest in Basra, in southern Iraq.

Iran is not facing a cyclical protest problem that can be stabilised through repression, subsidies or tactical concessions. It is confronting a structural collapse of the systems that make governance possible and that lie at the heart of human survival.

When there’s no water and the air becomes unbreathable, the social contract fractures. Citizens no longer debate ideology or reform timelines; they question the state’s right to rule at all.

What Iran faces today is not simply environmental stress but simultaneous, irreversible failures across water, land, air and energy. These are not shocks that fade with rainfall or budget injections. They permanently shrink the state’s capacity to deliver security and economic opportunity.

Coercion can disperse crowds but it cannot reverse subsidence, restore collapsed aquifers or neutralise airborne toxins. A state cannot govern indefinitely where the ecological foundations of life, agriculture and public health are failing all at once.

The Conversation

Nima Shokri is affiliated with Hamburg University of Technology.

ref. Iran’s biggest centres of protest are also experiencing extreme pollution and water shortages – https://theconversation.com/irans-biggest-centres-of-protest-are-also-experiencing-extreme-pollution-and-water-shortages-274217

Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

Source: The Conversation – UK – By Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science, Anglia Ruskin University

For half a century, computing advanced in a reassuring, predictable way. Transistors – devices used to switch electrical signals on a computer chip – became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing.

Faster chips deliver greater computing power by letting devices perform tasks more efficiently. As a result, scientific simulations improved, weather forecasts became more accurate, graphics grew more realistic and, later, machine learning systems emerged and flourished. It looked as if computing power itself obeyed a natural law.

This phenomenon became known as Moore’s Law, after the businessman and scientist Gordon Moore. Moore’s Law summarised the empirical observation that the number of transistors on a chip approximately doubled every couple of years. Because more transistors fit into the same area, devices could also shrink, driving miniaturisation.
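
To make the doubling arithmetic concrete, here is a minimal Python sketch. The 1971 starting point of roughly 2,300 transistors (the Intel 4004) and the strict two-year doubling period are illustrative assumptions, and the transistors helper is hypothetical.

```python
# Illustrative sketch of Moore's law: transistor counts doubling every
# two years. The 1971 starting point (~2,300 transistors, the Intel 4004)
# is an assumption used purely for illustration.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count under a strict two-year doubling rule."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Real manufacturing never tracked the curve this neatly, but the sketch shows how a fixed doubling period compounds into tens of billions of transistors within five decades.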

That sense of certainty and predictability has now gone, and not because innovation has stopped, but because the physical assumptions that once underpinned it no longer hold.

So what replaces the old model of automatic speed increases? The answer is not a single breakthrough, but several overlapping strategies.

One involves new materials and transistor designs. Engineers are refining how transistors are built to reduce wasted energy and unwanted electrical leakage. These changes deliver smaller, more incremental improvements than in the past, but they help keep power use under control.

Another approach is changing how chips are physically organised. Rather than placing all components on a single flat surface, modern chips increasingly stack parts on top of each other or arrange them more closely. This reduces the distance that data has to travel, saving both time and energy.

Perhaps the most important shift is specialisation. Instead of one general-purpose processor trying to do everything, modern systems combine different kinds of processors. Traditional central processing units, or CPUs, handle control and decision-making. Graphics processors (GPUs) are powerful processing units originally designed to handle the demands of computer games and other graphics-heavy tasks. AI accelerators (specialised hardware that speeds up AI tasks) focus on large numbers of simple calculations carried out in parallel. Performance now depends on how well these components work together, rather than on how fast any one of them is.
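
To illustrate the division of labour, here is a minimal Python sketch using NumPy as a stand-in for parallel hardware: the explicit loop mimics a CPU stepping through values one at a time, while the single vectorised call expresses the same arithmetic as one bulk operation of the kind GPUs and AI accelerators are built to run across many values at once. This is an analogy under those assumptions, not a description of any particular chip.

```python
# A loop of one-at-a-time steps (CPU-style control flow) versus one bulk
# operation (the style of work GPUs and AI accelerators parallelise).
# NumPy here is only a stand-in for parallel hardware.
import numpy as np

values = np.random.rand(1_000_000)

# Sequential: control flow wrapped around every single value.
total = 0.0
for v in values:
    total += v * v

# Bulk: the same sum of squares expressed as one data-parallel operation.
total_bulk = float(np.dot(values, values))

assert abs(total - total_bulk) < 1e-6 * total
```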

Alongside these developments, researchers are exploring more experimental technologies, including quantum processors (which harness the power of quantum science) and photonic processors, which use light instead of electricity.

These are not general-purpose computers, and they are unlikely to replace conventional machines. Their potential lies in very specific areas, such as certain optimisation or simulation problems where classical computers can struggle to explore large numbers of possible solutions efficiently. In practice, these technologies are best understood as specialised co-processors, used selectively and in combination with traditional systems.

For most everyday computing tasks, improvements in conventional processors, memory systems and software design will continue to matter far more than these experimental approaches.

For users, life after Moore’s Law does not mean that computers stop improving. It means that improvements arrive in more uneven and task-specific ways. Some applications – such as AI-powered tools, diagnostics, navigation and complex modelling – may see noticeable gains, while general-purpose performance increases more slowly.

New technologies

At the Supercomputing SC25 conference in St Louis, hybrid systems mixing CPUs and GPUs with emerging technologies such as quantum or photonic processors were increasingly presented and discussed as practical extensions of classical computing.

But there is growing interest in using quantum and photonic devices as co-processors, not replacements. Their appeal lies in tackling specific classes of problems, such as complex optimisation or routing tasks, where finding low-energy or near-optimal solutions can be exponentially expensive for classical machines alone.
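
The phrase “exponentially expensive” is easy to see in a toy example. The hypothetical Python sketch below brute-forces the best visiting order for a handful of stops using random placeholder distances; the number of candidate orders grows factorially with the number of stops, which is why classical exhaustive search breaks down on larger routing problems.

```python
# Brute-force routing: the search space grows factorially with the number
# of stops, so exhaustive search quickly becomes infeasible. Distances are
# random placeholders purely for illustration.
import itertools
import math
import random

n_stops = 8  # even ~20 stops would mean 20! ≈ 2.4 x 10^18 candidate routes
dist = [[random.random() for _ in range(n_stops)] for _ in range(n_stops)]

best_order, best_cost = None, math.inf
for order in itertools.permutations(range(n_stops)):
    cost = sum(dist[a][b] for a, b in zip(order, order[1:]))
    if cost < best_cost:
        best_order, best_cost = order, cost

print(f"best order {best_order}, cost {best_cost:.3f} "
      f"({math.factorial(n_stops):,} orders checked)")
```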

In this supporting role, they offer a credible way to combine the reliability of classical computing with new computational techniques that expand what these systems can do.

Life after Moore’s Law is not a story of decline, but one that requires constant transformation and evolution. Computing progress now depends on architectural specialisation, careful energy management, and software that is deeply aware of hardware constraints. The danger lies in confusing complexity with inevitability, or marketing narratives with solved problems.

The post-Moore era forces a more honest relationship with computation, in which performance is no longer something we inherit automatically from smaller transistors but something we must design, justify and pay for – in energy, in complexity and in trade-offs.

The Conversation

Domenico Vicinanza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Moore’s law: the famous rule of computing has reached the end of the road, so what comes next? – https://theconversation.com/moores-law-the-famous-rule-of-computing-has-reached-the-end-of-the-road-so-what-comes-next-273052