Three reasons why the climate crisis must reshape how we think about war

Source: The Conversation – UK – By Duncan Depledge, Senior Lecturer in Geopolitics and Security, Loughborough University


Earth’s average temperature rose more than 1.5°C above pre-industrial levels in 2024 for the first time – a critical threshold in the climate crisis. At the same time, major armed conflicts continue to rage in Ukraine, Gaza, Sudan and elsewhere.

What should be increasingly clear is that war now needs to be understood as unfolding in the shadow of climate breakdown.

The relationship between war and climate change is complex. But here are three reasons why the climate crisis must reshape how we think about war.


Wars and climate change are inextricably linked. Climate change can increase the likelihood of violent conflict by intensifying resource scarcity and displacement, while conflict itself accelerates environmental damage. This article is part of a series, War on climate, which explores the relationship between climate issues and global conflicts.


1. War exacerbates climate change

The inherent destructiveness of war has long degraded the environment. But we have only recently become more keenly aware of its climatic implications.

This follows efforts primarily by researchers and civil society organisations to account for the greenhouse gas emissions resulting from fighting, most notably in Ukraine and Gaza, as well as to record emissions from all military operations and post-war reconstruction.

One study, conducted by Scientists for Global Responsibility and the Conflict and Environment Observatory, estimates that the total carbon footprint of militaries across the globe is greater than that of Russia, which currently has the fourth-largest national footprint in the world.

The US is believed to have the highest military emissions. Estimates by UK-based researchers Benjamin Neimark, Oliver Belcher and Patrick Bigger suggest that, if it were a country, the US military would be the 47th-largest emitter of greenhouse gases in the world. This would put it between Peru and Portugal.

These studies, though, rest on limited data. Sometimes partial emissions data is reported by military agencies, and researchers have to supplement this with their own calculations using official government figures and those of associated industries.

There is also significant variation from country to country. Some military emissions, most notably those of China and Russia, have proved almost impossible to assess.

Wars can also put international cooperation on climate change and the energy transition at risk. Since the start of the Ukraine war, for instance, scientific cooperation between the west and Russia in the Arctic has broken down. This has prevented crucial climate data from being compiled.

Critics of militarism argue that the acknowledgement of war’s contribution to the climate crisis ought to be the moment of reckoning for those who are too willing to spend vast resources on maintaining and expanding military power. Some even believe that demilitarisation is the only way out of climate catastrophe.

Others are less radical. But the crucial point is that recognition of the climate costs of war increasingly raises moral and practical questions about the need for more strategic restraint and whether the business of war can ever be rendered less environmentally destructive.

2. Climate change demands military responses

Before the impact of war on the climate came into focus, researchers debated whether the climate crisis could act as a “threat multiplier”. This has led some to argue that climate change could intensify the risk of violence in parts of the world already under stress from food and water insecurity, internal tensions, poor governance and territorial disputes.

Some conflicts in the Middle East and Sahel have already been labelled “climate wars”, implying they may not have happened if it were not for the stresses of climate change. Other researchers have shown that such claims are deeply contentious. Any decision to engage in violence or go to war is still a choice made by people, not the climate.

Harder to contest is the observation that the climate crisis is leading militaries to be deployed with greater frequency to assist with civilian emergencies. This encompasses a wide range of activities from combating wildfires to reinforcing flood defences, assisting with evacuations, conducting search-and-rescue operations, supporting post-disaster recovery and delivering humanitarian aid.

Chinese soldiers stacking sandbags in a flooded area of Hebei province.
chinahbzyg / Shutterstock

Whether the climate crisis will result in more violence and armed conflict in the future is impossible to predict. If it does, military force may need to be deployed more frequently. At the same time, if militaries are depended upon to help respond to the growing frequency and intensity of climate-related disasters, their resources will be further stretched.

Governments will be confronted with tough choices about what kinds of tasks should be prioritised and whether military budgets should be increased at the expense of other societal needs.

3. Armed forces will need to adapt

With geopolitical tensions rising and the number of conflicts increasing, it seems unlikely that calls for demilitarisation will be met any time soon. This leaves researchers with the uncomfortable prospect of having to rethink how military force can – and ought to be – wielded in a world simultaneously trying to adapt to accelerating climate change and escape its deep dependence on fossil fuels.

The need to prepare military personnel and adapt bases, equipment and other infrastructure to withstand and operate effectively in increasingly extreme and unpredictable climatic conditions is a matter of growing concern. In 2018, two major hurricanes in the US caused more than US$8 billion (£5.95 billion) worth of damage to military infrastructure.

My own research has demonstrated how, in the UK at least, there is growing awareness among some defence officials that militaries need to think carefully about how they will navigate the major changes unfolding in the global energy landscape that are being brought about by the energy transition.

Militaries are being confronted with a stark choice. They can either remain as one of the last heavy users of fossil fuels in an increasingly low-carbon world or be part of an energy transition that will probably have significant implications for how military force is generated, deployed and sustained.

What is becoming clear is that operational effectiveness will increasingly depend on how aware militaries are of the implications of climate change for future operations. It will also hinge on how effectively they have adapted their capabilities to cope with more extreme climatic conditions and how much they have managed to reduce their reliance on fossil fuels.

Soldiers delivering humanitarian aid.
photos_adil / Shutterstock

In the early 19th century, the Prussian general Carl von Clausewitz famously argued that while war’s nature rarely changes, its character is almost constantly evolving with the times.

Recognising the scale and reach of the climate crisis will be essential if we are now to make sense of why and how future wars will be waged, as well as how some might be averted or rendered less destructive.

The Conversation

Duncan Depledge receives funding from the UK Economic and Social Research Council. He is an Associate Fellow of the London-based Royal United Services Institute and a Non-Resident Fellow of the Washington D.C.-based Center for Climate Security (part of the Council on Strategic Risks).

ref. Three reasons why the climate crisis must reshape how we think about war – https://theconversation.com/three-reasons-why-the-climate-crisis-must-reshape-how-we-think-about-war-262469

Indonesia violence: state response to protests echoes darker times in country’s history

Source: The Conversation – UK – By Soe Tjen Marching, Senior Lecturer in the Department of Languages, Cultures and Linguistics, SOAS, University of London

Indonesians have taken to the streets over the past week to protest against elite corruption. The demonstrations began peacefully on August 25 with protests outside parliament in the capital, Jakarta. They soon spread across the country.

The Indonesian People’s Revolution, a group at the centre of the demonstrations, is demanding an investigation into corruption allegations involving the family of former president Joko “Jokowi” Widodo. Jokowi has strongly rejected these accusations, painting them as a smear campaign.

Protesters are also calling for the dissolution of parliament and the impeachment of the current vice-president, Gibran Rakabuming Raka, who is Jokowi’s son.

Gibran’s path to the vice-presidency was controversial. In Indonesia, presidential and vice-presidential candidates must be at least 40 years old, yet he was only 36 during the 2024 election. The constitutional court – led by Gibran’s uncle, Anwar Usman – changed the rules to grant an exception for regional leaders. Usman was dismissed from his post by an ethics council less than a month later.

The group’s demands resonate with wider public anger over the gulf between privilege and poverty in Indonesia. Parliamentarians pocket high salaries, while millions of workers scrape by on some of the lowest minimum wages in the world. News in mid-August that MPs had secured another pay rise only added fuel to the fire.

The protests have now erupted into violence in several areas of the country. The trigger for this came on August 28, when an armoured police vehicle struck and killed a motorcycle taxi driver in Jakarta, before fleeing the scene. Listyo Sigit Prabowo, Indonesia’s national police chief, issued an apology to the victim’s family and has confirmed the case is being investigated.

Indonesia’s current president, Prabowo Subianto, initially denounced demonstrators as “traitors” and “terrorists”, vowing decisive action against them. But he has now backtracked, pledging on August 31 to heed public demands and even cut lawmakers’ allowances.

In the days leading up to this abrupt reversal, echoes of a darker chapter in the nation’s history resurfaced – one marked by state-led violence and intimidation, the mobilisation of Islamist groups, and the scapegoating of minorities.

Indonesia prides itself on bhinneka tunggal ika, unity in diversity. But Prabowo has long relied on conservative Islamist groups to strengthen his power, push through hardline policies and help silence dissent. This includes the Islamic Defenders Front, which the Jokowi government banned in 2020.

Back in 2014, when Jokowi and Prabowo contested presidential elections, Islamist hardliners perpetrated smear campaigns against Jokowi, accusing him of being a communist agent. They also orchestrated the mass mobilisation that toppled Jakarta’s ethnic Chinese Christian governor, Basuki Tjahaja Purnama, in 2017.

The alliance cooled after Prabowo entered Jokowi’s coalition at the end of 2019, but has seemingly been revived amid the current protests. On August 30, the president summoned 16 Islamic organisations to his private residence, reportedly urging them to work with the government to “guard security and peace”.

Meanwhile, racist threats targeting Chinese Indonesian women have flooded online platforms. Popular content creator Elsa Novia Sena is among those who have received rape threats from an account named @endonesaatanpacinak (“Indonesia without Chinese”). I too received rape threats online after criticising the government on X.

For many in Indonesia’s Chinese minority, the atmosphere is chillingly reminiscent of May 1998. That month saw hundreds of women brutally raped – some with sharp tools – in riots characterised by widespread looting and killing. Human rights activists say the 1998 riots were orchestrated or exacerbated by the military to divert public attention from anti-government demonstrations.

Prabowo, an army general at the time, is suspected of involvement in human rights violations during the 1998 riots. He has denied any involvement in acts of violence – but he was discharged from the military over the allegations, and banned from entering the US for two decades.

Departure from the past

On August 31, blackouts hit parts of Jakarta – as they did before the 1998 riots – and looting broke out. Yet, in my opinion, something feels different this time. Protesters deliberately targeted the homes of four MPs accused of sneering at the public after securing a pay rise.

The house of Sri Mulyani, Indonesia’s finance minister, was also attacked. She is seen by many Indonesians as complicit in imposing draconian tax policies on ordinary people while sparing elite lawmakers. Sri has dismissed the accusation, stating that any laws are passed in an “open and transparent manner”.

No Chinese Indonesians have been attacked so far. A new slogan, “people looking after people”, has circulated on social media. Many insist the old trick of scapegoating Indonesia’s Chinese minority no longer works.

In May 1998, public anger against the then-president, Suharto, was driven by an economic crash. Indonesia’s ethnic Chinese population – seen as disproportionately successful in business – became convenient scapegoats. This time, however, many Indonesian people have turned against the army.

The protests are no longer only about economic grievances or corruption – they seem to be a stand against the authoritarian playbook of divide and rule. Many even suspect that some of the looters in the current demonstrations are soldiers in disguise.

In Surabaya, a city on the Indonesian island of Java, suspicions deepened when several police posts were torched. People online pointed out that the arsonist, caught in a viral photo, wore an outdated motorcycle taxi uniform paired with Adidas Terrex shoes worth millions of rupiah (hundreds of pounds). The caption asked: “Why would a taxi driver wear a uniform no longer in circulation and, if he really were one, how could he possibly afford such shoes?”

Prabowo may not have anticipated such a reaction from the Indonesian people, forcing him into a U-turn. But despite his gestures of appeasement, many remain unconvinced, dismissing his offers as merely cosmetic.

That scepticism appeared vindicated almost immediately. Late on September 1, the Islamic University of Bandung and Pasundan University came under attack as security forces fired tear gas and rubber bullets at student protesters.

The mass protests, which have spread to 32 provinces of Indonesia, are unlikely to subside soon. The question is whether the government can still weaponise fear and prejudice to cling to power – or whether ordinary Indonesians will stand firm and united against corruption and state violence in demanding justice.


Soe Tjen Marching does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Indonesia violence: state response to protests echoes darker times in country’s history – https://theconversation.com/indonesia-violence-state-response-to-protests-echoes-darker-times-in-countrys-history-264374

Genetic tests for cancer can give uncertain results: new science is making the picture clearer to guide treatment

Source: The Conversation – Africa (2) – By Claudia Christowitz, Postdoctoral Research Fellow, Stellenbosch University

Cancer treatment is becoming more personalised. By considering a patient’s unique genetic and molecular profile, along with their lifestyle and environmental factors, doctors can make more accurate treatment decisions. This approach, known as personalised or precision medicine, has been increasingly used in South Africa and has expanded to other African countries in recent decades. It requires doctors to rely more on genetic tests to guide decisions. But these tests don’t always give clear answers. Functional genomics may offer a way to improve the interpretation of unclear genetic test results. We spoke to physiological scientist Claudia Christowitz about it.


Is cancer a genetic disease and what is personalised medicine?

Cancer is fundamentally a genetic disease. It arises when changes in a person’s DNA (referred to as variants or mutations) disrupt normal cell functions such as cell growth and division, eventually leading to tumour formation. These changes can be inherited from family members or acquired during a person’s lifetime – for example, through lifestyle and environmental risk factors such as smoking, ultraviolet radiation and infectious agents.

Over the past few decades, we’ve entered the era of personalised medicine. As a result, the role of genetics in cancer treatment has become more prominent. Personalised medicine involves tailoring cancer treatment to each patient’s unique characteristics.

For example, even if two people are diagnosed with the same type and stage of cancer, their treatment outcomes may differ. This is because factors such as their genetic and molecular make-up, overall health status, age, body composition, lifestyle habits, and use of other medication can all influence how well a treatment works for them.

How have advances in genetic testing helped in treating cancer?

Advances in DNA sequencing technologies have made it possible to detect genetic variants more quickly and accurately. The tests can look for just a few genes linked to certain medical conditions, sequence an individual’s entire genome, or target just the protein-coding regions of the genome (the exome).

DNA sequencing has revolutionised cancer care. Doctors can use it to improve prevention in people who are at risk of cancer, detect cancer early, and select the most appropriate treatment.

Africa’s first high-throughput Genomics Centre was launched in 2019 by the South African Medical Research Council. Cancer patients can now undergo whole exome sequencing and whole genome sequencing locally for around R10,000 (about US$566) to R20,000 (about US$1,132). This is sometimes covered by medical insurance. These services are also available at research facilities like the Centre for Proteomic and Genomic Research or the Centre for Epidemiological Research and Innovation at Stellenbosch University.

These facilities strengthen the capacity to sequence, analyse and store human genomes, particularly for the diverse gene pool in Africa. But routine genome sequencing, especially in the public health sector, remains limited due to high costs, limited awareness and the need for trained personnel.

What are the shortcomings of genetic testing?

Genetic testing doesn’t provide all the answers. Unfortunately, not all genetic results are clear-cut. In many cases, patients receive results showing changes in their DNA that cannot be confidently classified as either harmful (pathogenic variants or mutations) or harmless (benign variants). These unclassified variants are known as variants of uncertain significance. The uncertainty often leaves both patients and their oncologists (cancer doctors) unsure of the way forward.

With the advancement of sequencing technologies, rare or novel variants are more frequently detected. But without a clear understanding of whether the variant affects gene function, clinicians are often forced to wait – sometimes for years – until more information emerges.

When patients undergo genetic testing – often as part of a hereditary cancer screening or in response to early-onset or familial cancers – the hope is to find a variant that clearly explains their condition. But sequencing may yield variants of uncertain significance, raising questions about its usefulness in patient care and whether the tests are worth the cost.

What is functional genomics and how can it make genetic test results clearer?

Functional genomics is a growing field that could transform how we interpret these unresolved genetic results and make it possible to improve clinical care for cancer patients.

Functional genomics goes beyond simply reading the DNA code. It investigates how genetic variants behave in biological systems. By examining how a variant alters gene expression, protein function, cell behaviour, or response to treatments, scientists can determine whether it is likely to be benign or pathogenic.

This information is crucial for making timely medical decisions. Importantly, cells derived from patients can be used to mimic real biological conditions more accurately. By using cells carrying such a variant and comparing them to cells without the variant, scientists can determine whether the variant is influencing the response of cells to certain treatments or not.

In short: genetic testing is like reading the “instruction manual” of a cell. Functional genomics is like testing the effects of changes to these instructions.

My study, using patient-derived cells, investigated the effects of a rare TP53 variant that was identified for the first time in germline (inherited) DNA through whole exome sequencing in a South African family with multiple cancers. I found that this variant made cells resistant to the chemotherapy drug doxorubicin. Instead of undergoing cell death as expected, the cells went into a kind of “sleep mode” called senescence, where damaged cells stop dividing.

Although this prevents the growth of damaged cells, senescent cells can release signals that may inflame and harm nearby healthy cells. The variant also reduced how well immune cells can move, which may affect their ability to reach and attack cancer cells. This study, supervised by Prof Anna-Mart Engelbrecht, Prof Maritha Kotze, and Dr Daniel Olivier from Stellenbosch University, highlighted how functional genomics can unravel the impact of a variant of uncertain significance, which may guide medical decisions.

In a world where personalised medicine is rapidly evolving, functional genomics represents a critical step forward, offering more clarity, better care, and renewed hope to those facing cancer.


Claudia Christowitz received funding from the National Research Foundation, South Africa.

ref. Genetic tests for cancer can give uncertain results: new science is making the picture clearer to guide treatment – https://theconversation.com/genetic-tests-for-cancer-can-give-uncertain-results-new-science-is-making-the-picture-clearer-to-guide-treatment-262545

We decoded the oldest genetic data from an Egyptian, a man buried around 4,500 years ago – what it told us

Source: The Conversation – Africa (2) – By Adeline Morez Jacobs, Postdoctoral researcher, University of Padova (Italy); visiting lecturer, Liverpool John Moores University (UK), University of Padua

A group of scientists has sequenced the genome of a man who was buried in Egypt around 4,500 years ago. The study offers rare insight into the genetic ancestry of early Egyptians and reveals links to both ancient north Africa and Mesopotamia, which includes modern day Iraq and parts of Syria, Turkey and Iran.

Egypt’s heat and terrain have made such studies difficult to conduct, but lead researcher Adeline Morez Jacobs and her team made a breakthrough. We spoke to her about the challenges of sequencing ancient remains, the scientific advances that made this discovery possible, and why this genome could reshape how we understand Egypt’s early dynastic history.


What is genome sequencing? How does it work in your world?

Genome sequencing is the process of reading an organism’s entire genetic code. In humans, that’s about 3 billion chemical “letters” (A, C, T and G). The technology was first developed in the late 1970s, and by 2003 scientists had completed the first full human genome. But applying it to ancient remains came much later and has been far more difficult.

DNA breaks down over time. Heat, humidity and chemical reactions damage it, and ancient bones and teeth are filled with DNA from soil microbes rather than from the individual we want to study. In early attempts during the 1980s, scientists hoped mummified remains might still hold usable DNA. But the available sequencing methods weren’t suited to the tiny, fragmented molecules left after centuries or millennia.

To sequence DNA, scientists first need to make lots of copies of it, so there’s enough to read. Originally, this meant putting DNA into bacteria and waiting for the colonies to grow. It took days, demanded careful upkeep and yielded inconsistent results. Two breakthroughs changed this.

In the early 1990s, PCR (polymerase chain reaction) allowed millions of DNA copies to be made in hours, and by the mid-2000s, new sequencing machines could read thousands of fragments in parallel. These advances not only sped up the process but also made it more reliable, enabling even highly degraded DNA to be sequenced.
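The speed PCR brought comes from exponential doubling: each thermal cycle roughly doubles the number of DNA copies. A back-of-the-envelope sketch (idealised – real reactions fall short of perfect doubling, so these figures are an upper bound):

```python
# Idealised PCR arithmetic: each cycle roughly doubles the DNA copies,
# so growth is exponential. Real reactions are less efficient, making
# this an upper bound rather than a lab-accurate figure.
def pcr_copies(initial_copies: int, cycles: int) -> int:
    return initial_copies * 2 ** cycles

# One template molecule after a typical 30-cycle run:
print(pcr_copies(1, 30))  # 1073741824 – over a billion copies
```

This is why a reaction that once took days of bacterial culturing can now yield more than enough material in a few hours.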

Since then, researchers have reconstructed the genomes of extinct human relatives like Neanderthals, and more than 10,000 ancient people who lived over the past 45,000 years. But the work is still challenging – success rates are low for very old remains, and tropical climates destroy DNA quickly.

What’s exceptional about the sequencing you did on these remains?

What made our study unusual is that we were able to sequence a surprisingly well-preserved genome from a region where ancient DNA rarely survives.

When we analysed the sample, we found that about 4%-5% of all DNA fragments came from the person himself (the rest came from bacteria and other organisms that colonised the remains after burial). When working with samples from living organisms, the proportion of DNA of interest (here, human) is usually between 40% and 90%. That 4%-5% might sound tiny, but in this part of the world it’s a relatively high proportion, and enough to recover meaningful genetic information.
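The 4%-5% figure above is what palaeogeneticists call the endogenous fraction – the share of sequenced fragments that map to the human genome rather than to microbes. A minimal sketch of the arithmetic; the read counts below are assumptions chosen to match the article’s figure, not numbers reported by the study:

```python
# Illustrative arithmetic only: the "endogenous" fraction is the share of
# sequenced fragments that come from the individual rather than from
# microbes that colonised the remains. These counts are assumptions.
def endogenous_fraction(human_reads: int, total_reads: int) -> float:
    return human_reads / total_reads

total_reads = 8_000_000_000  # the study generated about eight billion sequences
human_reads = 360_000_000    # hypothetical count corresponding to ~4.5%
print(f"{endogenous_fraction(human_reads, total_reads):.1%}")  # 4.5%
```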

We think the individual’s unusual burial may have helped. He was placed inside a ceramic vessel within a rock-cut tomb, which could have shielded him from heat, moisture and other damaging elements for thousands of years.

To make the most of this rare preservation, we filtered out the very shortest fragments, which are too damaged to be useful. The sequencing machines could then focus on higher-quality pieces. Thanks to advanced facilities at the Francis Crick Institute, we were able to read the DNA over and over, generating about eight billion sequences in total. This gave us enough data to reconstruct the genome of what we call the Nuwayrat individual, making him the oldest genome from Egypt to date.
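The length filter described above can be sketched in a few lines. The fragment lengths and the cut-off here are illustrative assumptions, not the study’s actual parameters:

```python
# Sketch of a read-length filter: drop the shortest fragments, which are
# too degraded to be informative, and keep the rest for sequencing.
# Lengths and the 35-base cut-off are illustrative, not from the study.
read_lengths = [18, 22, 34, 35, 41, 56, 72, 29, 88, 33]  # fragment lengths in bases
MIN_LENGTH = 35

kept = [length for length in read_lengths if length >= MIN_LENGTH]
print(kept)                           # [35, 41, 56, 72, 88]
print(len(kept) / len(read_lengths))  # 0.5 – half the fragments survive
```

In practice this kind of triage lets the sequencing machines spend their capacity on the higher-quality pieces.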

Does this open new frontiers?

We did not develop entirely new techniques for this study but we combined some of the most effective methods currently available into a single optimised pipeline. This is what palaeogeneticists (scientists who study the DNA of ancient organisms) often do: we adapt and refine existing methods to push the limits of what can be recovered from fragile remains.

That’s why this result matters. It shows that, with the right combination of methods, we can sometimes retrieve genomes even from places where DNA usually doesn’t survive well, like Egypt.

Egypt is also a treasure trove for archaeology, with remains that could answer major questions about human history, migration and cultural change.

Our success suggests that other ancient Egyptian remains might still hold genetic secrets, opening the door to discoveries we couldn’t have imagined just a decade ago.

What was your biggest takeaway from the sequencing?

The most exciting result was uncovering this man’s genetic ancestry. By comparing his DNA to ancient genomes from Africa, western Asia and Europe, we found that about 80% of his ancestry was shared with earlier north African populations, suggesting shared roots within the earlier local population. The remaining 20% was more similar to groups from the eastern Fertile Crescent, particularly Neolithic Mesopotamia (present-day Iraq).

This might sound expected, but until now we had no direct genetic data from an Old Kingdom (2686–2125 BCE) Egyptian individual. The results support earlier studies of skeletal features from this period, which suggested close links to predynastic populations, but the genome gives a far more precise and conclusive picture.

This genetic profile fits with archaeological evidence of long-standing connections between Egypt and the eastern Fertile Crescent, dating back at least 10,000 years with the spread of farming, domesticated animals and new crops into Egypt. Both regions also developed some of the world’s first writing systems, hieroglyphs in Egypt and cuneiform in Mesopotamia. Our finding adds genetic evidence to the picture, suggesting that along with goods and ideas, people themselves were moving between these regions.

Of course, one person can’t represent the full diversity of the ancient Egyptian society, which was likely complex and cosmopolitan, but this successful sequencing opens the door for future studies, building a richer and more nuanced picture of the people who lived there over thousands of years.


Adeline Morez Jacobs does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. We decoded the oldest genetic data from an Egyptian, a man buried around 4,500 years ago – what it told us – https://theconversation.com/we-decoded-the-oldest-genetic-data-from-an-egyptian-a-man-buried-around-4-500-years-ago-what-it-told-us-262061

How do bodies decompose? Cape Town forensic scientists are pushing frontiers of new detection methods

Source: The Conversation – Africa (2) – By Victoria Gibbon, Professor in Biological Anthropology, Division of Clinical Anatomy and Biological Anthropology, University of Cape Town

Cape Town has consistently been one of the metropolitan regions in South Africa with the highest murder rates. Its murder rate is more than double the national average, and the city is currently ranked second overall and 16th worldwide. Many victims are discovered only after their bodies have decomposed, burned, or been exposed to the elements. That makes identification difficult and delays justice.

Each year, more than 3,500 unnatural deaths, including murders and accidents, are handled by the city’s Observatory Forensic Pathology Institute. Around 9% remain unidentified. That’s hundreds of families left without answers. We asked Victoria Gibbon and colleagues about their work in forensic taphonomy.

What is the role of forensic taphonomists?

In death, we all decompose in the same general way. But understanding the nuances, especially those introduced by unnatural deaths, requires forensic taphonomy – the science of understanding how bodies break down. Every decomposition process is unique. It is shaped by everything around us: what we’re wearing, how we’re buried and what animals and insects might find us first.

Forensic taphonomists study all these variables and more, specialising in the recovery and analysis of human remains in the context of their environment. They play a vital role in death investigations involving unidentified persons, which requires specialised expertise in the human body and environment. There is a close working relationship with police and pathologists who hold the responsibility for identification and circumstances of death.

Imagine: a body is uncovered amid the sand and scrub of Cape Town’s coastline. By the time it’s found, the remains are in an advanced state of decomposition – identity unclear, the timeline murky. Understanding decomposition helps to determine how long someone has been dead, which can support identification, narrow down missing persons lists, or confirm (or contradict) witness accounts. It’s essential, delicate and, some could say, grim work.




Read more:
Clothed pig carcasses are revealing the secrets of mummification – South African study provides insights for forensic scientists


Forensic taphonomists’ expertise lies in understanding how bodies decompose under different conditions and how that process can reveal time-since-death, potential trauma, and ultimately, identity. Forensic taphonomists answer questions like: Who was this person? How long have they been there? And what happened to them? Their work sits at the intersection of science, justice and innovation. Because in the end, forensic science is about justice, not just science.

One of the main challenges in forensic taphonomy is that many of the global standards were developed in countries with very different climates and ecological systems, so they are not representative of South Africa. Cape Town’s distinctive microclimates, soil types and scavenger populations don’t align neatly with existing models.

To produce locally relevant data, researchers need to observe how decomposition actually happens in these settings. In South Africa, the legislation does not allow forensic taphonomists to study the decomposition of human bodies donated to medical science for research, as happens elsewhere in the world. Instead, they most often study the decomposition of adult domestic pigs, which are internationally accepted models for human decomposition. Pigs share numerous biological similarities with humans that matter for decomposition.

Initial decomposition studies in the Western Cape more than a decade ago began by examining unclothed bodies to establish baseline data. But as it turns out, that’s not what most cases look like. In reality, most deceased persons are clothed, and usually discovered alone. This mismatch prompted a shift.

What have you done differently in your research?

More realistic, single-body, clothed studies were needed. That meant smaller sample sizes, longer timelines and greater data accuracy. But it yields findings that are actually applicable in local forensic work.

We innovated, creating a world-first automated data collection machine to tackle the challenge of consistent, cost-effective and reliable long-term monitoring. It tracks decomposition continuously, remotely and in real time. As bodies lose mass (through water evaporation, insect activity or tissue breakdown), the machine logs the weight changes, providing high-resolution data on the progression of decomposition. This removes the subjectivity of human observation and allows researchers to collect standardised information across multiple cases and environments simultaneously. It is solar-powered and transmits data via cellphone networks, meaning it can be deployed anywhere we need to collect data.
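To illustrate the kind of data stream such a machine produces, here is a minimal Python sketch. It is entirely hypothetical – the function and data names are ours, not the actual system’s – and it simply turns timestamped mass readings into a percentage-mass-loss curve, a standard way of tracking decomposition progress:

```python
from datetime import datetime

def mass_loss_curve(readings):
    """Convert (timestamp, mass_kg) readings into a mass-loss curve.

    `readings` is a list of (ISO-8601 string, float) pairs, as a remote
    logger might transmit them. Returns (hours_elapsed, percent_remaining)
    pairs relative to the first reading.
    """
    parsed = sorted(
        (datetime.fromisoformat(ts), kg) for ts, kg in readings
    )
    t0, m0 = parsed[0]
    return [
        ((t - t0).total_seconds() / 3600.0, 100.0 * m / m0)
        for t, m in parsed
    ]

# Invented example: daily readings over three days of logging
readings = [
    ("2024-03-01T06:00", 62.0),
    ("2024-03-02T06:00", 55.8),
    ("2024-03-03T06:00", 49.6),
]
curve = mass_loss_curve(readings)
```

A real deployment would also have to handle sensor noise and transmission gaps; the point is only that continuous logging yields an objective curve rather than episodic human observations.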

Our system has tracked in detail how tissues dry out beneath the skin. This can help reconstruct the time since death by linking drying patterns to environmental conditions and weather.

In addition to weighing decomposing bodies, our system provides continuous power to two motion-activated infrared trail cameras.




Read more:
How scavengers can help forensic scientists identify human corpses


One camera trap is positioned directly above the body; the other is alongside the body. Together, these cameras record photos and videos of the decomposition process, giving us detailed insight into the activities of the animals that come to eat and otherwise interact with the decomposing body.

This machine offers precision, reliability and adaptability. It transforms how decomposition can be studied.

What’s next?

This technological innovation isn’t just a local solution. The team aims to provide a means by which researchers from different countries can share results that are directly comparable. These will form the basis for a global taphonomic data network: a collaborative platform for researchers to gain insights into decomposition as it plays out across geographies, environments and case types.

The hope is that this network will allow forensic anthropologists to adapt decomposition estimates to local contexts while contributing to an international evidence base.

Collectively, our research innovations may help produce more accurate case outcomes that are admissible in court and capable of providing justice for victims. Assistance with case resolution means restoring the identities of those who might otherwise have been lost to justice and history.

The Conversation

Victoria Gibbon receives funding from National Research Foundation of South Africa. She is affiliated with The University of Cape Town.

Devin Alexander Finaughty receives funding from the Oppenheimer Memorial Trust. He is affiliated with the University of the Witwatersrand and the Wildlife Forensic Academy.

Kara Adams is affiliated with the University of Cape Town.

ref. How do bodies decompose? Cape Town forensic scientists are pushing frontiers of new detection methods – https://theconversation.com/how-do-bodies-decompose-cape-town-forensic-scientists-are-pushing-frontiers-of-new-detection-methods-262832

China’s electric vehicle influence expands nearly everywhere – except the US and Canada

Source: The Conversation – USA (2) – By Jack Barkenbus, Visiting Scholar, Vanderbilt University

BYD electric cars wait at a Chinese port to be loaded onto the automobile carrier BYD Shenzhen, which was slated to sail to Brazil. STR/AFP via Getty Images

In 2025, 1 in 4 new vehicle sales globally is expected to be an electric vehicle – either fully electric or a plug-in hybrid.

That is a significant rise from just five years ago, when EV sales amounted to fewer than 1 in 20 new car sales, according to the International Energy Agency, an intergovernmental organization examining energy use around the world.

In the U.S., however, EV sales have lagged, only reaching 1 in 10 in 2024. By contrast, in China, the world’s largest car market, more than half of all new vehicle sales are electric.

The International Energy Agency has reported that two-thirds of fully electric cars in China are now cheaper to buy than their gasoline equivalents. With operating and maintenance costs already lower than those of gasoline models, EVs are attractive purchases.

Most EVs purchased in China are made there as well, by a range of different companies. NIO, Xpeng, Xiaomi, Zeekr, Geely, Chery, Great Wall Motor, Leapmotor and especially BYD are household names in China. As someone who has followed and published on the topic of EVs for over 15 years, I expect they will soon become as widely known in the rest of the world.

What kinds of EVs is China producing?

China’s automakers are producing a full range of electric vehicles, from the subcompact, like the BYD Seagull, to full-size SUVs, like the Xpeng G9, and luxury cars, like the Zeekr 009.

Recent European crash-test evaluations have given top safety ratings to Chinese EVs, and many of them cost less than similar models made by other companies in other countries.

A Wall Street Journal video explores a Chinese ‘dark factory’ – one so automated that it doesn’t need lights inside.

What’s behind Chinese EV success?

There are several factors behind Chinese companies’ success in producing and selling EVs. To be sure, relatively low labor costs are part of the explanation. So are generous government subsidies, as EVs were one of several advanced technologies selected by the Chinese government to propel the nation’s global technological profile.

But Chinese EV makers are also making other advances. They make significant use of industrial robotics, even to the point of building so-called “dark factories” that can operate with minimal human intervention. For passengers, they have reimagined vehicles’ interiors, with large touchscreens for information and entertainment, and even added a refrigerator, bed or karaoke system.

Competition among Chinese EV makers is fierce, which drives additional innovation. BYD is the largest seller of EVs, both domestically and globally. Yet the company says it employs over 100,000 scientists and engineers seeking continual improvement.

From initial concept models to actual rollout of factory-made cars, BYD takes 18 months – half as long as U.S. and other global automakers take for their product development processes, Reuters reported.

BYD is also the world’s second-largest EV battery seller and has developed a new battery that can recharge in just five minutes, roughly the same time it takes to fill a gas-powered car’s tank.

A gray car sits on a showroom floor under bright lights.
An Xpeng M03, whose base model costs about US$17,000, is displayed at a car show in Shanghai in April 2025.
VCG/VCG via Getty Images

Exports

The real test of how well Chinese vehicles appeal to consumers will come from export sales. Chinese EV manufacturers are eager to sell abroad because their factories can produce far more than the 25 million vehicles they can sell within China each year – perhaps twice as many.

China already exports more cars than any other nation, though primarily gas-powered ones at the moment. Export markets for Chinese EVs are developing in Western Europe, Southeast Asia, Latin America, Australia and elsewhere.

The largest market where Chinese vehicles, whether gasoline or electric, are not being sold is North America. Both the U.S. and Canadian governments have created what some have called a “tariff fortress” protecting their domestic automakers, by imposing tariffs of 100% on the import of Chinese EVs – literally doubling their cost to consumers.

Customers’ budgets matter too. The average price of a new electric vehicle in the U.S. is approximately $55,000. Less expensive models factor into that average, but without tax credits, which the Trump administration is eliminating after September 2025, none comes close to $25,000. By contrast, Chinese companies produce several sub-$25,000 EVs without tax credits, including the Xpeng M03, the BYD Dolphin and the MG4. If sold in America, however, the 100% tariffs would erase that price advantage.

Tesla, Ford and General Motors all claim they are working on inexpensive EVs. More expensive vehicles, however, generate higher profits, and with the protection of the “tariff fortress,” their incentive to develop cheaper EVs is not as high as it might be.

In the 1970s and 1980s, there was considerable U.S. opposition to importing Japanese vehicles. But ultimately, a combination of consumer sentiment and the willingness of Japanese companies to open factories in the U.S. overcame that opposition, and Japanese brands like Toyota, Honda and Nissan are common on North American roads. The same process may play out for Chinese automakers, though it’s not clear how long that might take.

The Conversation

Jack Barkenbus does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. China’s electric vehicle influence expands nearly everywhere – except the US and Canada – https://theconversation.com/chinas-electric-vehicle-influence-expands-nearly-everywhere-except-the-us-and-canada-262459

5 forecasts early climate models got right – the evidence is all around you

Source: The Conversation – USA (2) – By Nadir Jeevanjee, Research Physical Scientist, National Oceanic and Atmospheric Administration

The island nation of Tuvalu is losing land to sea-level rise, and its farms and water supplies are under threat from salt water. Mario Tama/Getty Images

Climate models are complex, just like the world they mirror. They simultaneously simulate the interacting, chaotic flow of Earth’s atmosphere and oceans, and they run on the world’s largest supercomputers.

Critiques of climate science, such as the report written for the Department of Energy by a panel in 2025, often point to this complexity to argue that these models are too uncertain to help us understand present-day warming or tell us anything useful about the future.

But the history of climate science tells a different story.

The earliest climate models made specific forecasts about global warming decades before those forecasts could be proved or disproved. And when the observations came in, the models were right. The forecasts weren’t just predictions of global average warming – they also predicted geographical patterns of warming that we see today.

An older man smiles at the camera with an impish grin.
Syukuro Manabe was awarded the Nobel Prize in physics in 2021.
Johan Nilsson/TT News Agency/AFP

These early predictions starting in the 1960s emanated largely out of a single, somewhat obscure government laboratory outside Princeton, New Jersey: the Geophysical Fluid Dynamics Laboratory. And many of the discoveries bear the fingerprints of one particularly prescient and persistent climate modeler, Syukuro Manabe, who was awarded the 2021 Nobel Prize in physics for his work.

Manabe’s models, based in the physics of the atmosphere and ocean, forecast the world we now see while also drawing a blueprint for today’s climate models and their ability to simulate our large-scale climate. While models have limitations, it is this track record of success that gives us confidence in interpreting the changes we’re seeing now, as well as predicting changes to come.

Forecast No. 1: Global warming from CO2

Manabe’s first assignment in the 1960s at the U.S. Weather Bureau, in a lab that would become the Geophysical Fluid Dynamics Laboratory, was to accurately model the greenhouse effect – to show how greenhouse gases trap radiant heat in Earth’s atmosphere. Since the oceans would freeze over without the greenhouse effect, this was a key first step in building any kind of credible climate model.

To test his calculations, Manabe created a very simple climate model. It represented the global atmosphere as a single column of air and included key components of climate, such as incoming sunlight, convection from thunderstorms, and his greenhouse effect model.

Chart showing temperatures warming at ground level and in the atmosphere as carbon dioxide concentrations rises.
Results from Manabe’s 1967 single-column global warming simulations show that as carbon dioxide (CO2) increases, the surface and lower atmosphere warm, while the stratosphere cools.
Syukuro Manabe and Richard Wetherald, 1967

Despite its simplicity, the model reproduced Earth’s overall climate quite well. Moreover, it showed that doubling carbon dioxide concentrations in the atmosphere would cause the planet to warm by about 5.4 degrees Fahrenheit (3 degrees Celsius).

This estimate of Earth’s climate sensitivity, published in 1967, has remained essentially unchanged in the many decades since and captures the overall magnitude of observed global warming. Right now the world is about halfway to doubling atmospheric carbon dioxide, and the global temperature has warmed by about 2.2 F (1.2 C) – right in the ballpark of what Manabe predicted.
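The arithmetic can be sanity-checked with the standard logarithmic forcing approximation – our back-of-the-envelope gloss, not a calculation from the article – under which warming scales with the logarithm of the concentration ratio:

```latex
\Delta T \;\approx\; S\,\frac{\ln(C/C_0)}{\ln 2}
         \;\approx\; 3\,^{\circ}\mathrm{C} \times \frac{\ln 1.5}{\ln 2}
         \;\approx\; 1.8\,^{\circ}\mathrm{C}
```

Here $S$ is the climate sensitivity for doubled CO2 and $C/C_0 \approx 1.5$ reflects the rise from roughly 280 to just over 420 parts per million. The observed 1.2 C sits below this equilibrium figure, as expected given the ocean’s delayed response to warming.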

Other greenhouse gases such as methane, as well as the ocean’s delayed response to global warming, also affect temperature rise, but the overall conclusion is unchanged: Manabe got Earth’s climate sensitivity about right.

Forecast No. 2: Stratospheric cooling

The surface and lower atmosphere in Manabe’s single-column model warmed as carbon dioxide concentrations rose, but in what was a surprise at the time, the model’s stratosphere actually cooled.

Temperatures in this upper region of the atmosphere, between roughly 7.5 and 31 miles (12 and 50 km) in altitude, are governed by a delicate balance between the absorption of ultraviolet sunlight by ozone and release of radiant heat by carbon dioxide. Increase the carbon dioxide, and the atmosphere traps more radiant heat near the surface but actually releases more radiant heat from the stratosphere, causing it to cool.

Heat map shows cooling in the stratosphere.
The stratosphere, starting 10 to 15 kilometers above the surface and extending up to an altitude of 50 kilometers, has been cooling over the past 20 years at all latitudes while the atmosphere beneath it has warmed.

IPCC 6th Assessment Report

This cooling of the stratosphere has been detected over decades of satellite measurements and is a distinctive fingerprint of carbon dioxide-driven warming, as warming from other causes such as changes in sunlight or El Niño cycles do not yield stratospheric cooling.

Forecast No. 3: Arctic amplification

Manabe used his single-column model as the basis for a prototype quasi-global model, which simulated only a fraction of the globe. It also simulated only the upper 100 meters or so of the ocean and neglected the effects of ocean currents.

In 1975, Manabe published global warming simulations with this quasi-global model and again found stratospheric cooling. But he also made a new discovery – that the Arctic warms significantly more than the rest of the globe, by a factor of two to three times.

Map shows the Arctic warming much faster than the rest of the planet.

Map from IPCC 6th Assessment Report

This “Arctic amplification” turns out to be a robust feature of global warming, occurring in present-day observations and subsequent simulations. A warming Arctic furthermore means a decline in Arctic sea ice, which has become one of the most visible and dramatic indicators of a changing climate.

Forecast No. 4: Land-ocean contrast

In the early 1970s, Manabe was also working to couple his atmospheric model to a first-of-its-kind dynamical model of the full world ocean built by oceanographer Kirk Bryan.

Around 1990, Manabe and Bryan used this coupled atmosphere-ocean model to simulate global warming over realistic continental geography, including the effects of the full ocean circulation. This led to a slew of insights, including the observation that land generally warms more than ocean, by a factor of about 1.5.

As with Arctic amplification, this land-ocean contrast can be seen in observed warming. It can also be explained from basic scientific principles and is roughly analogous to the way a dry surface, such as pavement, warms more than a moist surface, such as soil, on a hot, sunny day.

The contrast has consequences for land-dwellers like ourselves, as every degree of global warming will be amplified over land.

Forecast No. 5: Delayed Southern Ocean warming

Perhaps the biggest surprise from Manabe’s models came from a region most of us rarely think about: the Southern Ocean.

This vast, remote body of water encircles Antarctica and has strong eastward winds whipping across it unimpeded, due to the absence of land masses in the southern midlatitudes. These winds continually draw up deep ocean waters to the surface.

An illustration shows how ocean upwelling works
Winds around Antarctica contribute to upwelling of cold deep water that keeps the Southern Ocean cool while also raising nutrients to the surface waters.
NOAA

Manabe and colleagues found that the Southern Ocean warmed very slowly when atmospheric carbon dioxide concentrations increased because the surface waters were continually being replenished by these upwelling abyssal waters, which hadn’t yet warmed.

This delayed Southern Ocean warming is also visible in the temperature observations.

What does all this add up to?

Looking back on Manabe’s work more than half a century later, it’s clear that even early climate models captured the broad strokes of global warming.

Manabe’s models simulated these patterns decades before they were observed: Arctic amplification was simulated in 1975 but only observed with confidence in 2009, while stratospheric cooling was simulated in 1967 but definitively observed only recently.

Climate models have their limitations, of course. For instance, they cannot predict regional climate change as well as people would like. But the fact that climate science, like any field, has significant unknowns should not blind us to what we do know.

The Conversation

Nadir Jeevanjee works for NOAA’s Geophysical Fluid Dynamics Laboratory, which is discussed in this article. The views expressed herein are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.

ref. 5 forecasts early climate models got right – the evidence is all around you – https://theconversation.com/5-forecasts-early-climate-models-got-right-the-evidence-is-all-around-you-263248

AI’s ballooning energy consumption puts spotlight on data center efficiency

Source: The Conversation – USA (2) – By Divya Mahajan, Assistant Professor of Computer Engineering, Georgia Institute of Technology

These ‘chillers’ on the roof of a data center in Germany, seen from above, work to cool the equipment inside the building. AP Photo/Michael Probst

Artificial intelligence is growing fast, and so are the number of computers that power it. Behind the scenes, this rapid growth is putting a huge strain on the data centers that run AI models. These facilities are using more energy than ever.

AI models are getting larger and more complex. Today’s most advanced systems have billions of parameters, the numerical values derived from training data, and run across thousands of computer chips. To keep up, companies have responded by adding more hardware, more chips, more memory and more powerful networks. This brute force approach has helped AI make big leaps, but it’s also created a new challenge: Data centers are becoming energy-hungry giants.

Some tech companies are responding by looking to power data centers with their own fossil fuel and nuclear power plants. AI energy demand has also spurred efforts to make more efficient computer chips.

I’m a computer engineer and a professor at Georgia Tech who specializes in high-performance computing. I see another path to curbing AI’s energy appetite: Make data centers more resource aware and efficient.

Energy and heat

Modern AI data centers can use as much electricity as a small city. And it’s not just the computing that eats up power. Memory and cooling systems are major contributors, too. As AI models grow, they need more storage and faster access to data, which generates more heat. Also, as the chips become more powerful, removing heat becomes a central challenge.

Small blue and green lights arranged in columns glow behind black mesh screens
Data centers house thousands of interconnected computers.
Alberto Ortega/Europa Press via Getty Images

Cooling isn’t just a technical detail; it’s a major part of the energy bill. Traditional cooling is done with specialized air conditioning systems that remove heat from server racks. New methods like liquid cooling are helping, but they also require careful planning and water management. Without smarter solutions, the energy requirements and costs of AI could become unsustainable.

Even with all this advanced equipment, many data centers aren’t running efficiently. That’s because different parts of the system don’t always talk to each other. For example, scheduling software might not know that a chip is overheating or that a network connection is clogged. As a result, some servers sit idle while others struggle to keep up. This lack of coordination can lead to wasted energy and underused resources.

A smarter way forward

Addressing this challenge requires rethinking how to design and manage the systems that support AI. That means moving away from brute-force scaling and toward smarter, more specialized infrastructure.

Here are three key ideas:

Address variability in hardware. Not all chips are the same. Even within the same generation, chips vary in how fast they operate and how much heat they can tolerate, leading to heterogeneity in both performance and energy efficiency. Computer systems in data centers should recognize differences among chips in performance, heat tolerance and energy use, and adjust accordingly.

Adapt to changing conditions. AI workloads vary over time. For instance, thermal hotspots on chips can trigger the chips to slow down, fluctuating grid supply can cap the peak power that centers can draw, and bursts of data between chips can create congestion in the network that connects them. Systems should be designed to respond in real time to things like temperature, power availability and data traffic.

How data center cooling works.

Break down silos. Engineers who design chips, software and data centers should work together. When these teams collaborate, they can find new ways to save energy and improve performance. To that end, my colleagues, students and I at Georgia Tech’s AI Makerspace, a high-performance AI data center, are exploring these challenges hands-on. We’re working across disciplines, from hardware to software to energy systems, to build and test AI systems that are efficient, scalable and sustainable.

Scaling with intelligence

AI has the potential to transform science, medicine, education and more, but risks hitting limits on performance, energy and cost. The future of AI depends not only on better models, but also on better infrastructure.

To keep AI growing in a way that benefits society, I believe it’s important to shift from scaling by force to scaling with intelligence.

The Conversation

Divya Mahajan owns shares in Google, AMD, Microsoft, and Nvidia. She receives funding from Google and AMD.

ref. AI’s ballooning energy consumption puts spotlight on data center efficiency – https://theconversation.com/ais-ballooning-energy-consumption-puts-spotlight-on-data-center-efficiency-254192

AI is transforming weather forecasting − and that could be a game changer for farmers around the world

Source: The Conversation – USA (2) – By Paul Winters, Professor of Sustainable Development, University of Notre Dame

Weather forecasts help farmers figure out when to plant, where to use fertilizer and much more. Maitreya Shah/Studio India

For farmers, every planting decision carries risks, and many of those risks are increasing with climate change. One of the most consequential is weather, which can damage crop yields and livelihoods. A delayed monsoon, for example, can force a rice farmer in South Asia to replant or switch crops altogether, losing both time and income.

Access to reliable, timely weather forecasts can help farmers prepare for the weeks ahead, find the best time to plant or determine how much fertilizer will be needed, resulting in better crop yields and lower costs.

Yet, in many low- and middle-income countries, accurate weather forecasts remain out of reach, limited by the high technology costs and infrastructure demands of traditional forecasting models.

A new wave of AI-powered weather forecasting models has the potential to change that.

A farmer in a field holds a dried out corn stalk.
A farmer holds dried-up maize stalks in his field in Zimbabwe on March 22, 2024. A drought had caused widespread water shortages and crop failures.
AP Photo/Tsvangirayi Mukwazhi

By using artificial intelligence, these models can deliver accurate, localized predictions at a fraction of the computational cost of conventional physics-based models. This makes it possible for national meteorological agencies in developing countries to provide farmers with the timely, localized information about changing rainfall patterns that the farmers need.

The challenge is getting this technology where it’s needed.

Why AI forecasting matters now

The physics-based weather prediction models used by major meteorological centers around the world are powerful but costly. They simulate atmospheric physics to forecast weather conditions ahead, but they require expensive computing infrastructure. The cost puts them out of reach for most developing countries.

Moreover, these models have mainly been developed by and optimized for northern countries. They tend to focus on temperate, high-income regions and pay less attention to the tropics, where many low- and middle-income countries are located.

A major shift in weather models began in 2022 as industry and university researchers developed deep learning models that could generate accurate short- and medium-range forecasts for locations around the globe up to two weeks ahead.

These models worked at speeds several orders of magnitude faster than physics-based models, and they could run on laptops instead of supercomputers. Newer models, such as Pangu-Weather and GraphCast, have matched or even outperformed leading physics-based systems for some predictions, such as temperature.

A woman in a red sari tosses pellets into a rice field.
A farmer distributes fertilizer in India.
EqualStock IN from Pexels

AI-driven models require dramatically less computing power than the traditional systems.

While physics-based systems may need thousands of CPU hours to run a single forecast cycle, modern AI models can do so using a single GPU in minutes once the model has been trained. This is because the computationally intensive part is the training, during which the AI model learns relationships in the climate from data. Once trained, the model can apply those learned relationships to produce a forecast without further extensive computation, a major shortcut. In contrast, physics-based models must calculate the physics for each variable at each place and time for every forecast produced.

While training these models from physics-based model data does require significant upfront investment, once the AI is trained, the model can generate large ensemble forecasts — sets of multiple forecast runs — at a fraction of the computational cost of physics-based models.
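A toy Python sketch can show why ensembles become cheap once training is done. Everything here is illustrative: `trained_model` stands in for a trained network (simple arithmetic rather than real meteorology), and the numbers are invented.

```python
import random

def trained_model(state):
    """Stand-in for one forecast step of a trained AI model:
    cheap arithmetic replaces expensive physics calculations."""
    return 0.9 * state + 1.5

def ensemble_forecast(initial_state, n_members=50, steps=14, spread=0.3):
    """Run many cheap forecasts from slightly perturbed initial
    conditions, mimicking how trained AI models generate large
    ensembles at a fraction of the cost of physics-based models."""
    members = []
    for _ in range(n_members):
        # Perturb the starting state to represent observational uncertainty
        state = initial_state + random.gauss(0.0, spread)
        for _ in range(steps):  # one model step per forecast day
            state = trained_model(state)
        members.append(state)
    mean = sum(members) / n_members
    return mean, members

random.seed(0)
mean, members = ensemble_forecast(20.0)
```

Each additional ensemble member costs only a few arithmetic operations here; in a physics-based system, each member would be another full supercomputer run.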

Even the expensive step of training an AI weather model shows considerable computational savings. One study found the early model FourCastNet could be trained in about an hour on a supercomputer, making its time to produce a forecast thousands of times faster than that of state-of-the-art physics-based models.

The result of all these advances: high-resolution forecasts globally within seconds on a single laptop or desktop computer.

Research is also rapidly advancing to expand the use of AI for forecasts weeks to months ahead, which helps farmers in making planting choices. AI models are already being tested for improving extreme weather prediction, such as for extratropical cyclones and abnormal rainfall.

Tailoring forecasts for real-world decisions

While AI weather models offer impressive technical capabilities, they are not plug-and-play solutions. Their impact depends on how well they are calibrated to local weather, benchmarked against real-world agricultural conditions, and aligned with the actual decisions farmers need to make, such as what and when to plant, or when drought is likely.

To unlock its full potential, AI forecasting must be connected to the people whose decisions it’s meant to guide.

That’s why groups such as AIM for Scale, a collaboration we work with as researchers in public policy and sustainability, are helping governments to develop AI tools that meet real-world needs, including training users and tailoring forecasts to farmers’ needs. International development institutions and the World Meteorological Organization are also working to expand access to AI forecasting models in low- and middle-income countries.

A man sells grain in Dawanau International Market in Kano, Nigeria on July 14, 2023.
Many low-income countries in Africa face harsh effects from climate change, from severe droughts to unpredictable rain and flooding. The shocks worsen conflict and upend livelihoods.
AP Photo/Sunday Alamba

AI forecasts can be tailored to context-specific agricultural needs, such as identifying optimal planting windows, predicting dry spells or planning pest management. Disseminating those forecasts through text messages, radio, extension agents or mobile apps can then help reach farmers who can benefit. This is especially true when the messages themselves are constantly tested and improved to ensure they meet the farmers’ needs.

A recent study in India found that when farmers there received more accurate monsoon forecasts, they made more informed decisions about what and how much to plant – or whether to plant at all – resulting in better investment outcomes and reduced risk.

A new era in climate adaptation

AI weather forecasting has reached a pivotal moment. Tools that were experimental just five years ago are now being integrated into government weather forecasting systems. But technology alone won’t change lives.

With support, low- and middle-income countries can build the capacity to generate, evaluate and act on their own forecasts, providing farmers with valuable information that has long been missing from their weather services.

The Conversation

Paul Winters receives funding from the Gates Foundation. He is the Executive Director of AIM for Scale.

Amir Jina receives funding from AIM for Scale.

ref. AI is transforming weather forecasting − and that could be a game changer for farmers around the world – https://theconversation.com/ai-is-transforming-weather-forecasting-and-that-could-be-a-game-changer-for-farmers-around-the-world-263030

Green gruel? Pea soup? What Westerners thought of matcha when they tried it for the first time

Source: The Conversation – USA (2) – By Rebecca Corbett, Japanese Studies Librarian and Senior Lecturer in History, University of Southern California

Matcha lattes are prepared at a cafe in the Los Feliz neighborhood of Los Angeles in May 2025. Frederic J. Brown/AFP via Getty Images

“Matcha mania” shows no signs of slowing, with global demand pushing “supply chains to the brink,” as Australia’s ABC News reported in July 2025.

The powdered drink retains a massive following in Tokyo, where long lines of customers snake out of The Matcha Tokyo on any given Saturday. At the trendy, minimalist cafe, the staff uses a cast-iron kettle and a bamboo ladle. Both are a nod to the traditional Japanese way of preparing matcha, called “chanoyu,” which literally means “hot water for tea” but in English has been translated as “tea ceremony.”

Beyond Tokyo, matcha cafes and bars have also become a familiar sight in Western cities, from Stockholm to Melbourne to Los Angeles. Matcha has been a permanent fixture on the menu at Starbucks since 2019 and at Dunkin’ since 2020.

It’s been quite the rise for a drink long met with skepticism in the West.

World’s fairs serve as a stage

I spent part of 2024 as a Japan Foundation Fellow at Waseda University, where I researched how Westerners experienced matcha and chanoyu during the Meiji period, an era of rapid modernization and Westernization that lasted from 1868 to 1912.

Matcha is a form of green tea in which young tea leaves have been ground to a powder using a stone mill. Unlike other teas, whose leaves are steeped and then removed before drinking, matcha is whisked directly into hot water.

Spoon holding a pile of green powder.
Matcha is made using tea leaves ground into a powder.
Sina Schuldt/Picture Alliance via Getty Images

Matcha actually originated in China. It was introduced to Japan around 1250 C.E., where it assumed a key role in chanoyu from the 1500s on. Portuguese Jesuit missionaries in Japan in the 1500s wrote about both matcha and chanoyu. But only in the 19th century did interest in matcha really take off outside Japan.

Beginning in the late 19th century, world’s fairs and expositions started being held in European and American cities. These events allowed countries from around the world to showcase their art, inventions and culture before huge audiences.

For emerging Japan, world’s fairs and expositions presented a tremendous opportunity. In its exhibits, Japanese representatives often gave chanoyu demonstrations, while both the Japanese government and tea industry heavily marketed all varieties of Japanese green tea, including matcha.

Initial skepticism

Though steeped Japanese green tea became popular in 19th-century America, where it was usually sipped with milk and sugar, matcha didn’t initially jibe with Western palates.

Eliza Ruhamah Scidmore, an American journalist and travel writer who spent decades living in Japan, described matcha as “a bowl of green gruel more bitter than quinine” in her 1891 book “Jinrikisha Days in Japan.” Wealthy Canadian tourist Katharine Schuyler Baxter detailed her experience at a matcha tea gathering in her 1895 book “In Beautiful Japan: A Story of Bamboo Lands.”

“The beverage is made of powdered leaves, is greenish in color, thick like pea-soup, fragrant, and not very palatable,” she wrote. In my research, I found “pea soup” to be the most common descriptor of matcha at this time.

Descriptions of matcha and chanoyu also abound in newspaper articles from the era.

Canadian journalist Helen E. Gregory-Flesher described the Japanese tea ceremony for readers in San Francisco.

“Very few Europeans can drink it without feeling very unhappy,” she wrote of a thick preparation of matcha called “koicha.” “For in the first place the taste is not agreeable, and then it is so intensely strong that it is sure to disagree with them if they do manage to swallow it.”

For the St. Louis Globe Democrat, the Countess Anna de Montaigu reported on a tea gathering she attended at the St. Louis World’s Fair in 1904. She described matcha’s flavor as “exquisite,” but left her American readers with a warning: “Drunk without sugar or cream, this expensive tea … is not pleasant to the palate of the uninitiated.”

Embracing the ceremony

There are also records of a few Westerners studying chanoyu while living in Japan. While those records don’t include their thoughts on matcha, I have to assume they enjoyed drinking it – at least enough to continue their practice, since in all cases they studied chanoyu for several years.

Chanoyu isn’t a simple serving ceremony. It’s a practice that involves learning the range of ways to serve and receive matcha, as well as food, and it’s taught by various “lineages,” or schools.

Lessons involve students learning how to be a host and a guest through observation and practice. All of this learning is put into practice by hosting or being a guest at a formal tea gathering, called a “chaji.” This can last three to four hours and include a multicourse meal – the “kaiseki” – several rounds of sake, and the laying and replenishing of charcoal.

There are two servings of matcha; one is prepared as thick tea – koicha – the other as thinner tea known as “usucha.” Each is accompanied by sweets.

A Swedish woman named Ida Trotzig lived in Japan from 1888 to 1921, during which time she took lessons in chanoyu. Upon returning to Sweden, she published a book about chanoyu in 1911, “Cha-no-yu Japanernas teceremoni.” American Mary Averil also studied both chanoyu and ikebana, the art of Japanese flower arrangement.

Newspaper clipping featuring images of flowers and a seated white woman dressed in Japanese garb performing a tea ceremony.
Mary Averil performs chanoyu in a 1911 issue of The San Francisco Call.
Library of Congress

In 1905, the Urasenke School of Tea in Kyoto welcomed three American sisters, Helen, Grace and Florence Scottfield, as students. There, they studied under the head of the school, and a photograph of all three girls wearing kimonos with their hair styled in a Japanese manner appeared in an issue of the school’s monthly magazine in 1908.

Matcha minus chanoyu

Scholars haven’t pinpointed the reasons for the recent global matcha boom. But I think it’s worth considering a few factors.

First, it’s clear that social media, particularly Instagram and TikTok, have played a big role. The bright green beverage is aesthetically pleasing. Its many purported health benefits have also allowed it to join the ranks of other viral superfoods, such as acai berries and kombucha.

Then there’s the way Westerners often mythologize Japan as a source of “ancient wisdom.” Accompanying that is a particular infatuation with traditional Japanese practices, lifestyles and foods – matcha included.

Finally, people seem drawn to the minimalist aesthetics associated with chanoyu, which have echoes in other Japanese practices such as dry rock gardening and calligraphy.

Green, frothy drink in a craft ceramic cup, next to a dish of green powder, a ceramic teapot and a bamboo whisk.
Many cafes use tools involved in chanoyu such as bamboo scoops and whisks.
Natasha Breen/REDA/Universal Images Group via Getty Images

Interestingly, the vast majority of matcha drinkers today don’t experience chanoyu, even as matcha purveyors borrow from the practice’s aesthetics. In the late 19th century, you couldn’t drink matcha without first experiencing chanoyu. And the drink was always served “straight” – no milks, flavorings or sweeteners.

Sometimes I wonder what the Countess de Montaigu would order if she visited Pipers Tea and Coffee, which is rated the best matcha in St. Louis on Yelp. Would she prefer it straight? Or would she be won over by its In Bloom Latte – a vanilla matcha latte topped with cherry blossom-sakura cold foam?

The Conversation

Rebecca Corbett receives funding from The Japan Foundation. She is affiliated with the Urasenke School of Tea through membership in the Urasenke Tankokai Los Angeles Association.

ref. Green gruel? Pea soup? What Westerners thought of matcha when they tried it for the first time – https://theconversation.com/green-gruel-pea-soup-what-westerners-thought-of-matcha-when-they-tried-it-for-the-first-time-263014