Ancient India, Living Traditions: an earnest effort to show how the art of Hinduism, Buddhism and Jainism is sacred and personal

Source: The Conversation – UK – By Ram Prasad, Fellow of the British Academy and Distinguished Professor in the Department of Politics, Philosophy and Religion, University of Leicester

The British Museum’s Ancient India, Living Traditions exhibition brings together exhibits on the sacred art of Hinduism, Buddhism and Jainism. It also encompasses the spread of the devotional art of these traditions to other parts of Asia.

The exhibition speaks to religious identity and relationships. Buddhism and Jainism distinguish themselves from the vast surrounding traditions that together we call Hinduism; but they have close kinship with it in practices, beliefs and iconography. Museums that have presented sculptures in isolation have usually not attempted to narrate this complex history.

Not all the items displayed, some going back 2,000 years, are of purely historical interest. There are representations of traditions that are continuously living in a way the gods of ancient Egypt or classical Europe are not.

The most instantly recognisable example of such a living ancient tradition is likely to be the statues of the elephant-headed deity Ganesha. Visitors can see a rare and valuable 4th-century sandstone Ganesha on show. They can also see a small bronze version of that ancient Ganesha, like the kind you would find in people’s homes and to which a quick prayer would be addressed every morning.

The question of how to respect that sense of the sacred while still mounting an exhibition is a moral and aesthetic challenge that few museums (including in India) have started to address. It’s not uncommon to see such pieces wrenched from the reality of their continued practice and presented in secular art displays. Here, however, the curators have tried to make connections between “statues” on display and “icons” in temples and homes.




Finally, there’s the problematic history of the imperial museum and its need to reckon with its past. Most objects on display in this exhibition, and The British Museum more widely, have been presented with scarcely any acknowledgement of how they came to be acquired.

The exhibition makes an earnest effort to tackle most of these issues.

Ancient but not dead

The spaces of the exhibition are structured to be respectful of the historical and contemporary sensitivities of Buddhism and Jainism. This is signalled through subtle changes of colour and the placement of translucent drapery, allowing for transitions between distinct Jain, Buddhist and Hindu displays.

At the same time, conceptual and sensory commonalities are powerfully conveyed. The first space focuses on nature spirits and demi-deities that are shared across all the ancient traditions. The air is filled with the sound of south Asian birds and musical instruments. The explanatory labels draw attention to the percolation of iconographic features between traditions, for instance, those between the Buddha and the Jaina teachers, or the direct inclusion of the deity of learning (Sarasvati) in both Hindu and Jain worship.

Also well presented is a final space on the spread of south Asian iconography to central, east and southeast Asia. This is a long story that needs its own telling, but can only be hinted at through some beautifully chosen figures.

It’s the curators’ use of a community advisory panel of people who practise such traditions today that gives the information its sensitivity. Their involvement in the exhibition’s production can be seen in a marked mindfulness that the content and symbols of these otherwise inert objects are alive and sacred to hundreds of millions.

For example, one Ganesha from Java in Indonesia draws attention to different elements of his iconography. There is the trans-continentally stable depiction of his having a broken tusk (which, as Hindus will know, he is said to have broken off to write down the epic Mahabharata). But this Ganesha also holds a skull, which is unique to the Javanese version. The label gently points out that “various communities understood and worshipped him differently”.

The combination of community engagement and creative presentation not only conveys a sense of respect for the traditions, but also elicits a respectful response from visitors. Those from within the tradition will note with satisfaction the description of a symbol or icon. Those from outside the traditions are invited to look at the exhibits with attention and care as they might in a cathedral.

I saw a pair of young Indian Americans looking at a fossilised ammonite from Nepal that is taken as a symbolic representation of god for worshippers of Vishnu. They animatedly compared it to the one in their own diasporic home.

Elsewhere in the exhibition, I caught an elderly English couple standing in wondering silence in front of a drum slab from the famous 1st century BC Amaravathi Buddhist site in south India. This slab was carved just before figural representations of the Buddha rapidly gained in popularity. Here, there are symbols associated with him, but the Buddha himself is represented by the empty seat from which he has departed.

How did it all get here?

One potential interpretive danger lies in the emphasis on continuity between past objects and present realities. Hindus today from social backgrounds that did not have the privilege of reaching back to high sacred art might ask where they sit in the smoothed-out historical narrative. More broadly, there is no acknowledgement of the complexity of Hindu identity and its formation across centuries, regions, social strata, languages and theologies.

The weakest part of this exhibition’s generally innovative retelling is the faint-hearted way in which it obliquely acknowledges the dubious acquisition process of the British Museum. To say something was “collected” by a major general “while serving in the East India Company army” is hardly facing up to the question with which the exhibition boldly begins: “How did it get here?”

This exhibition offers a powerful visual narrative of the multi-spiritual traditions of ancient India, mounted with sensitivity to their living communities today. Its immersive presentation is appealing, and the story it tells is respectful and innovative.

The task of honest self-representation and difficult conversations on reparation remain. Within that larger imperative, Ancient India, Living Traditions is a step in the right direction. It is a direction towards addressing context, responsiveness and engagement that museums can no longer ignore.

Ancient India, Living Traditions is on at The British Museum, London, until October 19 2025


Get your news from actual experts, straight to your inbox. Sign up to our daily newsletter to receive all The Conversation UK’s latest coverage of news and research, from politics and business to the arts and sciences.

The Conversation

Ram Prasad does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Ancient India, Living Traditions: an earnest effort to show how the art of Hinduism, Buddhism and Jainism is sacred and personal – https://theconversation.com/ancient-india-living-traditions-an-earnest-effort-to-show-how-the-art-of-hinduism-buddhism-and-jainism-is-sacred-and-personal-262163

Starmer’s move on Palestinian statehood is clever politics

Source: The Conversation – UK – By Brian Brivati, Visiting Professor of Contemporary History and Human Rights, Kingston University

Keir Starmer has announced that the UK will recognise Palestinian statehood by September 2025 unless Israel meets certain conditions, marking a significant shift in UK policy.

For decades, successive UK governments withheld recognition, insisting it could only come as part of a negotiated settlement between Israel and Palestine. This position, rooted in the Oslo accords of the 1990s and aligned with US policy, effectively gave Israel a veto over Palestinian statehood. As long as Israel refused to engage seriously in peace talks, the UK refrained from acting.

Starmer has now broken with this precedent, potentially aligning the UK with 147 other countries. But the Israeli government must take what the UK calls “substantive steps” toward peace. These include agreeing to a ceasefire in Gaza, allowing full humanitarian access, explicitly rejecting any plans to annex West Bank territory, and returning to a credible peace process aimed at establishing a two-state solution.




Read more:
UK to recognise Palestinian statehood unless Israel agrees to ceasefire – here’s what that would mean


If Israel meets these conditions, the UK would presumably withhold recognition until the “peace process” has been completed. Starmer made clear that Britain will assess Israeli compliance in September and reserves the right to proceed with recognition regardless of Israel’s response. The message was unambiguous: neither side will have a veto.

This is more than just clever internal politics and party management. Anything that puts any pressure on Israel to move towards peace should be welcomed. But will it amount to much more than that?

Starmer has faced criticism over the last few years for resisting recognising Palestine as a state. While Labour’s frontbench held the line for much of the past year, rank-and-file discontent has grown – and with it, the political risks.

At the heart of Labour’s internal tensions lie two irreconcilable blocs. On one side are MPs and activists – both inside the party and expelled from it – who are vocally pro-Palestinian and have been outraged by the government’s failure to act. On the other side are members of the Labour right who continue to back Israel, oppose unilateral recognition of statehood and focus on the terrible crimes of Hamas but not the IDF campaign in Gaza.

Between them sits a soft-centre majority, for whom foreign policy is not a defining issue. They are not ideologically committed to either side but have become increasingly uneasy with the escalating violence and the UK’s diplomatic inertia.

As the humanitarian catastrophe in Gaza deepens, public outrage in the UK has grown. Mass protests have put mounting pressure on the government to act. Within parliament, over 200 MPs, including many from Labour, signed a letter demanding immediate recognition of Palestine. Senior cabinet ministers reportedly pushed hard for the shift on electoral grounds, as well as principle.

International dynamics have also played a crucial role. France’s announcement that it would recognise Palestine by September, becoming the first major western power to do so, created additional pressure. Spain, Ireland, Norway and several other European states have already taken the step. Britain chose to align itself with this emerging consensus.

These pressures combined created a sense of urgency and political opportunity. Starmer’s government appears to be using the threat of recognition as leverage – pressuring Israel to return to negotiations and halt annexation plans.

The calculation seems to be that Israel will either meet the UK’s conditions or face diplomatic consequences, including recognition of Palestine without its consent. There is also the possibility that Israel will simply ignore the UK and press on with its campaign for “Greater Israel”.

Challenges ahead

That is why, while this is a meaningful departure from the past, it is not without problems. Chief among them is the principle of conditionality itself. By making recognition contingent on Israeli behaviour, the UK risks reinforcing the very logic it claims to be rejecting – that Palestinian rights can be granted or withheld based on the actions of the occupying power.

Recognition of statehood should not be used as a diplomatic carrot or stick. It is a matter of justice, not reward. Palestinians are entitled to self-determination under international law.

There is also concern that the September deadline could become another missed opportunity. If Israel makes vague or symbolic gestures – such as issuing carefully worded statements or temporarily suspending one settlement expansion – will the UK delay recognition further, claiming that “progress” is being made?

Palestinians have seen such tactics before. Recognition has been delayed for decades in the name of preserving leverage. But leverage for what?

The Israeli government, dominated by ultra-nationalists and pro-annexation hardliners, is unlikely to satisfy the UK’s conditions in good faith. The risk is that the deadline becomes a mirage – always imminent, never reached.

Recognition also comes as part of a proposed new peace plan. This will be supported by the UK, France and Germany, and it allows the government to say it is being consistent with its policy that recognition is part of a peace plan.

If, by some miracle, pressure works and Israel meets all the conditions, then the UK can claim that recognition has played a role in bringing Israel back to the negotiating table.

But if recognition is then withheld, there will not be two equal actors at that table. The State of Palestine will not have been recognised by key international players, and a new round of western-run peace processes will begin. These do not have a good track record.

If Israel fails to agree to a ceasefire and let aid into Gaza, then Starmer will be forced to go through with recognition.

For now, he has defused the internal division in his party. It is clever politics, good party management – it remains to be seen if it is also statesmanship.



The Conversation

Brian Brivati is affiliated with the Britain Palestine Project, a Scottish charity that campaigns for equal rights, justice and security for Israelis and Palestinians.

ref. Starmer’s move on Palestinian statehood is clever politics – https://theconversation.com/starmers-move-on-palestinian-statehood-is-clever-politics-262239

As climate change hits, what might the British garden of the future look like?

Source: The Conversation – UK – By Adele Julier, Senior Lecturer in Ecology, University of Portsmouth

Maria Evseyeva/Shutterstock

Hosepipe bans in summer 2025 mean many gardeners will have to choose which of their plants to keep going with the watering can, and which to abandon. Are these temporary restrictions actually a sign that we need to rethink British gardens altogether?

Climate change will bring the United Kingdom warmer, wetter winters and hotter, drier summers. Britain has seen warm periods before, such as in the last interglacial period 130,000 years ago, but the current speed of change is unprecedented. This will have many effects, but it will also change one of the core parts of British life: our gardens.

Rather than fighting the inevitable and trying to keep growing the same plants we have always grown, how might we adapt what we grow and how we grow it?

The first to go, tragically for some, may be the classic British lawn. Already this year across the country, large areas of grass are looking parched and brown in the face of a long drought. The traditional lawn has just a few species of grass and is unlikely to be very drought-resistant. You can maintain a grass lawn that is more tolerant of dry weather by using drought-resistant fescue species of grass, and keeping the lawn well aerated (that means putting small holes in it to allow air, water and nutrients to reach the grass roots). But it may still suffer periods in which it looks unhealthy.

Swapping a lawn for a meadow can increase drought tolerance and decrease maintenance such as regular mowing and watering, because meadows only need to be cut once a year and don’t need as much water. Perhaps instead of lawns we can embrace No Mow May all year round, creating a greater diversity of plant and animal life in gardens.

Wildflowers such as yarrow and common knapweed can be great for pollinators and the birds that feed on them. These plants are drought-tolerant too.

As well as challenges in the face of a changing climate, there will be opportunities. Grape vines were grown in Britain in Roman times, and British wine production is once again a growing industry. Regular British gardeners could also grow a wider variety of grape vines, and even make their own wine. Warmer, drier summers could make plants such as citrus and olive trees easier to grow, with fruits more likely to ripen and less likely to be lost to frost in winter. Sunflowers, while they already grow here, could also thrive in the new conditions.

There will be a shift in the best types of decorative plants for gardens, with those needing lots of water, such as hydrangeas, delphiniums and gentians, becoming difficult to grow. We could look to the Mediterranean for inspiration, and choose shrubs such as thyme and lavender, or climbers like passion flowers, that need less water. It is also possible to grow a drought-tolerant garden with plants that are native to Britain, such as species of Geranium and Sedum. Coastal plants such as sea kale and sea holly that grow in harsh, rocky conditions can also make great garden plants in a drier climate.

Sea holly doesn’t mind our changing climate as much as other garden plants.
olko1975/Shutterstock

Finally, the way we garden will need to change. Setting up water storage systems, from simple water butts to larger, more complex systems that could include grey water harvesting (used but clean water from baths and washing up) or underground water storage, will help gardeners to make the most of storms by storing the rainwater for use during droughts. You can set up a dispersion system to recycle lightly used household water, such as from a dishwasher or shower.

Soil health is important too, as soils with more organic matter are better at holding water. Composting food waste to add to soil would be a great way of helping to increase the organic content and make watering more efficient. This has the added value of avoiding peat composts. Peat comes from wetlands and it will eventually run out. Peat harvesting also releases carbon dioxide into the atmosphere, contributing to climate change.

The next few decades will be challenging for gardeners. Britain will probably experience an increase in prolonged droughts and other extreme weather, as well as overall warming caused by climate change. Our gardens may cover a small proportion of land in the UK. But we can use them to experiment and develop sustainable ways of existing, growing not just new plants but also hope in the face of adversity.



The Conversation

Adele Julier does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. As climate change hits, what might the British garden of the future look like? – https://theconversation.com/as-climate-change-hits-what-might-the-british-garden-of-the-future-look-like-261608

Why the Pacific tsunami was smaller than expected – a geologist explains

Source: The Conversation – UK – By Alan Dykes, Associate Professor in Engineering Geology, Kingston University

The earthquake near the east coast of the Kamchatka peninsula in Russia on July 30 2025 generated tsunami waves that have reached Hawaii and coastal areas of the US mainland. The earthquake’s magnitude of 8.8 is significant, potentially making it one of the largest quakes ever recorded.

Countries around much of the Pacific, including in east Asia, North and South America, issued alerts and in some cases evacuation orders in anticipation of potentially devastating waves. Waves of up to four metres hit coastal towns in Kamchatka near where the earthquake struck, apparently causing severe damage in some areas.

But in other places waves have been smaller than expected, including in Japan, which is much closer to Kamchatka than most of the Pacific rim. Many warnings have now been downgraded or lifted with relatively little damage. It seems that, for the size of the earthquake, the tsunami has been rather smaller than might have been expected. To understand why, we can look to geology.

The earthquake was associated with the Pacific tectonic plate, one of several major pieces of the Earth’s crust. This pushes north-west against the part of the North American plate that extends west into Russia, and is forced downwards beneath the Kamchatka peninsula in a process called subduction.

The United States Geological Survey (USGS) says the average rate of convergence – a measure of plate movement – is around 80mm per year. This is one of the highest rates of relative movement at a plate boundary.

But this movement tends to take place as an occasional sudden movement of several metres. In any earthquake of this type and size, the displacement may occur over a contact area between the two tectonic plates of slightly less than 400km by 150km, according to the USGS.
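A quick back-of-the-envelope check (my own illustrative arithmetic, not a figure from the article) shows why such quakes are separated by decades: at roughly 80mm of convergence per year, several metres of sudden slip take more than half a century to accumulate. The 5m slip below is an assumed stand-in for "several metres".

```python
# Illustrative sketch: time needed to accumulate one sudden slip event,
# given the USGS convergence rate quoted above. The 5 m slip is an
# assumed value for "several metres", not a measured figure.
convergence_rate_m_per_yr = 0.080  # ~80 mm per year
assumed_slip_m = 5.0               # "a sudden movement of several metres"

years_to_accumulate = assumed_slip_m / convergence_rate_m_per_yr
print(years_to_accumulate)  # 62.5 -> roughly decades between great earthquakes
```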

The Earth’s crust is made of rock that is very hard and brittle at the small scale and near the surface. But over very large areas and depths, it can deform with slightly elastic behaviour. As the subducting slab – the Pacific plate – pushes forward and descends, the depth of the ocean floor may suddenly change.

Nearer to the coastline, the crust of the overlying plate may be pushed upward as the other pushes underneath, or – as was the case off Sumatra in 2004 – the outer edge of the overlying plate may be dragged down somewhat before springing back a few metres.

It is these near-instantaneous movements of the seabed that generate tsunami waves by displacing huge volumes of ocean water. For example, if the seabed rose just one metre across an area of 200 by 100km where the water is 1km deep, then the volume of water displaced would fill Wembley stadium to the roof 17.5 million times.
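The displaced volume in that example can be checked directly (a sketch of the article's own arithmetic; the Wembley comparison itself is the article's and is not recomputed here):

```python
# Volume of water displaced by a 1 m seabed rise over a 200 km x 100 km area.
length_m = 200_000  # 200 km
width_m = 100_000   # 100 km
uplift_m = 1.0      # one-metre rise of the seabed

displaced_volume_m3 = length_m * width_m * uplift_m
print(f"{displaced_volume_m3:.1e}")  # 2.0e+10 cubic metres of displaced water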

A one-metre rise like this will then propagate away from the area of the uplift in all directions, interacting with normal wind-generated ocean waves, tides and the shape of the sea floor to produce a series of tsunami waves. In the open ocean, the tsunami wave would not be noticed by boats and ships, which is why a cruise ship in Hawaii was quickly moved out to sea.

Waves sculpted by the seabed

The tsunami waves travel across the deep ocean at up to 440 miles per hour, so they may be expected to reach any Pacific Ocean coastline within 24 hours. However, some of their energy will dissipate as they cross the ocean, so they will usually be less hazardous at the furthest coastlines away from the earthquake.
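That quoted speed is consistent with the standard shallow-water wave approximation v = √(gd) – a formula I am adding for illustration (the article does not state it) – assuming an average deep-ocean depth of about 4km:

```python
import math

# Shallow-water wave speed v = sqrt(g * d), an approximation added here;
# the 4 km depth is an assumed average for the deep Pacific.
g = 9.81          # gravitational acceleration, m/s^2
depth_m = 4000.0  # assumed average deep-ocean depth

speed_ms = math.sqrt(g * depth_m)
speed_mph = speed_ms * 3600 / 1609.344
print(round(speed_mph))  # ~443 mph, close to the article's "up to 440 mph"
```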

The hazard arises from how the waves are modified as the seabed rises towards a shoreline. They will slow and, as a result, grow in height, creating a surge of water towards and then beyond the normal coastline.

The Kamchatka earthquake was slightly deeper in the Earth’s crust (20.7km) than the Sumatran earthquake of 2004 and the Japanese earthquake of 2011. This will have resulted in somewhat less vertical displacement of the seabed, with that movement also being slightly less abrupt. This is why tsunami warnings were lifted in many places some time before any tsunami waves would have arrived.



The Conversation

Alan Dykes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the Pacific tsunami was smaller than expected – a geologist explains – https://theconversation.com/why-the-pacific-tsunami-was-smaller-than-expected-a-geologist-explains-262273

Canada could use thermal infrastructure to turn wasted heat emissions into energy

Source: The Conversation – Canada – By James (Jim) S. Cotton, Professor, Department of Mechanical Engineering, McMaster University

Buildings are the third-largest source of greenhouse gas emissions in Canada. In many cities, including Vancouver, Toronto and Calgary, buildings are the single highest source of emissions.

The recently launched Infrastructure for Good barometer, released by consulting firm Deloitte, suggests that Canada’s infrastructure investments already top the global list in terms of positive societal, economic and environmental benefits.

In fact, over the past 150 years, Canada has built railways, roads, clean water systems, electrical grids, pipelines and communication networks to connect and serve people across the country.

Now, there’s an opportunity to build on Canada’s impressive tradition by creating a new form of infrastructure: capturing, storing and sharing the massive amounts of heat lost from industry, electricity generation and communities, even in summer.

Natural gas precedent

Indoor heating often comes from burning fossil fuels — three-quarters of Ontario homes, for example, are heated by natural gas. Until about 1966, homes across Canada were primarily heated by wood stoves, coal boilers, oil furnaces or heaters using electricity from coal-fired power plants.

After the oil crisis of the 1970s, many of those fuels were replaced by natural gas, delivered directly to individual homes. The cost of the natural gas infrastructure, including a national network of pipelines, was amortized over more than 50 years to make the cost more practical.

Sources of greenhouse gas emissions in Ontario.
(J. Cotton), CC BY

This reliable, low-cost energy source quickly proved to be popular. The change cut heating emissions across Ontario by roughly half throughout the 1970s and 1980s, long before climate change was the concern it is today.

Now, as the need to decarbonize becomes more pressing, recent studies not only suggest that the emissions-reduction benefits of natural gas are often overstated; they also indicate that burning this fuel source is still far from net-zero.

However, there’s no reason why Canadian governments can’t invest in new infrastructure-based alternative heating solutions. This time, they could replace natural gas with an alternative, net-zero source: the wasted heat already emitted by other energy uses.

Heat capture and storage

Depending on the source temperature, technology used and system design, heat can be captured throughout the year, stored and distributed as needed. A type of infrastructure called thermal networks could capture leftover heat from factories and nuclear and gas-fired power plants.

In essence, thermal networks take excess thermal energy from industrial processes (though it can, in theory, be captured from many other sources) and feed it into a series of insulated underground pipelines, which in turn heat or cool the buildings connected to them.

A substantial potential to capture heat similarly exists in every neighbourhood. Heat is produced by data centres, grocery stores, laundromats, restaurants, sewage systems and even hockey arenas.

In Ontario, the amount of energy we dump in the form of heat is greater than all the natural gas we use to heat our homes.

A restaurant, for example, can produce enough heat for seven family homes. To take advantage of the wasted heat, Canada needs to build thermal networks, corridors and storage to capture and distribute heat directly to consumers.

The effort demands substantial leadership from all levels of government. Creating these systems would be expensive, but the technology does exist, and the one-time cost would pay for itself many times over.

Such systems are already working in other cold countries. Thermal networks heat half the homes in Sweden and two-thirds of homes in Denmark.

District heating pipes being laid at Gullbergs Strandgata in Gothenburg, Sweden in May 2021.
(Shutterstock)

The oil crisis of the 1970s motivated both countries to find new domestic heating sources. They financed their new infrastructure over 50 years and reduced their investment risks through low-interest bonds (loaned by public banks) and generous subsidies.

These were offered to utility companies looking to expand district energy operations, and to consumers by incentivizing connections to such systems. Additionally, in Denmark, controlled consumer prices served a similar function.

At least seven American states have established thermal energy networks, with New York being the first. The state’s Utility Thermal Energy Network and Jobs Act allows public utilities to own, operate and manage thermal networks.

They can supply thermal energy, but so can private producers such as data centres, all with public oversight. Such a strategy avoids monopolies and allows gas and electric utilities to deliver services through central networks.

An opportunity for Canada

Canada has a real opportunity to learn from the experiences of Sweden, Denmark and New York. In doing so, Canada can create a beneficial and truly national heating system in the process. Beginning with federal government leadership, thermal networks could be built across Canada, tailored to the unique and individual needs, strengths and opportunities of municipalities and provinces.

Such a shift would reduce emissions and generate greater energy sovereignty for Canada. It could drive a just energy strategy that could provide employment opportunities for those displaced by the transition away from fossil fuels, while simultaneously increasing Canada’s economic independence in the process.

Thermal networks could be built using pipelines made from Canadian steel. Oil-well drillers from Alberta could dig borehole heat-storage systems. A new market for heat-recovery pumps would create good advanced-manufacturing jobs in Canada.




Read more:
How heat storage technologies could keep Canada’s roads and bridges ice-free all winter long


Funding for the infrastructure could come through public-private partnerships, with major investments from public banks and pension funds, earning a solid and secure rate of return. A regulated approach and process could permit this infrastructure cost to be amortized over decades, similar to the way past governments have financed gas, electrical and water networks.

As researchers studying the engineering and policy potential of such an opportunity, we view such actions as essential if net-zero is to be achieved in the Canadian building sector. They are also a win-win solution for incumbent industry, various levels of government and citizens across Canada alike.

Yet efforts to install robust thermal networks remain stalled by institutional inertia, the strong influence of the oil industry, limited citizen awareness of the technology’s potential and a tendency for government to view the electrification of heating as the primary solution to building decarbonization.

In this time of environmental crisis and international uncertainty, pushing past these barriers, drawing on Canada’s lengthy history of constructing infrastructure and creating this new form of thermal energy infrastructure would be a safe, beneficial and conscientious way to move Canada into a more climate-friendly future.

The Conversation

James (Jim) S. Cotton receives funding from the Natural Sciences and Engineering Research Council of Canada.

Caleb Duffield does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Canada could use thermal infrastructure to turn wasted heat emissions into energy – https://theconversation.com/canada-could-use-thermal-infrastructure-to-turn-wasted-heat-emissions-into-energy-254972

US government may be abandoning the global climate fight, but new leaders are filling the void – including China

Source: The Conversation – USA (2) – By Shannon Gibson, Professor of Environmental Studies, Political Science and International Relations, USC Dornsife College of Letters, Arts and Sciences

Chinese President Xi Jinping and Brazilian President Luiz Inácio Lula da Silva meet in Beijing in May 2025. Tingshu Wang/Pool Photo via AP

When President Donald Trump announced in early 2025 that he was withdrawing the U.S. from the Paris climate agreement for the second time, it triggered fears that the move would undermine global efforts to slow climate change and diminish America’s global influence.

A big question hung in the air: Who would step into the leadership vacuum?

I study the dynamics of global environmental politics, including through the United Nations climate negotiations. While it’s still too early to fully assess the long-term impact of the United States’ political shift when it comes to global cooperation on climate change, there are signs that a new set of leaders is rising to the occasion.

World responds to another US withdrawal

The U.S. first committed to the Paris Agreement in a joint announcement by President Barack Obama and China’s Xi Jinping in 2015. At the time, the U.S. agreed to reduce its greenhouse gas emissions 26% to 28% below 2005 levels by 2025 and pledged financial support to help developing countries adapt to climate risks and embrace renewable energy.

Some people praised the U.S. engagement, while others criticized the original commitment as too weak. Since then, the U.S. has cut emissions by 17.2% below 2005 levels – missing the goal, in part because its efforts have been stymied along the way.

Just two years after the landmark Paris Agreement, Trump stood in the Rose Garden in 2017 and announced he was withdrawing the U.S. from the treaty, citing concerns that jobs would be lost, that meeting the goals would be an economic burden, and that it wouldn’t be fair because China, the world’s largest emitter today, wasn’t projected to start reducing its emissions for several years.

Scientists and some politicians and business leaders were quick to criticize the decision, calling it “shortsighted” and “reckless.” Some feared that the Paris Agreement, signed by almost every country, would fall apart.

But it did not.

In the United States, businesses such as Apple, Google, Microsoft and Tesla made their own pledges to meet the Paris Agreement goals.

Hawaii passed legislation to become the first state to align with the agreement. A coalition of U.S. cities and states banded together to form the United States Climate Alliance to keep working to slow climate change.

Globally, leaders from Italy, Germany and France rebutted Trump’s assertion that the Paris Agreement could be renegotiated. Others from Japan, Canada, Australia and New Zealand doubled down on their own support of the global climate accord. In 2020, President Joe Biden brought the U.S. back into the agreement.

A solar farm in a field.
Amazon partnered with Dominion Energy to build solar farms, like this one, in Virginia. They power the company’s cloud-computing and other services.
Drew Angerer/Getty Images

Now, with Trump pulling the U.S. out again – and taking steps to eliminate U.S. climate policies, boost fossil fuels and slow the growth of clean energy at home – other countries are stepping up.

On July 24, 2025, China and the European Union issued a joint statement vowing to strengthen their climate targets and meet them. They alluded to the U.S., referring to “the fluid and turbulent international situation today” in saying that “the major economies … must step up efforts to address climate change.”

In some respects, this is a strength of the Paris Agreement – it is a legally nonbinding agreement based on what each country decides to commit to. Its flexibility keeps it alive, as the withdrawal of a single member does not trigger immediate sanctions, nor does it render the actions of others obsolete.

The agreement survived the first U.S. withdrawal, and so far, all signs point to it surviving the second one.

Who’s filling the leadership vacuum

From what I’ve seen in international climate meetings and my team’s research, it appears that most countries are moving forward.

One bloc emerging as a powerful voice in negotiations is the Like-Minded Group of Developing Countries – a group of low- and middle-income countries that includes China, India, Bolivia and Venezuela. Driven by economic development concerns, these countries are pressuring the developed world to meet its commitments to both cut emissions and provide financial aid to poorer countries.

A man with his arms crossed leans on a desk with a 'Bolivia' label in front of it.
Diego Pacheco, a negotiator from Bolivia, spoke on behalf of the Like-Minded Developing Countries group during a climate meeting in Bonn, Germany, in June 2025.
IISD/ENB | Kiara Worth

China, motivated by economic and political factors, seems to be happily filling the climate power vacuum created by the U.S. exit.

In 2017, China voiced disappointment over the first U.S. withdrawal. It maintained its climate commitments and pledged to contribute more in climate finance to other developing countries than the U.S. had committed to – US$3.1 billion compared with $3 billion.

This time around, China is using leadership on climate change in ways that fit its broader strategy of gaining influence and economic power by supporting economic growth and cooperation in developing countries. Through its Belt and Road Initiative, China has scaled up renewable energy exports and development in other countries, such as investing in solar power in Egypt and wind energy development in Ethiopia.

While China is still the world’s largest coal consumer, it has aggressively pursued investments in renewable energy at home, including solar, wind and electrification. In 2024, about half the renewable energy capacity built worldwide was in China.

Three people talk under the shade of solar panels.
China’s interest in South America’s energy resources has been growing for years. In 2019, China’s special representative for climate change, Xie Zhenhua, met with Chile’s then-ministers of energy and environment, Juan Carlos Jobet and Carolina Schmidt, in Chile.
Martin Bernetti/AFP via Getty Images

While it missed the deadline to submit its climate pledge due this year, China has a goal of peaking its emissions before 2030 and then dropping to net-zero emissions by 2060. It is continuing major investments in renewable energy, both for its own use and for export. The U.S. government, in contrast, is cutting its support for wind and solar power. China also just expanded its carbon market to encourage emissions cuts in the cement, steel and aluminum sectors.

The British government has also ratcheted up its climate commitments as it seeks to become a clean energy superpower. In 2025, it pledged to cut emissions 77% by 2035 compared with 1990 levels. Its new pledge is also more transparent and specific than in the past, with details on how specific sectors, such as power, transportation, construction and agriculture, will cut emissions. And it contains stronger commitments to provide funding to help developing countries grow more sustainably.

In terms of corporate leadership, while many American businesses are being quieter about their efforts, in order to avoid sparking the ire of the Trump administration, most appear to be continuing on a green path – despite the lack of federal support and diminished rules.

USA Today and Statista’s “America’s Climate Leader List” includes about 500 large companies that have reduced their carbon intensity – carbon emissions divided by revenue – by 3% from the previous year. The data shows that the list is growing, up from about 400 in 2023.
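The carbon-intensity metric behind the list is simple to compute. The company figures below are hypothetical, showing how intensity can fall through a mix of slightly lower emissions and higher revenue:

```python
def carbon_intensity(emissions_tonnes, revenue_usd):
    """Carbon intensity as defined above: emissions divided by revenue."""
    return emissions_tonnes / revenue_usd

# Hypothetical company, year over year:
last_year = carbon_intensity(100_000, 2_000_000_000)
this_year = carbon_intensity(98_500, 2_050_000_000)
reduction = 1 - this_year / last_year  # ~3.9%, clearing the 3% threshold
```

Note that because revenue sits in the denominator, a company can clear the 3% bar even when its absolute emissions fall only modestly.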

What to watch at the 2025 climate talks

The Paris Agreement isn’t going anywhere. Given the agreement’s design, with each country voluntarily setting its own goals, the U.S. never had the power to drive it into obsolescence.

The question is whether developed and developing country leaders alike can navigate two pressing needs – economic growth and ecological sustainability – without compromising their leadership on climate change.

This year’s U.N. climate conference in Brazil, COP30, will show how countries intend to move forward and, importantly, who will lead the way.

Research assistant Emerson Damiano, a recent graduate in environmental studies at USC, contributed to this article.

The Conversation

Shannon Gibson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. US government may be abandoning the global climate fight, but new leaders are filling the void – including China – https://theconversation.com/us-government-may-be-abandoning-the-global-climate-fight-but-new-leaders-are-filling-the-void-including-china-251786

Gene Hackman had a will, but the public may never find out who inherits his $80M fortune

Source: The Conversation – USA (2) – By Naomi Cahn, Professor of Law, University of Virginia

Gene Hackman and his wife, Betsy Arakawa, pose for a photo in 1986 in Los Angeles. Donaldson Collection/Michael Ochs Archives via Getty Images

Gene Hackman was found dead inside his New Mexico home on Feb. 26, 2025, at the age of 95. The acclaimed actor’s wife, Betsy Arakawa, had also died of a rare virus – a week before his death from natural causes.

Details about the couple’s plans for Hackman’s reportedly US$80 million fortune are only starting to emerge, months after the discovery of their tragic demise. While their wills have not yet been made public, we have seen them through a reputable source.

Both documents are short and sought to give the bulk of their assets to Hackman’s trust – a legal arrangement that allows someone to state their wishes for how their assets should be managed and distributed. Wills and trusts are similar in that both can be used to distribute someone’s property. They differ in that a trust can take effect during someone’s lifetime and continue long after their death. Wills take effect only upon someone’s death, for the purpose of distributing assets that person had owned.

Both trusts and wills can be administered by someone who does not personally benefit from the property.

Hackman, widely revered for his memorable roles in movies such as “The French Connection,” “Bonnie and Clyde” and “The Birdcage,” made it clear in his will that he wanted the trust to manage his assets, and he apparently named Arakawa as a third-party trustee. But that plan was dashed by Arakawa’s sudden death.

The person managing Hackman’s estate asked the court to appoint a new trustee, a request that the court approved, according to public records. But the court order is not public, and the trust itself remains private, so the public doesn’t yet know who will manage his estate or inherit his fortune. U.S. courts vary in how much access they provide to case records.

As law professors who specialize in trusts and estates, we teach courses about the transfer of property during life and at death. We believe that the drama playing out over Hackman’s assets offers valuable lessons for anyone leaving an estate, large or small, for their loved ones to inherit. It also is a cautionary tale for the tens of millions of Americans in stepfamilies.

‘Pour-over’ wills are a popular technique

The couple signed the wills in 2005, more than a decade before Hackman was diagnosed with dementia. There’s no reason to doubt that Hackman was of sound mind at that time. Although he had retired from acting after his last film, “Welcome to Mooseport,” was released in 2004, and led a very private life for a public figure, Hackman continued to write books and narrate documentaries for several more years.

Based on the wills that we have been able to review, Hackman and Arakawa used a popular estate planning technique that combined two documents: a lifetime trust and a will.

The first document, sometimes called a “living trust,” usually contains the most important details about who ultimately inherits a person’s property once they die. All other estate planning documents, including wills, all financial and brokerage accounts, and life insurance policies can pour assets into the trust at death by naming the trustee as the death beneficiary.

The trust is the only document that needs to be updated when life circumstances change, such as divorce, the death of a spouse, or the birth of a child. All of the other planning documents can be left alone because they already name the trustee of the trust as the property recipient.

Hackman also signed a second document, known as a “pour-over” will. A pour-over will is a catchall measure to ensure that anything owned at death ends up in the trust if it wasn’t transferred during life. Hackman’s pour-over will gave his estate at death to Arakawa as the designated trustee of the trust he had created.

The combination of a trust coupled with a pour-over will – a technique that Michael Jackson also used – offers many advantages.

One is that, if the trust is created during life, it can be administered privately at death without the cost, publicity and delay of probate – the court-supervised process for estate administration. That is why, while Hackman’s personal representative filed his will in probate court to administer any remaining property owned at death, the trust created during Hackman’s life can manage assets without court supervision.

An older man and an older woman look puzzled while reading a document.
It’s important to carefully consider what should happen if you both die around the same time.
Inside Creative House/iStock via Getty Images Plus

Who might get what

The trust document has not been made public, but Hackman’s personal representative stated that the trust “contains mainly out-of-state beneficiaries” who will inherit his assets.

Hackman’s beneficiaries are unlikely to be publicly identified because they appear in the trust rather than the pour-over will. His will does not leave anything directly to any relatives. Even Arakawa was not slated to receive anything herself, only in her role as trustee, though the will does mention his children in a paragraph describing his family.

Hackman had three children, all born during his first marriage, to Faye Maltese: Christopher, Elizabeth and Leslie. Hackman had acknowledged that it was hard for them to grow up with an often-absent celebrity father, but his daughters and one granddaughter released a statement after he died about missing their “Dad and Grandpa.” It is possible that Hackman’s children, as well as Arakawa, are named as beneficiaries of the trust.

Arakawa had no children of her own. Little is known about her family, except that her mother, now 91, is still alive. Arakawa’s will gave the bulk of her estate to Hackman as trustee of his trust, but only if he survived her by 90 days. If he failed to survive by 90 days, then she instructed her personal representative to establish a charitable trust “to achieve purposes beneficial to the community” consistent with the couple’s charitable preferences.

Her will refers to charitable “interests expressed … by my spouse and me during our lifetimes.” But it offers no specific guidance on which charities should benefit. Because Hackman did not survive Arakawa by 90 days, no part of her estate will pass to Hackman’s trust or his children.

Christopher Hackman has reportedly hired a lawyer, leading to speculation that he might contest some aspect of his father’s or stepmother’s estates.

Research shows that the average case length of a probate estate is 532 days, but individual cases can vary greatly in length and complexity. It is possible that the public may never learn what happens to the trust if the parties reach a settlement without litigation in court.

Man in tuxedo and a large bowtie stands next to two teenagers who are looking away.
Gene Hackman and his daughters, Elizabeth Hackman and Leslie Hackman, attend the screening of ‘Superman’ in 1978 at the Kennedy Center in Washington, D.C.
Ron Galella Collection via Getty Images

Takeaways for the rest of us

We believe that anyone thinking about who will inherit their property after they die can learn three important lessons from the fate of Hackman’s estate.

First, a living trust can provide more privacy than a will by avoiding the publicity of a court-supervised probate administration. It can also simplify the process for updating the estate plan by avoiding the need to amend multiple documents every time life circumstances change, such as the birth of a child or end of a marriage. Because all estate planning documents pour into the trust, the trust is the only document that requires any updating.

You don’t need a multimillion-dollar estate to justify the cost of creating a living trust. Some online platforms charge less than $400 for help creating one.

Second, remember that even when your closest loved ones are much younger than you are, it’s impossible to predict who will die first. If you do create a living trust, it should include a backup plan in case someone named in it dies before you. You can choose a “contingent beneficiary” – someone who will take the property if the primary beneficiary dies first. You can also choose a successor trustee who will manage the trust if the primary trustee dies first or declines to serve.

Finally, it’s important to carefully consider how best to divide the estate.

Hackman’s children and some of his other relatives may ultimately receive millions through his trust. But parents in stepfamilies must often make difficult decisions about how to divide their estate between a surviving spouse and any children they had with other partners.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Gene Hackman had a will, but the public may never find out who inherits his $80M fortune – https://theconversation.com/gene-hackman-had-a-will-but-the-public-may-never-find-out-who-inherits-his-80m-fortune-259650

Too many em dashes? Weird words like ‘delves’? Spotting text written by ChatGPT is still more art than science

Source: The Conversation – USA (2) – By Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

Language experts fare no better than everyday people. Aitor Diago/Moment via Getty Images

People are now routinely using chatbots to write computer code, summarize articles and books, or solicit advice. But these chatbots are also employed to quickly generate text from scratch, with some users passing off the words as their own.

This has, not surprisingly, created headaches for teachers tasked with evaluating their students’ written work. It’s also created issues for people seeking advice on forums like Reddit, or consulting product reviews before making a purchase.

Over the past few years, researchers have been exploring whether it’s even possible to distinguish human writing from artificial intelligence-generated text. But the best strategies to distinguish between the two may come from the chatbots themselves.

Too good to be human?

Several recent studies have highlighted just how difficult it is to determine whether text was generated by a human or a chatbot.

Research participants recruited for a 2021 online study, for example, were unable to distinguish between human- and AI-generated stories, news articles and recipes.

Language experts fare no better. In a 2023 study, editorial board members for top linguistics journals were unable to determine which article abstracts had been written by humans and which were generated by ChatGPT. And a 2024 study found that 94% of undergraduate exams written by ChatGPT went undetected by graders at a British university.

Clearly, humans aren’t very good at this.

A commonly held belief is that rare or unusual words can serve as “tells” regarding authorship, just as a poker player might somehow give away that they hold a winning hand.

Researchers have, in fact, documented a dramatic increase in relatively uncommon words, such as “delves” or “crucial,” in articles published in scientific journals over the past couple of years. This suggests that unusual terms could serve as tells that generative AI has been used. It also implies that some researchers are actively using bots to write or edit parts of their submissions to academic journals. Whether this practice reflects wrongdoing is up for debate.

In another study, researchers asked people about characteristics they associate with chatbot-generated text. Many participants pointed to the excessive use of em dashes – an elongated dash used to set off text or serve as a break in thought – as one marker of computer-generated output. But even in this study, the participants’ rate of AI detection was only marginally better than chance.

Given such poor performance, why do so many people believe that em dashes are a clear tell for chatbots? Perhaps it’s because this form of punctuation is primarily employed by experienced writers. In other words, people may believe that writing that is “too good” must be artificially generated.

But if people can’t intuitively tell the difference, perhaps there are other methods for determining human versus artificial authorship.

Stylometry to the rescue?

Some answers may be found in the field of stylometry, in which researchers employ statistical methods to detect variations in the writing styles of authors.

I’m a cognitive scientist who authored a book on the history of stylometric techniques. In it, I document how researchers developed methods to establish authorship in contested cases, or to determine who may have written anonymous texts.

One tool for determining authorship was proposed by the Australian scholar John Burrows. He developed Burrows’ Delta, a computerized technique that examines the relative frequency of common words, as opposed to rare ones, that appear in different texts.

It may seem counterintuitive to think that someone’s use of words like “the,” “and” or “to” can determine authorship, but the technique has been impressively effective.
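A minimal sketch of the idea behind Burrows’ Delta: z-score the relative frequencies of common function words across a corpus, then measure how far a disputed text sits from each candidate text’s profile. The tiny corpus and two-word vocabulary here are invented for illustration; real analyses use dozens or hundreds of function words and much longer texts:

```python
from collections import Counter

def freqs(text, vocab):
    """Relative frequency of each vocabulary word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in vocab]

def burrows_delta(test_text, candidate_texts, vocab):
    """Mean absolute difference of z-scored word frequencies between
    the test text and each candidate. Lower Delta = closer match."""
    profiles = [freqs(t, vocab) for t in candidate_texts]
    n, m = len(profiles), len(vocab)
    means = [sum(p[i] for p in profiles) / n for i in range(m)]
    sds = [max((sum((p[i] - means[i]) ** 2 for p in profiles) / n) ** 0.5, 1e-9)
           for i in range(m)]
    z_test = [(f - means[i]) / sds[i]
              for i, f in enumerate(freqs(test_text, vocab))]
    return [sum(abs(z_test[i] - (p[i] - means[i]) / sds[i]) for i in range(m)) / m
            for p in profiles]

# Two invented "authors": one leans on "the", the other on "and".
corpus = ["the cat the dog the bird sat",
          "the sun the moon the star rose",
          "cats and dogs and birds sat quietly",
          "sun and moon and stars rose slowly"]
deltas = burrows_delta("the fox the hen the owl flew", corpus, ["the", "and"])
# The disputed text scores a smaller Delta against the "the"-heavy author.
```

The example shows why humble words like “the” and “and” carry an authorial signature: their rates are habitual and hard to fake, so z-scoring them separates the two invented styles cleanly.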

Black-and-white photographic portrait of young woman with short hair seated and posing for the camera.
A stylometric technique called Burrows’ Delta was used to identify LaSalle Corbell Pickett as the author of love letters attributed to her deceased husband, Confederate Gen. George Pickett.
Encyclopedia Virginia

Burrows’ Delta, for example, was used to establish that Ruth Plumly Thompson, L. Frank Baum’s successor, was the author of a disputed book in the “Wizard of Oz” series. It was also used to determine that love letters attributed to Confederate Gen. George Pickett were actually the inventions of his widow, LaSalle Corbell Pickett.

A major drawback of Burrows’ Delta and similar techniques is that they require a fairly large amount of text to reliably distinguish between authors. A 2016 study found that at least 1,000 words from each author may be required. A relatively short student essay, therefore, wouldn’t provide enough input for a statistical technique to work its attribution magic.

More recent work has made use of what are known as BERT language models, which are trained on large amounts of human- and chatbot-generated text. The models learn the patterns that are common in each type of writing, and they can be much more discriminating than people: The best ones are between 80% and 98% accurate.

However, these machine-learning models are “black boxes” – that is, we don’t really know which features of texts are responsible for their impressive abilities. Researchers are actively trying to find ways to make sense of them, but for now, it isn’t clear whether the models are detecting specific, reliable signals that humans can look for on their own.

A moving target

Another challenge for identifying bot-generated text is that the models themselves are constantly changing – sometimes in major ways.

Early in 2025, for example, users began to express concerns that ChatGPT had become overly obsequious, with mundane queries deemed “amazing” or “fantastic.” OpenAI addressed the issue by rolling back some changes it had made.

Of course, the writing style of a human author may change over time as well, but it typically does so more gradually.

At some point, I wondered what the bots had to say for themselves. I asked ChatGPT-4o: “How can I tell if some prose was generated by ChatGPT? Does it have any ‘tells,’ such as characteristic word choice or punctuation?”

The bot admitted that distinguishing human from nonhuman prose “can be tricky.” Nevertheless, it did provide me with a 10-item list, replete with examples.

These included the use of hedges – words like “often” and “generally” – as well as redundancy, an overreliance on lists and a “polished, neutral tone.” It did mention “predictable vocabulary,” which included certain adjectives such as “significant” and “notable,” along with academic terms like “implication” and “complexity.” However, though it noted that these features of chatbot-generated text are common, it concluded that “none are definitive on their own.”

Chatbots are known to hallucinate, or make factual errors.

But when it comes to talking about themselves, they appear to be surprisingly perceptive.

The Conversation

Roger J. Kreuz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Too many em dashes? Weird words like ‘delves’? Spotting text written by ChatGPT is still more art than science – https://theconversation.com/too-many-em-dashes-weird-words-like-delves-spotting-text-written-by-chatgpt-is-still-more-art-than-science-259629

Water recycling is paramount for space stations and long-duration missions − an environmental engineer explains how the ISS does it

Source: The Conversation – USA – By Berrin Tansel, Professor of Civil and Environmental Engineering, Florida International University

The water recovery system on the ISS is state of the art. Roscosmos State Space Corporation via AP, File

When you’re on a camping trip, you might have to pack your own food and maybe something to filter or treat water that you find. But imagine your campsite is in space, where there’s no water, and packing jugs of water would take up room when every inch of cargo space counts. That’s a key challenge engineers faced when designing the International Space Station.

Before NASA developed an advanced water recycling system, water made up nearly half the payload of shuttles traveling to the ISS. I am an environmental engineer and have conducted research at Kennedy Space Center’s Space Life Sciences Laboratory. As part of this work, I helped to develop a closed-loop water recovery system.

Today, NASA recovers over 90% of the water used in space. Clean water keeps an astronaut crew hydrated, hygienic and fed, since the crew can use it to rehydrate food. Recovering used water is a cornerstone of closed-loop life support, which is essential for future lunar bases, Mars missions and even potential space settlements.

A rack of machinery.
A close-up view of the water recovery system’s racks – these contain the hardware that provides a constant supply of clean water for four to six crew members aboard the ISS.
NASA

NASA’s environmental control and life support system is a set of equipment and processes that perform several functions to manage air and water quality, waste, atmospheric pressure and emergency response systems such as fire detection and suppression. The water recovery system − one component of environmental control and life support − supports the astronauts aboard the ISS and plays a central role in water recycling.

Water systems built for microgravity

In microgravity environments like the ISS, every form of water available is valuable. The water recovery systems on the ISS collect water from several sources, including urine, moisture in cabin air, and hygiene water from activities such as brushing teeth.

On Earth, wastewater includes various types of water: residential wastewater from sinks, showers and toilets; industrial wastewater from factories and manufacturing processes; and agricultural runoff, which contains fertilizers and pesticides.

In space, astronaut wastewater is much more concentrated than Earth-based wastewater. It contains significantly higher levels of urea – a compound from urine – salts, and surfactants from soaps and materials used for hygiene. To make the water safe to drink, the system needs to remove all of these quickly and effectively.

The water recovery systems used in space employ some of the same principles as Earth-based water treatment. However, they are specifically engineered to function in microgravity with minimal maintenance. These systems also must operate for months or even years without the need for replacement parts or hands-on intervention.

NASA’s water recovery system captures and recycles nearly all forms of water used or generated aboard the space station. It routes the collected wastewater to a system called the water processor assembly, where it is purified into safe, potable water that exceeds many Earth-based drinking water standards.

The water recovery and treatment system on the ISS consists of several subsystems.

Recovering water from urine and sweat

The urine processor assembly recovers about 75% of the water from urine by heating and vacuum compression. The recovered water is sent to the water processor assembly for further treatment. The remaining liquid, called brine, still contains a significant amount of water. So, NASA developed a brine processor assembly system to extract the final fraction of water from this urine brine.

In the brine processor assembly, warm, dry air evaporates water from the leftover brine. A filter separates the contaminants from the water vapor, and the water vapor is collected to become drinking water. This innovation pushed the water recovery system’s overall water recovery rate to an impressive 98%. The remaining 2% is combined with the other waste generated.
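The staged figures above compose in a simple way: the second stage recovers a fraction of whatever water the first stage left behind. In this sketch, the ~92% brine-stage fraction is an assumption back-calculated to reproduce the 98% combined rate on the urine stream, not a published NASA number:

```python
def combined_recovery(first_stage, second_stage):
    """Overall fraction of water recovered when a second stage
    reprocesses the water the first stage left behind."""
    return first_stage + (1 - first_stage) * second_stage

# Urine processor recovers ~75%; if the brine processor then recovers
# ~92% of the water remaining in the brine (an illustrative assumption):
overall = combined_recovery(0.75, 0.92)  # ≈ 0.98
```

This composition is why adding even a modest second stage moves the needle so much: the brine processor only has to handle the 25% of water the first stage missed.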

An astronaut in a red shirt holds a small metal cylinder.
The filter used in brine processing has helped achieve 98% recovery.
NASA

The air revitalization system condenses moisture from the cabin air – primarily water vapor from sweat and exhalation – into liquid water. It directs the recovered water to the water processor assembly, which treats all the collected water.

Treating recovered water

The water processor assembly’s treatment process includes several steps.

First, all the recovered water goes through filters to remove suspended particles such as dust. Then, a series of filters removes salts and some of the organic contaminants, followed by a chemical process called catalytic oxidation that uses heat and oxygen to break down the remaining organic compounds. The final step is adding iodine to the water to prevent microbial growth while it is stored.

Japan Aerospace Exploration Agency astronaut Koichi Wakata next to the International Space Station’s water recovery system, which recycles urine and wastewater into drinking water. As Wakata humorously puts it, ‘Here on board the ISS, we turn yesterday’s coffee into tomorrow’s coffee.’

The output is potable water — often cleaner than municipal tap water on Earth.

Getting to Mars and beyond

To make human missions to Mars possible, NASA has estimated that spacecraft must reclaim at least 98% of the water used on board. While self-sustaining travel to Mars is still a few years away, the new brine processor on the ISS has increased the water recovery rate enough that this 98% goal is now in reach. However, more work is needed to develop a compact version of the system that could fit aboard a Mars-bound spacecraft.

The journey to Mars is complex, not just because of the distance involved, but because Mars and Earth are constantly moving in their respective orbits around the Sun.

The distance between the two planets varies depending on their positions. On average, they’re about 140 million miles (225 million km) apart, with the shortest theoretical approach, when the two planets’ orbits bring them close together, at about 33.9 million miles (54.6 million km).

A typical crewed mission is expected to take about nine months one way. A round-trip mission to Mars, including surface operations and return trajectory planning, could take around three years. In addition, launch windows occur only every 26 months, when Earth and Mars align favorably.
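The 26-month launch window cadence mentioned above falls out of basic orbital mechanics: Earth and Mars realign after one "synodic period," computed from their orbital periods. The orbital period values below are standard approximations, not figures from the article.

```python
# Why Mars launch windows recur roughly every 26 months.
# Earth and Mars line up again after one synodic period, derived from
# their orbital periods (standard approximate values).

earth_year_days = 365.25
mars_year_days = 687.0

# Relative angular rate: 1/T_synodic = 1/T_earth - 1/T_mars
synodic_days = 1 / (1 / earth_year_days - 1 / mars_year_days)
synodic_months = synodic_days / 30.44  # average month length in days

print(f"Synodic period: {synodic_days:.0f} days = about {synodic_months:.1f} months")
```

The result is about 780 days, or a little under 26 months, matching the launch-window spacing cited in the article.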

As NASA prepares to send humans on multiyear expeditions to the red planet, space agencies around the world continue to focus on improving propulsion and perfecting life support systems. Advances in closed-loop systems, robotic support and autonomous operations are all inching the dream of putting humans on Mars closer to reality.

The Conversation

Berrin Tansel does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Water recycling is paramount for space stations and long-duration missions − an environmental engineer explains how the ISS does it – https://theconversation.com/water-recycling-is-paramount-for-space-stations-and-long-duration-missions-an-environmental-engineer-explains-how-the-iss-does-it-260171

To better detect chemical weapons, materials scientists are exploring new technologies

Source: The Conversation – USA – By Olamilekan Joseph Ibukun, Postdoctoral Research Associate in Chemistry, Washington University in St. Louis

German troops make their way through a cloud of smoke or gas during a gas training drill, circa 1916. Henry Guttmann/Hulton Archive via Getty Images

Chemical warfare is one of the most devastating forms of conflict. It leverages toxic chemicals to disable, harm or kill without any physical confrontation. Across various conflicts, it has caused tens of thousands of deaths and affected over a million people through injury and long-term health consequences.

Despite its name, mustard gas isn’t a gas at room temperature – it’s a yellow-brown oily liquid that can vaporize into a toxic mist. The chemist Viktor Meyer refined the synthesis of mustard gas into a more stable form. It gained international notoriety during World War I and has been used as a weapon many times since.

A vintage photograph of a soldier poking a cylinder, which releases a cloud of smoke.
German soldiers release poison gas from cylinders during World War I.
Henry Guttmann Collection/Hulton Archive via Getty Images

It is nearly impossible to guarantee that mustard gas will never be used in the future, so the best way to prepare for the possibility is to develop a very easy way to detect it in the field.

My colleagues and I, who are chemists and materials science researchers, are keen on developing a rapid, easy and reliable way to detect toxic chemicals in the environment. But doing so will require overcoming several technological challenges.

Effects on human health and communities

Mustard gas damages the body at the cellular level. When it comes into contact with the skin or eyes or is inhaled, it dissolves easily in fats and tissues and quickly penetrates the body. Once inside the body, it changes into a highly reactive form that attaches to and damages DNA, proteins and other essential parts of cells. Once it reacts with DNA, the damage can’t be undone – it may stop cells from functioning properly and kill them.

Mustard gas exposure can trigger large, fluid-filled blisters on the skin. It can also severely irritate the eyes, leading to redness, swelling and even permanent blindness. When inhaled, it burns the lining of the airways, leading to coughing, difficulty breathing and long-term lung damage. Symptoms often don’t appear for several hours, which delays treatment.

Four photos of people holding out their forearms, which have large blisters.
The forearms of test subjects exposed to nitrogen mustard and lewisite, chemicals that cause large, fluid-filled blisters on the skin.
Naval Research Laboratory

Even small exposures can cause serious health problems. Over time, it can weaken the immune system and has been linked to an increased risk of cancers due to its effects on DNA.

The effects of even a single exposure can carry down to the next generation. For example, studies have reported physical abnormalities and disorders in the children of men who were exposed to mustard gas, while some of the men became infertile.

The best way to prevent serious health problems is to detect mustard gas early and keep people away from it.

Detecting mustard gas early

The current methods to detect mustard gas rely on sophisticated chemistry techniques. These require expensive, delicate instruments that are difficult to carry to the war front and are too fragile to be kept in the field as a tool for detecting toxic chemicals. These instruments are conventionally designed for the laboratory, where they stay in one location and are handled carefully.

Many researchers have attempted to improve detection techniques. While each offers a glimpse of hope, they also come with setbacks.

Some scientists have been working on a wearable electrochemical biosensor that could detect mustard gas in both liquid and vapor form. They succeeded in developing tiny devices that provide real-time alerts. But here, stability became a problem: the enzymes degrade, and environmental noise can cloud the signal. Because of these issues, the sensors haven’t been used successfully in the field.

To simplify detection, others developed molecularly imprinted polymer test strips targeting thiodiglycol, a mustard gas breakdown product. These strips change color when they come into contact with the chemical, and they’re cheap, portable and easy to use in the field. The main limitation is that they detect a byproduct left behind after mustard gas has been used, not the agent itself, so they confirm exposure after the fact rather than providing an early warning.

One of the most promising breakthroughs came in 2023 in the form of fluorescent probes, which change color when they sense the chemical. This probe is a tiny detective tool that detects or measures the target chemical and generates a signal. But these probes remain vulnerable to environmental interference such as humidity and temperature, meaning they’re less reliable in rugged field conditions.

Some other examples under development include a chemical sensor device that families could have at home, or even a wearable device.

Wearable devices are tricky, however, since they need to be small. Researchers have been trying to integrate tiny nanomaterials into sensors. Other teams are looking at how to incorporate artificial intelligence. Artificial intelligence could help a device interpret data faster and respond more quickly.

Researchers bridging the gap

Now at Washington University in St. Louis, Makenzie Walk and I are part of a team of researchers working on detecting these chemicals, led by Jennifer Heemstra and M.G. Finn. Another member is Seth Taylor, a postdoctoral researcher at Georgia Tech.

Our team of researchers hopes to use the lessons learned from prior sensors to develop an easy and reliable way to rapidly detect these chemicals in the field. Our approach will involve testing different molecular sensor designs on compounds modeled after specific chemical weapons. The sensors would initiate a cascade of reactions that generate a bright, colorful fluorescent signal in the laboratory.

We are figuring out which compounds react best with these chemicals, and which might make good candidates for use in a detector. These tests allow us to determine how much of the chemical will need to be in the air to trigger a reaction that we can detect, as well as how long it will need to be in the air before we can detect it.

Additionally, we are investigating how the structure of the chemicals we work with influences how they react. Some react more quickly than others, and understanding their behavior will help us pick the right compounds for our detector. We want them to be sensitive enough to detect even small amounts of mustard gas quickly, but not so sensitive that they frequently give false positive results.
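The trade-off between sensitivity and false positives is sharper than it might seem, because in the field the target chemical is almost never actually present. A quick Bayes'-rule calculation shows why: all the numbers below are illustrative, not measured properties of any real sensor.

```python
# Why over-sensitive detectors are a problem: when the target chemical is
# rarely present, even a detector with a low false-positive rate raises
# mostly false alarms. All numbers here are illustrative assumptions.

prevalence = 0.001          # fraction of samples that truly contain the agent
sensitivity = 0.99          # probability of detecting the agent when present
false_positive_rate = 0.02  # probability of alarming on a clean sample

true_alarms = prevalence * sensitivity
false_alarms = (1 - prevalence) * false_positive_rate

# Probability that a given alarm is genuine (Bayes' rule):
p_real_given_alarm = true_alarms / (true_alarms + false_alarms)
print(f"Only ~{p_real_given_alarm:.0%} of alarms would be genuine")
```

With these example numbers, fewer than one alarm in twenty corresponds to a real release, which is why tuning down false positives matters as much as raising raw sensitivity.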

Eliminating the use of these chemicals would be the best approach to avoid future recurrence. The 1997 Chemical Weapons Convention bans the production, use and accumulation of chemical weapons. But countries such as Egypt, North Korea and South Sudan have not signed or officially adopted the international arms control treaty.

To discourage countries that don’t sign the treaty from using these weapons, other countries can use sanctions. For example, the U.S. learned that Sudan used chemical weapons in 2024 during a conflict, and in response it placed sanctions on the government.

Even without continued use of these chemical weapons, some traces of the chemical may still linger in the environment. Technology that can quickly identify the chemical threat in the environment could prevent more disasters from occurring.

As scientists and global leaders collectively strive for a safer world, the ability to detect when a dangerous chemical is released or is present in real time will improve a community’s preparedness, protection and peace of mind.

The Conversation

Makenzie Walk and Jennifer Heemstra contributed to this article.

Heemstra lab receives funding from the Defense Threat Reduction Agency (DTRA).

ref. To better detect chemical weapons, materials scientists are exploring new technologies – https://theconversation.com/to-better-detect-chemical-weapons-materials-scientists-are-exploring-new-technologies-257296