West Africa: how military transitions feed on security dynamics

Source: The Conversation – in French – By Christian Abadioko Sambou, PhD in Political Science, specialist in peace and security, Université Numérique Cheikh Hamidou Kane

For more than a decade, terrorist violence has destabilized the central Sahel (Burkina Faso, Mali, Niger), weakening societies, institutions and democratic regimes. In this context, military coups took place in Mali (2020, 2021), Burkina Faso (2022) and Niger (2023), invoking the need to restore order. These regimes have extended the transition beyond its temporary framework, favoring the promise of stability over a return to constitutional rule.

As a researcher, I have studied security crises in West Africa, particularly secessionist conflicts and their evolution in states undergoing constant modernization and democratization. In my view, the trajectory of political institutions and regimes, and their effect on peace and security in the Sahel, is a particularly relevant research subject.

In this article, I explore how persistent insecurity legitimizes military governments, in which power rests on managing the threat rather than on elections, thereby turning the transition into a durable mode of governance.

The failure of ten years of counterterrorism

Military transitions in the central Sahel are the symptom of a broader failure: that of international counterterrorism policies. Since 2013, foreign military engagement – through Operation Serval and then Barkhane, the G5 Sahel joint force, and various bilateral and multilateral arrangements backed by the European Union – has favored a strictly security-focused approach. This strategy has marginalized essential dimensions such as democratic governance, social justice and development.

Coordination among the many international and local actors lacked coherence, with each pursuing sometimes divergent agendas. This strategic fragmentation allowed the violence to spread: from northern Mali to the center of the country, then across Burkina Faso and into regions of Niger such as Tillabéry, Tahoua and Diffa.

Data from Armed Conflict Location & Event Data (ACLED, an organization that collects data on violent conflict and protests) show a sharp rise in violence and deaths after 2020, mainly in Burkina Faso, the epicenter of the terrorism. The number of deaths linked to the violence rose by 28% between 2020 and 2022 (the year of the coup). Since 2020, the country has recorded more than 28,000 deaths, compared with 20,000 in Mali and 7,000 in Niger.

The humanitarian consequences are alarming: thousands of deaths in the central Sahel, the closure of nearly 10,000 schools and health centers (OCHA), and nearly 3 million people displaced, according to the United Nations High Commissioner for Refugees (UNHCR).

Amid this chaos, armed groups are proliferating, and they are increasingly using drones. The Group for the Support of Islam and Muslims (GSIM), the Islamic State in the Greater Sahara (EIGS), Boko Haram (in Nigeria) and the Katiba Macina (in Mali), along with self-defense militias and armed community groups, all contribute to worsening local tensions.

Civilian governments have continued to lose legitimacy, and distrust of international partners has deepened. This has translated into a massive rejection of the foreign military presence and growing contestation of local elites. Protests broke out in Bamako (2020), Ouagadougou (2021) and Niamey (2023). Trust in MINUSMA, Barkhane and the European Union Training Mission in Mali (EUTM) gradually eroded.

In this context, military transitions appear to some citizens of these countries as a response to the failure of outside interventions. The military are seen as the only actors capable of acting in an environment dominated by the security emergency. Yet this response addresses neither the root causes nor the harmful effects of the conflict.




Read more:
Terrorism in the Sahel: why attacks on military bases are multiplying and how to respond to them


Military coups and "transitional solutions"

Military coups have multiplied in the central Sahel since 2020, directly linked to the persistence of a deep security crisis. Mali experienced two coups, in August 2020 and May 2021, followed by Burkina Faso in January and September 2022, then Niger in July 2023.

While the first coups of the 1960s to 1980s were often attributed to foreign influence, the recent wave of coups in the central Sahel since 2020 stems more from a severe internal deterioration of security than from outside manipulation. Today, however, leaders use a security rhetoric that has prevailed since the 2000s to legitimize their political and judicial actions, reducing all political debate to a security lens alone.

Faced with the states' failure on security, the military regimes of the central Sahel have adopted a strategy of hardening, allying themselves with actors such as Russia and the Africa Corps paramilitaries (formerly Wagner). Rejecting the multilateral approaches of MINUSMA, the Economic Community of West African States (ECOWAS) and France, these regimes assert their sovereignty and are upending regional geopolitics. Yet the transitions drag on without curbing the violence: terrorist attacks continue and armed groups remain active.

Worse, this strictly military response produces human rights violations, aggravates intercommunal tensions and fuels radicalization.

Far from bringing genuine stabilization, the transitions are thus gradually becoming tools of political repression, of restrictions on individual freedoms and of the muzzling of countervailing powers, notably in Mali.

Ultimately, the complex interplay of political and security instability sustains this ongoing cycle of prolonged transitions. It reveals the intrinsic limits of such transitions when they are treated as the only response to the central Sahel's enduring crisis.




Read more:
Counterterrorism in West Africa: foreign and regional collaboration is needed


National conferences rather than elections

The notion of a transition, understood as a brief period between two political regimes, is losing its meaning in the central Sahel. Traditionally, a transition involves two sub-phases: liberalization (political opening) and the organization of free elections leading to a democratic regime. In Mali, Burkina Faso and Niger, however, the military authorities are prolonging this phase to the point of consolidating a new power that intends to be permanent and breaks with the tradition of democratization.

The national conferences, which are supposed to be inclusive, have become a tool for legitimizing military power at the expense of elections. These conferences, held in Mali (September 2020, December 2021, June 2023), Burkina Faso (February 2022, October 2022, May 2024) and Niger (limited consultations that were not national in scope), present themselves as popular assemblies, but they bypass democratic mechanisms by excluding dissenting voices and operating on the basis of a pre-established consensus favorable to the military.

They substitute for elections by setting the length of the transition, elevating colonels to generals, appointing presidents of the republic and defining the country's economic orientations. These practices reflect an institutionalization of military governance far removed from the temporary character of a transition.

At the same time, the military regimes enjoy popular support fueled by a massive rejection of France, of the Economic Community of West African States (ECOWAS) and of the traditional political elites. The perception of a lack of compassion and solidarity in the fight against terrorism, together with the sanctions that regional institutions – perceived as relays of Western powers – imposed after the coups, has legitimized the three states' withdrawal from ECOWAS in the eyes of local public opinion.

This context contributed to the creation of the Alliance of Sahel States (AES), a structure with a security mandate that is becoming increasingly political. The AES seeks to build a new regional order by upsetting existing regional balances.

The retreat of democracy

The sovereigntist discourse, deeply anti-Western, draws both on the colonial past and on the security failures of traditional partners. It has made France's departure from the region a reality, without building genuine strategic autonomy. The replacement of the Barkhane force by Wagner, now the Africa Corps, nonetheless raises questions about the sovereignty being claimed.

Military governance now rests on a reinforced security agenda in which elections are postponed indefinitely, freedoms are restricted and human rights are often violated, all justified by the state of war against terrorism. This militarization is also evident in the integration of civilians into military governments and in the mobilization of paramilitary forces such as the Volunteers for the Defense of the Homeland (VDP) in Burkina Faso.

In the face of this dynamic, electoral democracy is receding, while other West African countries (Senegal, Cape Verde, Benin) maintain a democratic trajectory. The contrast highlights the long-standing challenges of security and political stability shared by all the countries of the region. Above all, it raises the question of how political regimes evolve in situations of asymmetric conflict.

In the central Sahel, we are thus witnessing a reinvention of the framework through which transitional regimes are read and understood.

The Conversation

Christian Abadioko Sambou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Afrique de l’Ouest : comment les transitions militaires se nourrissent des dynamiques sécuritaires – https://theconversation.com/afrique-de-louest-comment-les-transitions-militaires-se-nourrissent-des-dynamiques-securitaires-262056

Canada could use thermal infrastructure to turn wasted heat emissions into energy

Source: The Conversation – Canada – By James (Jim) S. Cotton, Professor, Department of Mechanical Engineering, McMaster University

Buildings are the third-largest source of greenhouse gas emissions in Canada. In many cities, including Vancouver, Toronto and Calgary, buildings are the single highest source of emissions.

The recently launched Infrastructure for Good barometer, released by consulting firm Deloitte, suggests that Canada’s infrastructure investments already top the global list in terms of positive societal, economic and environmental benefits.

In fact, over the past 150 years, Canada has built railways, roads, clean water systems, electrical grids, pipelines and communication networks to connect and serve people across the country.

Now, there’s an opportunity to build on Canada’s impressive tradition by creating a new form of infrastructure: capturing, storing and sharing the massive amounts of heat lost from industry, electricity generation and communities, even in summer.

Natural gas precedent

Indoor heating often comes from burning fossil fuels — three-quarters of Ontario homes, for example, are heated by natural gas. Until about 1966, homes across Canada were primarily heated by wood stoves, coal boilers, oil furnaces or heaters using electricity from coal-fired power plants.

After the oil crisis of the 1970s, many of those fuels were replaced by natural gas, delivered directly to individual homes. The cost of the natural gas infrastructure, including a national network of pipelines, was amortized over more than 50 years to make the cost more practical.

Sources of greenhouse gas emissions in Ontario.
(J. Cotton), CC BY

This reliable, low-cost energy source quickly proved to be popular. The change cut heating emissions across Ontario by roughly half throughout the 1970s and 1980s, long before climate change was the concern it is today.

Now, as the need to decarbonize becomes more pressing, recent studies not only show that the emissions-reduction benefits of natural gas are often overstated; they also indicate that burning this fuel is still far from net-zero.

However, there’s no reason why Canadian governments can’t invest in new infrastructure-based alternative heating solutions. This time, they could replace natural gas with an alternative, net-zero source: the wasted heat already emitted by other energy uses.

Heat capture and storage

Depending on the source temperature, technology used and system design, heat can be captured throughout the year, stored and distributed as needed. A type of infrastructure called thermal networks could capture leftover heat from factories and nuclear and gas-fired power plants.

In essence, thermal networks take excess thermal energy from industrial processes (though it can, in principle, be captured from many other sources) and use it as a centralized heating source feeding a series of insulated underground pipelines. These pipelines, in turn, heat or cool the buildings connected to them.

A substantial potential to capture heat similarly exists in every neighbourhood. Heat is produced by data centres, grocery stores, laundromats, restaurants, sewage systems and even hockey arenas.

In Ontario, the amount of energy we dump in the form of heat is greater than all the natural gas we use to heat our homes.

A restaurant, for example, can produce enough heat for seven family homes. To take advantage of the wasted heat, Canada needs to build thermal networks, corridors and storage to capture and distribute heat directly to consumers.
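To get a feel for the arithmetic behind that claim, here is a minimal back-of-the-envelope sketch in Python. The restaurant waste-heat, recovery-efficiency and household-demand figures are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope check of the "one restaurant can heat seven homes" claim.
# All numbers below are illustrative assumptions, not figures from the article.

RESTAURANT_WASTE_HEAT_MWH = 150.0   # assumed heat rejected per year by kitchen and refrigeration equipment
RECOVERY_EFFICIENCY = 0.9           # assumed fraction a thermal network could actually capture
HOME_HEAT_DEMAND_MWH = 20.0         # assumed annual space- and water-heating demand of one home


def homes_served(waste_heat_mwh: float, efficiency: float, demand_mwh: float) -> float:
    """Number of homes whose heating demand the recovered heat could cover."""
    return waste_heat_mwh * efficiency / demand_mwh


if __name__ == "__main__":
    n = homes_served(RESTAURANT_WASTE_HEAT_MWH, RECOVERY_EFFICIENCY, HOME_HEAT_DEMAND_MWH)
    print(f"~{n:.1f} homes")  # with these assumptions, the result lands near the article's figure of seven
```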

The effort demands substantial leadership from all levels of government. Creating these systems would be expensive, but the technology does exist, and the one-time cost would pay for itself many times over.

Such systems are already working in other cold countries. Thermal networks heat half the homes in Sweden and two-thirds of homes in Denmark.

District heating pipes being laid at Gullbergs Strandgata in Gothenburg, Sweden in May 2021.
(Shutterstock)

The oil crisis of the 1970s motivated both countries to find new domestic heating sources. They financed their new infrastructure over 50 years and reduced their investment risks through low-interest bonds (loaned by public banks) and generous subsidies.

These were offered to utility companies looking to expand district energy operations, and to consumers by incentivizing connections to such systems. Additionally, in Denmark, controlled consumer prices served a similar function.

At least seven American states have established thermal energy networks, with New York being the first. The state’s Utility Thermal Energy Network and Jobs Act allows public utilities to own, operate and manage thermal networks.

They can supply thermal energy, but so can private producers such as data centres, all with public oversight. Such a strategy avoids monopolies and allows gas and electric utilities to deliver services through central networks.

An opportunity for Canada

Canada has a real opportunity to learn from the experiences of Sweden, Denmark and New York. In doing so, Canada can create a beneficial and truly national heating system in the process. Beginning with federal government leadership, thermal networks could be built across Canada, tailored to the unique and individual needs, strengths and opportunities of municipalities and provinces.

Such a shift would reduce emissions and generate greater energy sovereignty for Canada. It could drive a just energy strategy that could provide employment opportunities for those displaced by the transition away from fossil fuels, while simultaneously increasing Canada’s economic independence in the process.

Thermal networks could be built using pipelines made from Canadian steel. Oil-well drillers from Alberta could dig borehole heat-storage systems. A new market for heat-recovery pumps would create good advanced-manufacturing jobs in Canada.




Read more:
How heat storage technologies could keep Canada’s roads and bridges ice-free all winter long


Funding for the infrastructure could come through public-private partnerships, with major investments from public banks and pension funds, earning a solid and secure rate of return. A regulated approach and process could permit this infrastructure cost to be amortized over decades, similar to the way past governments have financed gas, electrical and water networks.

As researchers studying the engineering and policy potential of such an opportunity, we view such actions as essential if net-zero is to be achieved in the Canadian building sector. They are also a win-win solution for incumbent industry, various levels of government and citizens across Canada alike.

Yet efforts to install robust thermal networks remain stalled by institutional inertia, the strong influence of the oil industry, limited citizen awareness of the technology’s potential and a tendency for government to view the electrification of heating as the primary solution to building decarbonization.

In this time of environmental crisis and international uncertainty, pushing past these barriers, drawing on Canada’s lengthy history of constructing infrastructure and creating this new form of thermal energy infrastructure would be a safe, beneficial and conscientious way to move Canada into a more climate-friendly future.

The Conversation

James (Jim) S. Cotton receives funding from the Natural Sciences and Engineering Research Council of Canada.

Caleb Duffield does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Canada could use thermal infrastructure to turn wasted heat emissions into energy – https://theconversation.com/canada-could-use-thermal-infrastructure-to-turn-wasted-heat-emissions-into-energy-254972

US government may be abandoning the global climate fight, but new leaders are filling the void – including China

Source: The Conversation – USA (2) – By Shannon Gibson, Professor of Environmental Studies, Political Science and International Relations, USC Dornsife College of Letters, Arts and Sciences

Chinese President Xi Jinping and Brazilian President Luiz Inácio Lula da Silva meet in Beijing in May 2025. Tingshu Wang/Pool Photo via AP

When President Donald Trump announced in early 2025 that he was withdrawing the U.S. from the Paris climate agreement for the second time, it triggered fears that the move would undermine global efforts to slow climate change and diminish America’s global influence.

A big question hung in the air: Who would step into the leadership vacuum?

I study the dynamics of global environmental politics, including through the United Nations climate negotiations. While it’s still too early to fully assess the long-term impact of the United States’ political shift when it comes to global cooperation on climate change, there are signs that a new set of leaders is rising to the occasion.

World responds to another US withdrawal

The U.S. first committed to the Paris Agreement in a joint announcement by President Barack Obama and China’s Xi Jinping in 2015. At the time, the U.S. agreed to reduce its greenhouse gas emissions 26% to 28% below 2005 levels by 2025 and pledged financial support to help developing countries adapt to climate risks and embrace renewable energy.

Some people praised the U.S. engagement, while others criticized the original commitment as too weak. Since then, the U.S. has cut emissions to 17.2% below 2005 levels – missing the goal, in part because its efforts have been stymied along the way.

Just two years after the landmark Paris Agreement, Trump stood in the Rose Garden in 2017 and announced he was withdrawing the U.S. from the treaty, citing concerns that jobs would be lost, that meeting the goals would be an economic burden, and that it wouldn’t be fair because China, the world’s largest emitter today, wasn’t projected to start reducing its emissions for several years.

Scientists and some politicians and business leaders were quick to criticize the decision, calling it “shortsighted” and “reckless.” Some feared that the Paris Agreement, signed by almost every country, would fall apart.

But it did not.

In the United States, businesses such as Apple, Google, Microsoft and Tesla made their own pledges to meet the Paris Agreement goals.

Hawaii passed legislation to become the first state to align with the agreement. A coalition of U.S. cities and states banded together to form the United States Climate Alliance to keep working to slow climate change.

Globally, leaders from Italy, Germany and France rebutted Trump’s assertion that the Paris Agreement could be renegotiated. Others from Japan, Canada, Australia and New Zealand doubled down on their own support of the global climate accord. In 2021, President Joe Biden brought the U.S. back into the agreement.

Amazon partnered with Dominion Energy to build solar farms, like this one, in Virginia. They power the company’s cloud-computing and other services.
Drew Angerer/Getty Images

Now, with Trump pulling the U.S. out again – and taking steps to eliminate U.S. climate policies, boost fossil fuels and slow the growth of clean energy at home – other countries are stepping up.

On July 24, 2025, China and the European Union issued a joint statement vowing to strengthen their climate targets and meet them. They alluded to the U.S., referring to “the fluid and turbulent international situation today” in saying that “the major economies … must step up efforts to address climate change.”

In some respects, this is a strength of the Paris Agreement – it is a legally nonbinding agreement based on what each country decides to commit to. Its flexibility keeps it alive, as the withdrawal of a single member does not trigger immediate sanctions, nor does it render the actions of others obsolete.

The agreement survived the first U.S. withdrawal, and so far, all signs point to it surviving the second one.

Who’s filling the leadership vacuum

From what I’ve seen in international climate meetings and my team’s research, it appears that most countries are moving forward.

One bloc emerging as a powerful voice in negotiations is the Like-Minded Group of Developing Countries – a group of low- and middle-income countries that includes China, India, Bolivia and Venezuela. Driven by economic development concerns, these countries are pressuring the developed world to meet its commitments to both cut emissions and provide financial aid to poorer countries.

Diego Pacheco, a negotiator from Bolivia, spoke on behalf of the Like-Minded Developing Countries group during a climate meeting in Bonn, Germany, in June 2025.
IISD/ENB | Kiara Worth

China, motivated by economic and political factors, seems to be happily filling the climate power vacuum created by the U.S. exit.

In 2017, China voiced disappointment over the first U.S. withdrawal. It maintained its climate commitments and pledged to contribute more in climate finance to other developing countries than the U.S. had committed to – US$3.1 billion compared with $3 billion.

This time around, China is using leadership on climate change in ways that fit its broader strategy of gaining influence and economic power by supporting economic growth and cooperation in developing countries. Through its Belt and Road Initiative, China has scaled up renewable energy exports and development in other countries, such as investing in solar power in Egypt and wind energy development in Ethiopia.

While China is still the world’s largest coal consumer, it has aggressively pursued investments in renewable energy at home, including solar, wind and electrification. In 2024, about half the renewable energy capacity built worldwide was in China.

China’s interest in South America’s energy resources has been growing for years. In 2019, China’s special representative for climate change, Xie Zhenhua, met with Chile’s then-ministers of energy and environment, Juan Carlos Jobet and Carolina Schmidt, in Chile.
Martin Bernetti/AFP via Getty Images

While it missed the deadline to submit its climate pledge due this year, China has a goal of peaking its emissions before 2030 and then dropping to net-zero emissions by 2060. It is continuing major investments in renewable energy, both for its own use and for export. The U.S. government, in contrast, is cutting its support for wind and solar power. China also just expanded its carbon market to encourage emissions cuts in the cement, steel and aluminum sectors.

The British government has also ratcheted up its climate commitments as it seeks to become a clean energy superpower. In 2025, it pledged to cut emissions 77% by 2035 compared with 1990 levels. Its new pledge is also more transparent and specific than in the past, with details on how specific sectors, such as power, transportation, construction and agriculture, will cut emissions. And it contains stronger commitments to provide funding to help developing countries grow more sustainably.

In terms of corporate leadership, while many American businesses are being quieter about their efforts, in order to avoid sparking the ire of the Trump administration, most appear to be continuing on a green path – despite the lack of federal support and diminished rules.

USA Today and Statista’s “America’s Climate Leader List” includes about 500 large companies that have reduced their carbon intensity – carbon emissions divided by revenue – by 3% from the previous year. The data shows that the list is growing, up from about 400 in 2023.
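As a rough illustration of the list's criterion, the carbon-intensity calculation and the 3% year-over-year cut can be sketched in a few lines of Python; the company figures used here are hypothetical, not drawn from the list itself.

```python
# Carbon intensity as defined in the article: carbon emissions divided by revenue.
# The company figures below are hypothetical, only to illustrate the 3% criterion.

def carbon_intensity(emissions_tonnes: float, revenue_usd: float) -> float:
    return emissions_tonnes / revenue_usd


def meets_climate_leader_cut(prev_emissions, prev_revenue, cur_emissions, cur_revenue, threshold=0.03):
    """True if carbon intensity fell by at least `threshold` (3%) from the previous year."""
    prev = carbon_intensity(prev_emissions, prev_revenue)
    cur = carbon_intensity(cur_emissions, cur_revenue)
    return (prev - cur) / prev >= threshold


# Hypothetical company: emissions flat, revenue up 5%, so intensity drops about 4.8% and qualifies.
print(meets_climate_leader_cut(100_000, 2.0e9, 100_000, 2.1e9))  # True
```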

What to watch at the 2025 climate talks

The Paris Agreement isn’t going anywhere. Given the agreement’s design, with each country voluntarily setting its own goals, the U.S. never had the power to drive it into obsolescence.

The question is whether developed and developing country leaders alike can navigate two pressing needs – economic growth and ecological sustainability – without compromising their leadership on climate change.

This year’s U.N. climate conference in Brazil, COP30, will show how countries intend to move forward and, importantly, who will lead the way.

Research assistant Emerson Damiano, a recent graduate in environmental studies at USC, contributed to this article.

The Conversation

Shannon Gibson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. US government may be abandoning the global climate fight, but new leaders are filling the void – including China – https://theconversation.com/us-government-may-be-abandoning-the-global-climate-fight-but-new-leaders-are-filling-the-void-including-china-251786

Gene Hackman had a will, but the public may never find out who inherits his $80M fortune

Source: The Conversation – USA (2) – By Naomi Cahn, Professor of Law, University of Virginia

Gene Hackman and his wife, Betsy Arakawa, pose for a photo in 1986 in Los Angeles. Donaldson Collection/Michael Ochs Archives via Getty Images

Gene Hackman was found dead inside his New Mexico home on Feb. 26, 2025, at the age of 95. The acclaimed actor’s wife, Betsy Arakawa, had also died of a rare virus – a week before his death from natural causes.

Details about the couple’s plans for Hackman’s reportedly US$80 million fortune are only starting to emerge, months after the discovery of their tragic demise. While their wills have not yet been made public, we have seen them through a reputable source.

Both documents are short and sought to give the bulk of their assets to Hackman’s trust – a legal arrangement that allows someone to state their wishes for how their assets should be managed and distributed. Wills and trusts are similar in that both can be used to distribute someone’s property. They differ in that a trust can take effect during someone’s lifetime and continue long after their death. Wills take effect only upon someone’s death, for the purpose of distributing assets that person had owned.

Both trusts and wills can be administered by someone who does not personally benefit from the property.

Hackman, widely revered for his memorable roles in movies such as “The French Connection,” “Bonnie and Clyde” and “The Birdcage,” made it clear in his will that he wanted the trust to manage his assets, and he apparently named Arakawa as a third-party trustee. But that plan was dashed by Arakawa’s sudden death.

The person managing Hackman’s estate asked the court to appoint a new trustee, a request that the court approved, according to public records. But the court order is not public, and the trust itself remains private, so the public doesn’t yet know who will manage his estate or inherit his fortune. U.S. courts vary in how much access they provide to case records.

As law professors who specialize in trusts and estates, we teach courses about the transfer of property during life and at death. We believe that the drama playing out over Hackman’s assets offers valuable lessons for anyone leaving an estate, large or small, for their loved ones to inherit. It also is a cautionary tale for the tens of millions of Americans in stepfamilies.

‘Pour-over’ wills are a popular technique

The couple signed the wills in 2005, more than a decade before Hackman was diagnosed with dementia. There’s no reason to doubt that Hackman was of sound mind at that time. Although he had retired from acting and led a very private life for a public figure after his last film, “Welcome to Mooseport,” was released in 2004, Hackman continued to write books and narrate documentaries for several more years.

Based on the wills that we have been able to review, Hackman and Arakawa used a popular estate planning technique that combined two documents: a lifetime trust and a will.

The first document, sometimes called a “living trust,” usually contains the most important details about who ultimately inherits a person’s property once they die. All other estate planning documents, including wills, all financial and brokerage accounts, and life insurance policies can pour assets into the trust at death by naming the trustee as the death beneficiary.

The trust is the only document that needs to be updated when life circumstances change, such as divorce, the death of a spouse, or the birth of a child. All of the other planning documents can be left alone because they already name the trustee of the trust as the property recipient.

Hackman also signed a second document, known as a “pour-over” will. A pour-over will is a catchall measure to ensure that anything owned at death ends up in the trust if it wasn’t transferred during life. Hackman’s pour-over will gave his estate at death to Arakawa as the designated trustee of the trust he had created.

The combination of a trust coupled with a pour-over will – a technique that Michael Jackson also used – offers many advantages.

One is that, if the trust is created during life, it can be administered privately at death without the cost, publicity and delay of probate – the court-supervised process for estate administration. That is why, while Hackman’s personal representative filed his will in probate court to administer any remaining property owned at death, the trust created during Hackman’s life can manage assets without court supervision.

It’s important to carefully consider what should happen if you both die around the same time.
Inside Creative House/iStock via Getty Images Plus

Who might get what

The trust document has not been made public, but Hackman’s personal representative stated that the trust “contains mainly out-of-state beneficiaries” who will inherit his assets.

Hackman’s beneficiaries are unlikely to be publicly identified because they appear in the trust rather than the pour-over will. His will does not leave anything directly to any relatives. Even Arakawa was not slated to receive anything herself, only in her role as trustee, but the will does mention his children in a paragraph describing his family.

Hackman had three children, all born during his first marriage, to Faye Maltese: Christopher, Elizabeth and Leslie. Hackman had acknowledged that it was hard for them to grow up with an often-absent celebrity father, but his daughters and one granddaughter released a statement after he died about missing their “Dad and Grandpa.” It is possible that Hackman’s children, as well as Arakawa, are named as beneficiaries of the trust.

Arakawa had no children of her own. Little is known about her family, except that her mother, now 91, is still alive. Arakawa’s will gave the bulk of her estate to Hackman as trustee of his trust, but only if he survived her by 90 days. If he failed to survive by 90 days, then she instructed her personal representative to establish a charitable trust “to achieve purposes beneficial to the community” consistent with the couple’s charitable preferences.

Her will refers to charitable “interests expressed … by my spouse and me during our lifetimes.” But it offers no specific guidance on which charities should benefit. Because Hackman did not survive Arakawa by 90 days, no part of her estate will pass to Hackman’s trust or his children.

Christopher Hackman has reportedly hired a lawyer, leading to speculation that he might contest some aspect of his father’s or stepmother’s estates.

Research shows that the average case length of a probate estate is 532 days, but individual cases can vary greatly in length and complexity. It is possible that the public may never learn what happens to the trust if the parties reach a settlement without litigation in court.

Gene Hackman and his daughters, Elizabeth Hackman and Leslie Hackman, attend the screening of ‘Superman’ in 1978 at the Kennedy Center in Washington, D.C.
Ron Galella Collection via Getty Images

Takeaways for the rest of us

We believe that anyone thinking about who will inherit their property after they die can learn three important lessons from the fate of Hackman’s estate.

First, a living trust can provide more privacy than a will by avoiding the publicity of a court-supervised probate administration. It can also simplify the process for updating the estate plan by avoiding the need to amend multiple documents every time life circumstances change, such as the birth of a child or end of a marriage. Because all estate planning documents pour into the trust, the trust is the only document that requires any updating.

You don’t need a multimillion-dollar estate to justify the cost of creating a living trust. Some online platforms charge less than $400 for help creating one.

Second, remember that even when your closest loved ones are much younger than you are, it’s impossible to predict who will die first. If you do create a living trust, it should include a backup plan in case someone named in it dies before you. You can choose a “contingent beneficiary” – someone who will take the property if the primary beneficiary dies first. You can also choose a successor trustee who will manage the trust if the primary trustee dies first or declines to serve.

Finally, it’s important to carefully consider how best to divide the estate.

Hackman’s children and some of his other relatives may ultimately receive millions through his trust. But parents in stepfamilies must often make difficult decisions about how to divide their estate between a surviving spouse and any children they had with other partners.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Gene Hackman had a will, but the public may never find out who inherits his $80M fortune – https://theconversation.com/gene-hackman-had-a-will-but-the-public-may-never-find-out-who-inherits-his-80m-fortune-259650

Too many em dashes? Weird words like ‘delves’? Spotting text written by ChatGPT is still more art than science

Source: The Conversation – USA (2) – By Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

Language experts fare no better than everyday people. Aitor Diago/Moment via Getty Images

People are now routinely using chatbots to write computer code, summarize articles and books, or solicit advice. But these chatbots are also employed to quickly generate text from scratch, with some users passing off the words as their own.

This has, not surprisingly, created headaches for teachers tasked with evaluating their students’ written work. It’s also created issues for people seeking advice on forums like Reddit, or consulting product reviews before making a purchase.

Over the past few years, researchers have been exploring whether it’s even possible to distinguish human writing from artificial intelligence-generated text. But the best strategies to distinguish between the two may come from the chatbots themselves.

Too good to be human?

Several recent studies have highlighted just how difficult it is to determine whether text was generated by a human or a chatbot.

Research participants recruited for a 2021 online study, for example, were unable to distinguish between human- and AI-generated stories, news articles and recipes.

Language experts fare no better. In a 2023 study, editorial board members for top linguistics journals were unable to determine which article abstracts had been written by humans and which were generated by ChatGPT. And a 2024 study found that 94% of undergraduate exams written by ChatGPT went undetected by graders at a British university.

Clearly, humans aren’t very good at this.

A commonly held belief is that rare or unusual words can serve as “tells” regarding authorship, just as a poker player might somehow give away that they hold a winning hand.

Researchers have, in fact, documented a dramatic increase in relatively uncommon words, such as “delves” or “crucial,” in articles published in scientific journals over the past couple of years. This suggests that unusual terms could serve as tells that generative AI has been used. It also implies that some researchers are actively using bots to write or edit parts of their submissions to academic journals. Whether this practice reflects wrongdoing is up for debate.

In another study, researchers asked people about characteristics they associate with chatbot-generated text. Many participants pointed to the excessive use of em dashes – an elongated dash used to set off text or serve as a break in thought – as one marker of computer-generated output. But even in this study, the participants’ rate of AI detection was only marginally better than chance.

Given such poor performance, why do so many people believe that em dashes are a clear tell for chatbots? Perhaps it’s because this form of punctuation is primarily employed by experienced writers. In other words, people may believe that writing that is “too good” must be artificially generated.

But if people can’t intuitively tell the difference, perhaps there are other methods for determining human versus artificial authorship.

Stylometry to the rescue?

Some answers may be found in the field of stylometry, in which researchers employ statistical methods to detect variations in the writing styles of authors.

I’m a cognitive scientist who authored a book on the history of stylometric techniques. In it, I document how researchers developed methods to establish authorship in contested cases, or to determine who may have written anonymous texts.

One tool for determining authorship was proposed by the Australian scholar John Burrows. He developed Burrows’ Delta, a computerized technique that examines the relative frequency of common words, as opposed to rare ones, that appear in different texts.

It may seem counterintuitive to think that someone’s use of words like “the,” “and” or “to” can determine authorship, but the technique has been impressively effective.
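For readers curious about the mechanics, here is a minimal sketch of Burrows' Delta in Python. It assumes plain-text inputs, naive whitespace tokenization and a background corpus used only to select common words and normalize their frequencies; real stylometry toolkits do considerably more preprocessing.

```python
# A minimal sketch of Burrows' Delta, assuming naive whitespace tokenization and a
# background corpus for picking common words and normalizing their frequencies.
from collections import Counter
import statistics


def rel_freqs(text, vocab):
    """Relative frequency of each vocabulary word in one text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in vocab]


def burrows_delta(text_a, text_b, corpus_texts, n_words=150):
    """Mean absolute difference of z-scored frequencies of the corpus's most common words.
    A smaller Delta suggests the two texts are stylistically closer."""
    corpus_counts = Counter(w for t in corpus_texts for w in t.lower().split())
    vocab = [w for w, _ in corpus_counts.most_common(n_words)]

    profiles = [rel_freqs(t, vocab) for t in corpus_texts]
    a, b = rel_freqs(text_a, vocab), rel_freqs(text_b, vocab)

    diffs = []
    for i in range(len(vocab)):
        column = [p[i] for p in profiles]
        mu = statistics.mean(column)
        sigma = statistics.pstdev(column) or 1e-9  # avoid division by zero for invariant words
        diffs.append(abs((a[i] - mu) / sigma - (b[i] - mu) / sigma))
    return statistics.mean(diffs)
```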

A stylometric technique called Burrows’ Delta was used to identify LaSalle Corbell Pickett as the author of love letters attributed to her deceased husband, Confederate Gen. George Pickett.
Encyclopedia Virginia

Burrows’ Delta, for example, was used to establish that Ruth Plumly Thompson, L. Frank Baum’s successor, was the author of a disputed book in the “Wizard of Oz” series. It was also used to determine that love letters attributed to Confederate Gen. George Pickett were actually the inventions of his widow, LaSalle Corbell Pickett.

A major drawback of Burrows’ Delta and similar techniques is that they require a fairly large amount of text to reliably distinguish between authors. A 2016 study found that at least 1,000 words from each author may be required. A relatively short student essay, therefore, wouldn’t provide enough input for a statistical technique to work its attribution magic.

More recent work has made use of what are known as BERT language models, which are trained on large amounts of human- and chatbot-generated text. The models learn the patterns that are common in each type of writing, and they can be much more discriminating than people: The best ones are between 80% and 98% accurate.
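In practice, applying such a classifier can be as simple as the sketch below, which uses the Hugging Face transformers library; the model name is a placeholder for a hypothetical fine-tuned checkpoint rather than a specific detector discussed here, and accuracy depends entirely on the data it was trained on.

```python
# Sketch of applying a BERT-style detector via the Hugging Face transformers pipeline.
# The checkpoint name is a placeholder; substitute a model fine-tuned on labeled
# human-written vs. AI-generated text.
from transformers import pipeline

MODEL_NAME = "your-org/ai-text-detector"  # hypothetical checkpoint, not a real published model

detector = pipeline("text-classification", model=MODEL_NAME)

passage = "The implications of this notable development are significant and complex."
result = detector(passage)[0]
print(f"label={result['label']}  score={result['score']:.2f}")
```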

However, these machine-learning models are “black boxes” – that is, we don’t really know which features of texts are responsible for their impressive abilities. Researchers are actively trying to find ways to make sense of them, but for now, it isn’t clear whether the models are detecting specific, reliable signals that humans can look for on their own.

A moving target

Another challenge for identifying bot-generated text is that the models themselves are constantly changing – sometimes in major ways.

Early in 2025, for example, users began to express concerns that ChatGPT had become overly obsequious, with mundane queries deemed “amazing” or “fantastic.” OpenAI addressed the issue by rolling back some changes it had made.

Of course, the writing style of a human author may change over time as well, but it typically does so more gradually.

At some point, I wondered what the bots had to say for themselves. I asked ChatGPT-4o: “How can I tell if some prose was generated by ChatGPT? Does it have any ‘tells,’ such as characteristic word choice or punctuation?”

The bot admitted that distinguishing human from nonhuman prose “can be tricky.” Nevertheless, it did provide me with a 10-item list, replete with examples.

These included the use of hedges – words like “often” and “generally” – as well as redundancy, an overreliance on lists and a “polished, neutral tone.” It did mention “predictable vocabulary,” which included certain adjectives such as “significant” and “notable,” along with academic terms like “implication” and “complexity.” However, though it noted that these features of chatbot-generated text are common, it concluded that “none are definitive on their own.”

Chatbots are known to hallucinate, or make factual errors.

But when it comes to talking about themselves, they appear to be surprisingly perceptive.

The Conversation

Roger J. Kreuz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Too many em dashes? Weird words like ‘delves’? Spotting text written by ChatGPT is still more art than science – https://theconversation.com/too-many-em-dashes-weird-words-like-delves-spotting-text-written-by-chatgpt-is-still-more-art-than-science-259629

Water recycling is paramount for space stations and long-duration missions − an environmental engineer explains how the ISS does it

Source: The Conversation – USA – By Berrin Tansel, Professor of Civil and Environmental Engineering, Florida International University

The water recovery system on the ISS is state of the art. Roscosmos State Space Corporation via AP, File

When you’re on a camping trip, you might have to pack your own food and maybe something to filter or treat water that you find. But imagine your campsite is in space, where there’s no water, and packing jugs of water would take up room when every inch of cargo space counts. That’s a key challenge engineers faced when designing the International Space Station.

Before NASA developed an advanced water recycling system, water made up nearly half the payload of shuttles traveling to the ISS. I am an environmental engineer and have conducted research at Kennedy Space Center’s Space Life Sciences Laboratory. As part of this work, I helped to develop a closed-loop water recovery system.

Today, NASA recovers over 90% of the water used in space. Clean water keeps an astronaut crew hydrated, hygienic and fed, as the crew can use it to rehydrate food. Recovering used water is a cornerstone of closed-loop life support, which is essential for future lunar bases, Mars missions and even potential space settlements.

A close-up view of the water recovery system’s racks – these contain the hardware that provides a constant supply of clean water for four to six crew members aboard the ISS.
NASA

NASA’s environmental control and life support system is a set of equipment and processes that perform several functions to manage air and water quality, waste, atmospheric pressure and emergency response systems such as fire detection and suppression. The water recovery system − one component of environmental control and life support − supports the astronauts aboard the ISS and plays a central role in water recycling.

Water systems built for microgravity

In microgravity environments like the ISS, every form of water available is valuable. The water recovery systems on the ISS collect water from several sources, including urine, moisture in the cabin air, and hygiene water from activities such as brushing teeth.

On Earth, wastewater includes various types of water: residential wastewater from sinks, showers and toilets; industrial wastewater from factories and manufacturing processes; and agricultural runoff, which contains fertilizers and pesticides.

In space, astronaut wastewater is much more concentrated than Earth-based wastewater. It contains significantly higher levels of urea – a compound from urine – salts, and surfactants from soaps and materials used for hygiene. To make the water safe to drink, the system needs to remove all of these quickly and effectively.

The water recovery systems used in space employ some of the same principles as Earth-based water treatment. However, they are specifically engineered to function in microgravity with minimal maintenance. These systems also must operate for months or even years without the need for replacement parts or hands-on intervention.

NASA’s water recovery system captures and recycles nearly all forms of water used or generated aboard the space station. It routes the collected wastewater to a system called the water processor assembly, where it is purified into safe, potable water that exceeds many Earth-based drinking water standards.

The water recovery and treatment system on the ISS consists of several subsystems.

Recovering water from urine and sweat

The urine processor assembly recovers about 75% of the water from urine by heating and vacuum compression. The recovered water is sent to the water processor assembly for further treatment. The remaining liquid, called brine, still contains a significant amount of water. So, NASA developed a brine processor assembly system to extract the final fraction of water from this urine brine.

In the brine processor assembly, warm, dry air evaporates water from the leftover brine. A filter separates the contaminants from the water vapor, and the water vapor is collected to become drinking water. This innovation pushed the water recovery system’s overall water recovery rate to an impressive 98%. The remaining 2% is combined with the other waste generated.
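To see how the staged recovery figures combine, here is a short illustrative calculation in Python. Only the 75% urine-processor figure and the roughly 98% overall rate come from the article; the wastewater split, the condensate recovery rate and the brine-step recovery rate are assumptions chosen for illustration.

```python
# Rough sketch of how staged recovery rates combine into an overall rate.
# Only UPA_RECOVERY (75%) and the ~98% overall target come from the article;
# the other fractions are assumptions for illustration.

URINE_FRACTION = 0.30          # assumed share of collected wastewater that is urine
CONDENSATE_RECOVERY = 0.99     # assumed recovery of cabin humidity condensate
UPA_RECOVERY = 0.75            # urine processor assembly (from the article)
BRINE_STEP_RECOVERY = 0.85     # assumed share of the remaining brine water the brine processor reclaims

urine_loop = UPA_RECOVERY + (1 - UPA_RECOVERY) * BRINE_STEP_RECOVERY
overall = (1 - URINE_FRACTION) * CONDENSATE_RECOVERY + URINE_FRACTION * urine_loop
print(f"urine loop: {urine_loop:.1%}, overall: {overall:.1%}")  # about 96% and 98% with these assumptions
```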

The filter used in brine processing has helped achieve 98% recovery.
NASA

The air revitalization system condenses moisture from the cabin air – primarily water vapor from sweat and exhalation – into liquid water. It directs the recovered water to the water processor assembly, which treats all the collected water.

Treating recovered water

The water processor assembly’s treatment process includes several steps.

First, all the recovered water goes through filters to remove suspended particles such as dust. Then, a series of filters removes salts and some of the organic contaminants, followed by a chemical process called catalytic oxidation that uses heat and oxygen to break down the remaining organic compounds. The final step is adding iodine to the water to prevent microbial growth while it is stored.

Japan Aerospace Exploration Agency astronaut Koichi Wakata next to the International Space Station’s water recovery system, which recycles urine and wastewater into drinking water. As Wakata humorously puts it, ‘Here on board the ISS, we turn yesterday’s coffee into tomorrow’s coffee.’

The output is potable water — often cleaner than municipal tap water on Earth.

Getting to Mars and beyond

To make human missions to Mars possible, NASA has estimated that spacecraft must reclaim at least 98% of the water used on board. While self-sustaining travel to Mars is still a few years away, the new brine processor on the ISS has increased the water recovery rate enough that this 98% goal is now in reach. However, more work is needed to develop a compact system that can be used on a spacecraft.

The journey to Mars is complex, not just because of the distance involved, but because Mars and Earth are constantly moving in their respective orbits around the Sun.

The distance between the two planets varies depending on their positions. On average, they’re about 140 million miles (225 million km) apart; at the closest theoretical approach, when the two planets’ orbits bring them near each other, the distance shrinks to about 33.9 million miles (54.6 million km).

A typical crewed mission is expected to take about nine months one way. A round-trip mission to Mars, including surface operations and return trajectory planning, could take around three years. In addition, launch windows occur only every 26 months, when Earth and Mars align favorably.
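The 26-month rhythm of launch windows follows from the planets' orbital periods. Here is a quick check using the standard synodic-period formula; the orbital periods are standard textbook values rather than figures from the article.

```python
# Why launch windows recur roughly every 26 months: Earth and Mars line up again after one
# synodic period, 1 / (1/T_earth - 1/T_mars). Orbital periods are standard values.

T_EARTH_DAYS = 365.25
T_MARS_DAYS = 686.98

synodic_days = 1 / (1 / T_EARTH_DAYS - 1 / T_MARS_DAYS)
print(f"{synodic_days:.0f} days ≈ {synodic_days / 30.44:.1f} months")  # ~780 days, about 26 months
```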

As NASA prepares to send humans on multiyear expeditions to the red planet, space agencies around the world continue to focus on improving propulsion and perfecting life support systems. Advances in closed-loop systems, robotic support and autonomous operations are all inching the dream of putting humans on Mars closer to reality.

The Conversation

Berrin Tansel does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Water recycling is paramount for space stations and long-duration missions − an environmental engineer explains how the ISS does it – https://theconversation.com/water-recycling-is-paramount-for-space-stations-and-long-duration-missions-an-environmental-engineer-explains-how-the-iss-does-it-260171

To better detect chemical weapons, materials scientists are exploring new technologies

Source: The Conversation – USA – By Olamilekan Joseph Ibukun, Postdoctoral Research Associate in Chemistry, Washington University in St. Louis

German troops make their way through a cloud of smoke or gas during a gas training drill, circa 1916. Henry Guttmann/Hulton Archive via Getty Images

Chemical warfare is one of the most devastating forms of conflict. It leverages toxic chemicals to disable, harm or kill without any physical confrontation. Across various conflicts, it has caused tens of thousands of deaths and affected over a million people through injury and long-term health consequences.

Despite its name, mustard gas isn’t a gas at room temperature – it’s a yellow-brown, oily liquid that can vaporize into a toxic mist. The chemist Viktor Meyer refined the synthesis of mustard gas into a more stable form. It gained international notoriety during World War I and has been used as a weapon many times since.

German soldiers release poison gas from cylinders during World War I.
Henry Guttmann Collection/Hulton Archive via Getty Images

It is nearly impossible to guarantee that mustard gas will never be used in the future, so the best way to prepare for the possibility is to develop a very easy way to detect it in the field.

My colleagues and I, who are chemists and materials science researchers, are keen on developing a rapid, easy and reliable way to detect toxic chemicals in the environment. But doing so will require overcoming several technological challenges.

Effects on human health and communities

Mustard gas damages the body at the cellular level. When it comes into contact with the skin or eyes or is inhaled, it dissolves easily in fats and tissues and quickly penetrates the body. Once inside the body, it changes into a highly reactive form that attaches to and damages DNA, proteins and other essential parts of cells. Once it reacts with DNA, the damage can’t be undone – it may stop cells from functioning properly and kill them.

Mustard gas exposure can trigger large, fluid-filled blisters on the skin. It can also severely irritate the eyes, leading to redness, swelling and even permanent blindness. When inhaled, it burns the lining of the airways, leading to coughing, difficulty breathing and long-term lung damage. Symptoms often don’t appear for several hours, which delays treatment.

The forearms of test subjects exposed to nitrogen mustard and lewisite, chemicals that cause large, fluid-filled blisters on the skin.
Naval Research Laboratory

Even small exposures can cause serious health problems. Over time, it can weaken the immune system and has been linked to an increased risk of cancers due to its effects on DNA.

The effects of even a one-time exposure can carry down to the next generation. For example, studies have reported physical abnormalities and disorders in the children of men who were exposed to mustard gas, while some of the men became infertile.

The best way to prevent serious health problems is to detect mustard gas early and keep people away from it.

Detecting mustard gas early

The current methods to detect mustard gas rely on sophisticated chemistry techniques. These require expensive, delicate instruments that are difficult to carry to the war front and are too fragile to be kept in the field as a tool for detecting toxic chemicals. These instruments are conventionally designed for the laboratory, where they stay in one location and are handled carefully.

Many researchers have attempted to improve detection techniques. While each offers a glimpse of hope, they also come with setbacks.

Some scientists have been working on a wearable electrochemical biosensor that could detect mustard gas in both liquid and vapor form. They succeeded in developing tiny devices that provide real-time alerts. But stability became a problem: the enzymes degrade, and environmental noise can cloud the signal. Because of this issue, these sensors haven’t been used successfully in the field.

To simplify detection, others developed molecularly imprinted polymer test strips targeting thiodiglycol, a breakdown product of mustard gas. These strips change color when they come into contact with the compound, and they’re cheap, portable and easy to use in the field. The main limitation is that they detect a chemical left behind after mustard gas has been used, not the agent itself – so they confirm exposure after the fact rather than warn of an ongoing threat.

One of the most promising breakthroughs came in 2023 in the form of fluorescent probes, which change color when they sense the chemical. Each probe acts as a tiny molecular detective that recognizes the target chemical and generates a measurable signal. But these probes remain vulnerable to environmental interference, such as humidity and temperature, which makes them less reliable in rugged field conditions.

Some other examples under development include a chemical sensor device that families could have at home, or even a wearable device.

Wearable devices are tricky, however, since they need to be small. Researchers have been trying to integrate tiny nanomaterials into sensors. Other teams are looking at how to incorporate artificial intelligence. Artificial intelligence could help a device interpret data faster and respond more quickly.
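
As a rough illustration of the kind of on-device interpretation that could involve, the sketch below trains a toy classifier on simulated readings. The signal model, the summary features and the choice of Python with scikit-learn are illustrative assumptions, not the detection chemistry or software described in this article.

# Illustrative sketch only: a toy classifier showing how an AI-assisted sensor
# might flag suspicious readings. The signal model and numbers are invented
# for demonstration and are not the chemistry described in this article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_window(agent_present):
    # One second of simulated sensor output: background noise,
    # plus a rising response if an agent-like chemical is present.
    window = rng.normal(0.0, 0.05, 100)
    if agent_present:
        window += np.linspace(0.0, 1.0, 100) * rng.uniform(0.3, 1.0)
    return window

def features(window):
    # Summary statistics a low-power device could compute on the fly.
    return [window.mean(), window.std(), window.max() - window.min()]

# Build a small labeled training set: 0 = clean air, 1 = agent-like response.
X = [features(simulate_window(i % 2 == 1)) for i in range(400)]
y = [i % 2 for i in range(400)]

model = LogisticRegression().fit(X, y)

# Score a fresh reading and report how likely it is to be agent-like.
new_reading = features(simulate_window(agent_present=True))
print("Alarm probability:", round(model.predict_proba([new_reading])[0, 1], 3))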

Researchers bridging the gap

Now at Washington University in St. Louis, Makenzie Walk and I are part of a team of researchers working on detecting these chemicals, led by Jennifer Heemstra and M.G. Finn. Another member is Seth Taylor, a postdoctoral researcher at Georgia Tech.

Our team of researchers hopes to use the lessons learned from prior sensors to develop an easy and reliable way to rapidly detect these chemicals in the field. Our approach will involve testing different molecular sensor designs on compounds modeled after specific chemical weapons. The sensors would initiate a cascade of reactions that generate a bright, colorful fluorescent signal in the laboratory.

We are figuring out which compounds these chemicals react with best, and which might make good candidates for use in a detector. These tests allow us to determine how much of the chemical will need to be in the air to trigger a reaction that we can detect, as well as how long it will need to be in the air before we can detect it.
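
One standard way to express ‘how much is enough to detect’ is a limit of detection, which analytical chemists often estimate as roughly three times the standard deviation of blank readings divided by the slope of a calibration curve. The sketch below applies that convention to invented fluorescence numbers; none of the values are measurements from our experiments.

# Back-of-envelope limit-of-detection estimate using the common 3*sigma/slope
# convention from analytical chemistry. All numbers are made up for
# illustration; they are not data from the sensors described here.
import numpy as np

# Hypothetical calibration: fluorescence signal measured at known vapor
# concentrations (arbitrary units vs. parts per billion).
concentrations_ppb = np.array([0, 5, 10, 20, 40, 80])
signal = np.array([0.02, 0.11, 0.21, 0.43, 0.82, 1.65])

slope, intercept = np.polyfit(concentrations_ppb, signal, 1)

# Repeated blank (agent-free) readings characterize the noise floor.
blanks = np.array([0.018, 0.022, 0.020, 0.025, 0.017, 0.021])
sigma_blank = blanks.std(ddof=1)

lod_ppb = 3 * sigma_blank / slope
print(f"Estimated limit of detection: {lod_ppb:.2f} ppb")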

Additionally, we are investigating how the structure of the chemicals we work with influences how they react. Some react more quickly than others, and understanding their behavior will help us pick the right compounds for our detector. We want them to be sensitive enough to detect even small amounts of mustard gas quickly, but not so sensitive that they frequently give false positive results.

Eliminating these weapons altogether would be the best way to prevent their future use. The 1997 Chemical Weapons Convention bans the production, stockpiling and use of chemical weapons. But countries such as Egypt, North Korea and South Sudan have not signed or officially adopted the international arms control treaty.

To discourage countries that don’t sign the treaty from using these weapons, other countries can use sanctions. For example, the U.S. learned that Sudan used chemical weapons in 2024 during a conflict, and in response it placed sanctions on the government.

Even without continued use of these chemical weapons, some traces of the chemical may still linger in the environment. Technology that can quickly identify the chemical threat in the environment could prevent more disasters from occurring.

As scientists and global leaders collectively strive for a safer world, the ability to detect when a dangerous chemical is released or is present in real time will improve a community’s preparedness, protection and peace of mind.

The Conversation

Makenzie Walk and Jen Heemstra contributed to this article.

The Heemstra lab receives funding from the Defense Threat Reduction Agency (DTRA).

ref. To better detect chemical weapons, materials scientists are exploring new technologies – https://theconversation.com/to-better-detect-chemical-weapons-materials-scientists-are-exploring-new-technologies-257296

Kamchatka earthquake: what do we know about one of the ten most powerful earthquakes ever recorded?

Source: The Conversation – France in French (2) – By Dee Ninis, Earthquake Scientist, Monash University

On Wednesday, July 30, at around 11:30 am local time, a magnitude 8.8 earthquake struck the coast of the Kamchatka Peninsula in Russia’s far east. The region has been the site of seismic activity for several months, and dozens of aftershocks have already occurred around this quake. Tsunami warnings were issued rapidly all around the Pacific – and some have already been lifted.


At a depth of around 20 kilometres, this powerful earthquake – one of the ten strongest ever recorded and the largest anywhere in the world since 2011 – caused damage and injuries in the nearest major city, Petropavlovsk-Kamchatsky, just 119 kilometres from the epicentre.

Tsunami warnings and evacuations were issued in Russia, Japan and Hawaii, and advisories were put out for the Philippines, Indonesia and even New Zealand and Peru.

The entire Pacific region is highly exposed to powerful earthquakes and to the tsunamis they generate, because it sits on the “Ring of Fire”, a zone of intense seismic and volcanic activity. The ten most powerful earthquakes ever recorded in modern history all occurred on the Ring of Fire.

Here is why plate tectonics makes this part of the world so unstable.

Why does Kamchatka experience such violent earthquakes?

Off the Kamchatka Peninsula lies the Kuril Trench, a tectonic boundary where the Pacific Plate is being pushed beneath the Okhotsk Plate.

Although tectonic plates move continuously relative to one another, the interface between them is often “locked”. Stress from the plate motion builds up until it exceeds the strength of the interface, then is released in a sudden rupture: an earthquake.

Because the interface at a plate boundary is so extensive, both along its length and down its depth, a rupture can spread across vast areas of the boundary. This produces some of the largest and potentially most destructive earthquakes in the world.

Another factor influencing the frequency and intensity of earthquakes in subduction zones is the speed at which the two plates move relative to each other.

In Kamchatka’s case, the Pacific Plate moves at about 75 millimetres per year relative to the Okhotsk Plate. That is relatively fast for tectonic plates, which is why earthquakes here are more frequent than in other subduction zones. In 1952, a magnitude 9.0 earthquake occurred in the same subduction zone, only about 30 kilometres from today’s magnitude 8.8 event.
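
As a back-of-the-envelope illustration of how much strain such a rate can store up, the sketch below converts 75 millimetres per year into slip accumulated since the 1952 earthquake, assuming (simplistically) that the plate interface has remained fully locked the whole time.

# Back-of-envelope estimate: slip deficit accumulated on a fully locked
# Kamchatka interface since the 1952 M9.0 earthquake, at the ~75 mm/yr
# convergence rate quoted above. Real plate coupling is more complicated.
convergence_rate_mm_per_year = 75
years_since_1952 = 2025 - 1952

slip_deficit_m = convergence_rate_mm_per_year * years_since_1952 / 1000
print(f"Accumulated slip deficit: about {slip_deficit_m:.1f} m")  # roughly 5.5 m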

Other examples of subduction plate-boundary earthquakes include the magnitude 9.1 earthquake that struck Japan’s Tohoku region in 2011 and the magnitude 9.3 earthquake that struck Sumatra and the Andaman Islands of Indonesia on December 26, 2004. Both began at relatively shallow depth and ruptured the plate boundary all the way up to the surface.

They lifted one side of the seafloor relative to the other, displacing the entire column of ocean water above and triggering devastating tsunamis. In the case of the Sumatra earthquake, the seafloor rupture extended over roughly 1,400 kilometres.

What happens next?

At the time of writing, about six hours after the earthquake, 35 aftershocks of magnitude greater than 5.0 had already been recorded, according to the United States Geological Survey (USGS).

Aftershocks occur as stresses in the Earth’s crust redistribute after the mainshock. They are often about one magnitude unit smaller than the mainshock, which in the case of today’s earthquake means that aftershocks of magnitude greater than 7.5 are possible.
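
To put that one-magnitude-unit difference in perspective, the sketch below uses the standard Gutenberg–Richter energy relation (log10 E ≈ 1.5 M + 4.8, with E in joules) to compare the magnitude 8.8 mainshock with a hypothetical magnitude 7.5 aftershock. This is a general seismological rule of thumb, not a calculation published for this event.

# Energy comparison using the standard Gutenberg-Richter energy-magnitude
# relation: log10(E) = 1.5*M + 4.8, with E in joules. Rule of thumb only.
def seismic_energy_joules(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

mainshock = seismic_energy_joules(8.8)
aftershock = seismic_energy_joules(7.5)   # hypothetical large aftershock

print(f"M8.8 mainshock:  {mainshock:.2e} J")
print(f"M7.5 aftershock: {aftershock:.2e} J")
print(f"The mainshock released about {mainshock / aftershock:.0f} times more energy")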




Read more:
Why there are cascading earthquakes in Turkey and Syria


For an earthquake of this size, aftershocks can continue for weeks or even months, although their magnitude and frequency generally decrease over time.

Today’s earthquake also generated a tsunami that has already reached coastal communities on the Kamchatka Peninsula, in the Kuril Islands and on Hokkaido in Japan.

Over the coming hours, the tsunami will spread across the Pacific, reaching Hawaii about six hours after the earthquake and continuing on to Chile and Peru. [Translator’s note: at the time of this translation, the alerts for Hawaii had been downgraded and those for the Philippines cancelled. Waves of up to one metre in height reached the west coast of the United States, in California and Oregon.]
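
Those travel times follow from the physics of tsunami propagation: in the open ocean, a tsunami moves at roughly the square root of g times the water depth. The sketch below uses an illustrative average depth of 4,000 metres rather than a value measured for this event.

# Tsunami speed in the open ocean follows the shallow-water wave relation
# v = sqrt(g * d). The 4,000 m depth is an illustrative average for the deep
# Pacific, not a value from this article; real speeds vary along the path.
import math

g = 9.81            # gravitational acceleration, m/s^2
depth_m = 4000      # assumed average ocean depth

speed_m_per_s = math.sqrt(g * depth_m)
speed_km_per_h = speed_m_per_s * 3.6
print(f"Open-ocean tsunami speed: roughly {speed_km_per_h:.0f} km/h")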

Tsunami specialists will continue to refine their models of the tsunami’s effects as it propagates, and civil defence authorities will provide authoritative advice on the expected local impacts.




Read more:
Earthquake and tsunami alerts: how to gain precious seconds


What lessons does this earthquake hold for other parts of the world?

Fortunately, earthquakes of this size are rare. Their effects, however, can be devastating both locally and globally.

Beyond its magnitude, several aspects of today’s Kamchatka earthquake will make it a particularly important subject of research.

For example, the region has experienced very intense seismic activity in recent months, including a magnitude 7.4 earthquake on July 20. How this earlier activity influenced the location and timing of today’s earthquake will be a crucial part of that research.

Like Kamchatka and northern Japan, New Zealand sits above a subduction zone – two of them, in fact. The larger one, the Hikurangi subduction zone, extends off the east coast of the North Island.

Based on the characteristics of this tectonic interface and the geological record of past earthquakes, the Hikurangi subduction zone is capable of producing magnitude 9 earthquakes. That has never happened in recorded history, but if it did, it would generate a tsunami.

The threat of a major subduction-zone earthquake never goes away. Today’s earthquake in Kamchatka is an important reminder for everyone living in seismically active regions to stay alert and heed the warnings of civil defence authorities.

The Conversation

Dee Ninis works at the Seismology Research Centre, is vice-president of the Australian Earthquake Engineering Society, and serves on the committee of the Geological Society of Australia – Victoria Division.

John Townend receives funding from the Marsden and Catalyst funds of the Royal Society Te Apārangi, the Natural Hazards Commission Toka Tū Ake, and New Zealand’s Ministry of Business, Innovation and Employment. He is a former president and director of the Seismological Society of America and president of the New Zealand Geophysical Society.

ref. Séisme au Kamtchatka : que sait-on de l’un des dix plus puissants tremblements de terre jamais enregistrés ? – https://theconversation.com/seisme-au-kamtchatka-que-sait-on-de-lun-des-dix-plus-puissants-tremblements-de-terre-jamais-enregistres-262251

More than 50% of Detroit students regularly miss class – and schools alone can’t solve the problem

Source: The Conversation – USA – By Jeremy Singer, Assistant Professor of Education, Wayne State University

Nobody learns in an empty classroom. Jeffrey Basinger/Newsday RM via Getty Images

Thousands of K-12 students in Detroit consistently miss days of school.

Chronic absenteeism is defined as missing at least 10% of school days – or 18 in a 180-day academic year. In Detroit, chronic absenteeism rose during the COVID-19 pandemic and remains a persistent challenge.

To encourage attendance, the Detroit Public Schools Community District is getting creative. This past year, Michigan’s largest school district awarded US$200 gift cards to nearly 5,000 high schoolers for attending all their classes during a two-week period, and Superintendent Nikolai Vitti also floated the idea of providing bikes to help students get to class. Some district students lack access to reliable transportation.

To understand the consequences of kids regularly missing school, The Conversation U.S. spoke with Sarah Lenhoff, associate professor of education at Wayne State University and director of the Detroit Partnership for Education Equity & Research, an education-focused research collaborative, and Jeremy Singer, an assistant professor of education at Wayne State University. Lenhoff and Singer wrote a book published in March about the socioeconomic drivers of chronic absenteeism in K-12 schools and how policymakers and communities, not just educators, can help.

Is chronic absenteeism the same as truancy?

No. Truancy is how schools have thought about and dealt with student attendance problems since the early days of public education in the United States in the 19th century and is still defined in state law across the country. It focuses on “unexcused” absences and compliance with mandatory school attendance laws. By contrast, chronic absenteeism includes any absence – whether “excused” or “unexcused” – because each absence can be consequential for student learning and development.

Chronic absenteeism is usually defined as missing 10% or more school days. The 10% threshold is somewhat arbitrary, since researchers know that the consequences of missing school accumulate with each day missed. But the specific definition of chronic absenteeism has been solidified in research and by policymakers. Most states now include a measure of chronic absenteeism in their education accountability systems.
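
In practice, the rule is straightforward to apply: a student is flagged once days missed reach one-tenth of the days enrolled, whatever the reason for the absences. Here is a minimal sketch with invented attendance records.

# Minimal sketch of the 10% chronic-absenteeism rule. The records below are
# invented for illustration; real data would come from a district's
# attendance system.
def is_chronically_absent(days_missed, days_enrolled=180):
    return days_missed >= 0.10 * days_enrolled

students = {"Student A": 5, "Student B": 18, "Student C": 31}  # days missed

for name, missed in students.items():
    flag = "chronically absent" if is_chronically_absent(missed) else "not chronically absent"
    print(f"{name}: {missed} days missed -> {flag}")

rate = sum(is_chronically_absent(m) for m in students.values()) / len(students)
print(f"Chronic absenteeism rate: {rate:.0%}")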

How big of a problem is chronic absenteeism in Detroit’s K-12 public schools?

Detroit has among the highest chronic absenteeism rates in the country: more than 50% in recent school years. Prior to the pandemic, the average rate of chronic absenteeism nationwide was about 15%, and it was around 24% in 2024.

In one of our prior studies, we found Detroit’s chronic absenteeism rate was much higher than in other major cities – even in others with high absenteeism rates, such as Milwaukee or Philadelphia.

This is related to the depth of social and economic inequalities that Detroit families face. Compared to other major cities, Detroit has higher rates of poverty, unemployment and crime. It has worse public health conditions. And even its winters are some of the coldest of major U.S. cities. All of these factors make it harder for kids to attend school.

Rates of chronic absenteeism spiked in Detroit during the COVID-19 pandemic, as they did statewide. The Detroit Public Schools Community District has come close to returning to its pre-pandemic levels of absenteeism. The rates were 66% in the 2023-24 school year compared to 62% in the school year right before the pandemic began, 2018-19.

Detroit’s charter schools have struggled more to bring down their chronic absenteeism rates post-pandemic, but the numbers are lower overall – 54% in the 2023-24 school year compared to 36% in 2018-19.

A school social worker from Noble Elementary-Middle School protests outside Detroit Public Schools headquarters. Bill Pugliano/Getty Images

How does missing school affect students?

The connection between attendance and achievement is clear: Students who miss more school on average score worse on reading and math tests. As early as pre-K, being chronically absent is linked to lower levels of school readiness, both academically and behaviorally. By high school, students who miss more school tend to earn lower grades and GPAs and are less likely to graduate.

And it’s not just the absent students who are affected. When more kids in a class miss school regularly, that is associated with lower overall test scores and worse measures of skills such as executive functioning for other students in that class.

Does chronic absenteeism vary by family income or other factors?

Rates of chronic absenteeism are much higher among students from low-income families. In these cases, absenteeism is often driven by factors outside a student’s control such as unstable housing, unreliable transportation, health issues, lack of access to child care, or parents who work nontraditional hours. These challenges make it harder for students to get to school consistently, even when families are deeply committed to education.

School-based factors also influence attendance. Students are more likely to be chronically absent in schools with weaker relationships with families or a less positive school culture. However, even schools with strong practices may struggle if they serve communities facing deep socioeconomic hardship.

Ultimately, we don’t view chronic absenteeism as an issue of student motivation or family values. Rather, we see it as an issue related to the unequal conditions that shape students’ lives.

Does punishing absent kids or their parents work?

Many schools have suspended students for absences, or threatened their parents with fines or jail time. In some cases, families have lost social services due to their children’s chronic absenteeism.

Research shows these strategies are not only ineffective, they can make the problem worse.

For example, we found that when schools respond with punishment instead of support, they often alienate the very students and families who are already struggling to stay connected. Harsh responses can deepen mistrust between families and schools. When absences are treated as a personal failing caused by a lack of motivation or irresponsibility rather than symptoms of deeper challenges, students and parents may disengage further.

Instead, educators might ask: What’s getting in the way of consistent attendance, and how can we help? That shift from blame to understanding can help improve attendance.

What can policymakers, school districts and community organizations do to reduce chronic absenteeism?

Chronic absenteeism is a societal issue, not just a school problem. In other words, we need to recognize that chronic absenteeism is not a problem that schools can solve alone. While educators work to improve conditions within schools, policymakers and community leaders can take responsibility for the broader factors that influence attendance.

This could look like investing more resources and fostering collaboration across sectors such as health care, housing, transportation and social services to better support students and their families. Community organizations can play a role too, offering wraparound services such as mental health care, access to transportation, and after-school programming, all of which can support families. In the meantime, educators can focus on what they can control: strengthening communication with families, building supportive relationships and helping families connect with existing services that can remove attendance barriers.

The Conversation

Sarah Lenhoff receives funding from the Skillman Foundation, the Joyce Foundation, the Kresge Foundation, the William T. Grant Foundation, the American Institutes for Research, and the Urban Institute.

Jeremy Singer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. More than 50% of Detroit students regularly miss class – and schools alone can’t solve the problem – https://theconversation.com/more-than-50-of-detroit-students-regularly-miss-class-and-schools-alone-cant-solve-the-problem-260773

UK and France pledges won’t stop Netanyahu bombing Gaza – but Donald Trump or Israel’s military could

Source: The Conversation – UK – By Paul Rogers, Professor of Peace Studies, University of Bradford

Keir Starmer says unless there’s a ceasefire and a peace process leading to a two-state solution, Britain will recognise the state of Palestine at the UN in September. The UK prime minister is following a similar, albeit unconditional, pledge from the French president, Emmanuel Macron.

They are reacting to what Starmer referred to as the “intolerable situation” in Gaza. In Scotland, Donald Trump has also complained about the humanitarian catastrophe of people starving in Gaza, saying: “We’ve got to get the kids fed.”

Does this mean western politicians are finally prepared to act? Quite possibly. Will it have any discernible effect on Benjamin Netanyahu? Doubtful.

Trump still appears to trust Netanyahu to feed the people of Gaza, or so he told reporters as he flew back from his weekend of “golf buggy diplomacy” on July 29. And as long as the US president supports Netanyahu, the Israeli prime minister can act with few restraints.

True, the vigorous international reaction to the food crisis in Gaza has finally had some effect. But the Israeli response so far has been largely symbolic.

It has comprised air drops of aid by Israel and the United Arab Emirates, and some “tactical” or “humanitarian” pauses in the assault in parts of the Gaza Strip to allow for the delivery of aid. Air drops are good for publicity, but they deliver very little aid and are hugely expensive.

How did we get to this point? The current phase of the conflict started in mid-March, when the Israeli government began blocking all aid to Gaza.

That lasted two months until some shipments were allowed. In recent weeks, an average of about 70 trucks a day have crossed the border. But the reality is that 500-600 trucks a day are required to support and restore the health of 2 million people.

Meanwhile, more than 1,000 Palestinians have been killed – mostly shot – since May while trying to get food at one of the four overcrowded distribution sites run by the private, US-backed Gaza Humanitarian Foundation (GHF). Before the GHF system replaced them, UN agencies ran 400 distribution points across the territory.

What the daily pauses in some areas, which began on July 27, actually represent is far from clear, given that fighting continues in much of the Strip. There is little sign that Netanyahu’s government wants an early end to the war. From its perspective, there can only be peace when all the hostages are returned and Hamas has been destroyed.

But Hamas is proving far more resolute than expected. Its survival is little short of remarkable given the huge force the Israelis have used to try and destroy it.

The usual Israeli military priority in dealing with an insurgency is to follow what is known colloquially as the “Dahiya doctrine”. If an insurgency cannot be handled without serious casualties, then the Israel Defense Forces (IDF) directs its operations at civilian infrastructure and the general population to undermine support for the insurgents.

The tactic is so called because it was developed as a way of dealing with Hezbollah’s stronghold in the Dahiya suburb of southern Beirut during the 2006 Lebanon war. The reduction of much of Gaza to ruins is taking the doctrine to extremes, yet it is failing – Hamas is still there.

This is reportedly common knowledge in IDF circles, but rarely admitted in public. A notable exception is the senior retired IDF officer, Major-General Itzhak Brik.

Brik’s publicised view is that Hamas has already replaced its thousands of casualties with new recruits. They may not be trained in the conventional sense, but they have learnt their craft while surviving in a war zone and seeing so many of their friends and family killed and wounded.

No end in sight

Israel’s position may be that it will end the war if Hamas surrenders, disarms and goes into exile. The problem is that Hamas does not believe Israel would actually end the war.

Instead, it believes Gaza would be forcibly cleared and resettled, and the occupied West Bank would see a huge increase in settlers. In this scenario, a two-state solution would be a pipe dream, and Israel would be the regional superpower able to rise to any future challenge.

So, is there any prospect of Israel being forced to compromise, to accept a UN-monitored ceasefire and seek a negotiated settlement? External political pressure is certainly rising, especially the potential formal recognition of the state of Palestine by the UK and France.

But in both cases, the conditions for the road to peace are such that they are effectively non-starters. Macron envisages a “demilitarised Palestine” living alongside Israel. Starmer has called for Hamas to disarm and play no role in the future governance of Palestinians. Neither plan has the slightest chance of getting off the ground.

In any case, without Trump’s full backing it would still mean little. Economic and social sanctions by a state or group will have little impact because there will always be states or organisations sufficiently supportive of Israel to bypass them.

We are left with two possible routes to a settlement. One is that Trump is sufficiently motivated to insist Netanyahu negotiates.

That is unlikely, unless the US president somehow gets the idea that his own reputation is being damaged. Even then, the influence of the Israel lobby in the US, especially the support for Israel of tens of millions of Christian Zionists, is formidable.

The other route to a peace deal is if the war is becoming problematic for the Israeli military. If more of the IDF’s top brass recognise that this war, right from the start, was always going to be unwinnable, this might yet move the conflict in the direction of a settlement.


The Conversation

Paul Rogers does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. UK and France pledges won’t stop Netanyahu bombing Gaza – but Donald Trump or Israel’s military could – https://theconversation.com/uk-and-france-pledges-wont-stop-netanyahu-bombing-gaza-but-donald-trump-or-israels-military-could-262131