One of Hurricane Katrina’s most important lessons isn’t about storm preparations – it’s about injustice

Source: The Conversation – USA (2) – By Ivis García, Associate Professor of Landscape Architecture and Urban Planning, Texas A&M University

New Orleans residents wait to be rescued from a rooftop two days after Hurricane Katrina made landfall. AP Photo/David J. Phillipp

Twenty years after Hurricane Katrina swept through New Orleans, the images still haunt us: entire neighborhoods underwater, families stranded on rooftops and a city brought to its knees.

We study disaster planning at Texas A&M University and look for ways communities can improve storm safety for everyone, particularly low-income and minority neighborhoods.

Katrina made clear what many disaster researchers have long found: Hazards such as hurricanes may be natural, but the death and destruction are largely human-made.

A man with a distressed look walks in thigh-deep water while people watch from a doorway several steps above street level.
People walk out of their homes into New Orleans’ flooded streets after Hurricane Katrina on Aug. 29, 2005. In parts of the city, homes were underwater up to their roofs.
Mark Wilson/Getty Images

How New Orleans built inequality into its foundation

New Orleans was born unequal. As the city grew as a trade hub in the 1700s, wealthy residents claimed the best real estate, often on higher ground formed by river sediment. The city had little high ground, so everyone else was left in “back-of-town” areas, closer to swamps where land was cheap and flooding common.

In the early 1900s, new pumping technology enabled development in flood-prone swamplands and housing spread, but the pumping caused land subsidence that made flooding worse in neighborhoods such as Lakeview, Gentilly and Broadmoor.

Then redlining started in the 1930s. To guide federal loan decisions, government agencies began using maps that rated neighborhoods by financial risk. Predominantly Black neighborhoods were typically marked as “high risk,” regardless of the actual housing quality.

This created a vicious cycle: Black and low-income families were already stuck in flood-prone areas because that’s where cheap land was. Redlining kept their property values lower. Black Americans were also denied government-backed mortgages and GI Bill benefits that could have helped them move to safer neighborhoods on higher ground.

In this 1939 map prepared for the Federal Home Loan Bank Board, redlining separated New Orleans into grades. Green is an A, or first grade; followed by blue, yellow and red, which is last as a D, or fourth grade. The Lower Ninth Ward is the red block farthest to the right.
National Archives via Mapping Inequality/University of Richmond

Hurricane Katrina showed how those lines translate to vulnerability.

When history came calling

On Aug. 29, 2005, as Hurricane Katrina battered New Orleans, the levees protecting the city broke and water flooded about 80% of the city. The damage followed racial geography − the spatial patterns of where Black and white residents lived due to decades of segregation − like a blueprint.

About three-quarters of Black residents experienced serious flooding, compared with half of white residents.

People of all ages, including young children, stand in line to board a bus.
New Orleans residents who evacuated to the Superdome during Hurricane Katrina board buses to be taken to Houston on Sept. 1, 2005. Many of them lost their homes, and with much of New Orleans damaged, Houston took in evacuees.
Robert Sullivan/AFP via Getty Images

Between 100,000 and 150,000 people couldn’t evacuate. They were disproportionately people who were elderly, Black, poor and without cars. Among survivors who did not evacuate, 55% did not have a car or another way to get out, and 93% were Black. More than 1,800 people lost their lives.

This lack of transportation — what scholars call “transportation poverty” — left people stranded in the city’s bowl-shaped geography, unable to escape when the levees failed.

Recovery that made things worse

After Hurricane Katrina, the federal government created the Road Home program to help homeowners rebuild. But the program had a devastating design flaw: It calculated aid based on prehurricane home value or repair costs, whichever was less.

That meant low-income homeowners, who already lived in areas with lower property values due to the history of discrimination, received less money. A family whose US$50,000 home needed $80,000 in repairs would receive only $50,000, while a family whose $200,000 home needed the same $80,000 in repairs would receive the full repair amount. The average gap between damage estimates and rebuilding funds was $36,000.
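The "lesser of" rule described above can be sketched as a short calculation. This is a hypothetical illustration of the payout logic, not the Road Home program's actual implementation, which involved additional adjustments:

```python
def road_home_grant(prestorm_value, repair_cost):
    """Illustrative Road Home-style award: the lesser of the
    pre-hurricane home value and the estimated repair cost."""
    return min(prestorm_value, repair_cost)

# The worked example from the text: identical $80,000 in damage.
low_value_home = road_home_grant(50_000, 80_000)    # capped at home value
high_value_home = road_home_grant(200_000, 80_000)  # full repair cost
print(low_value_home)   # 50000
print(high_value_home)  # 80000
```

Under this rule, the lower-valued home is left with a $30,000 repair shortfall for the same damage, which is how the formula systematically shortchanged owners in neighborhoods where discrimination had depressed property values.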

As a result, people in poor and Black neighborhoods had to cover about 30% of rebuilding costs after all aid, while those in wealthy areas faced only about 20%. Families in the poorest areas had to pay thousands of dollars out-of-pocket to complete repairs, even after government help and insurance, and that slowed the recovery process.

A house missing its walls, with a torn-down fence.
Homes damaged by Hurricane Katrina still sat vacant in New Orleans’ Lower Ninth Ward in 2013.
AP Photo/Patrick Semansky

This pattern isn’t unique to New Orleans. A study examining data from Hurricane Andrew in Miami (1992) and Hurricane Ike in Galveston (2008) found that housing recovery was consistently slow and unequal in low-income and minority neighborhoods. Lower-income families are less likely to have adequate insurance or savings for quick rebuilding. Low-value homes with extensive damage still had not regained their prestorm value four years later, while higher-value homes sustaining even moderate damage gained value.

Ten years after Katrina, while 70% of white residents felt New Orleans had recovered, only 44% of Black residents could look around their neighborhood and say the same.

Community-led solutions for climate resilience

Katrina’s lessons in the inequality of disasters are important for communities today as climate change brings more extreme weather.

Federal Emergency Management Agency denial rates for disaster aid remain high due to bureaucratic obstacles such as complex application processes that bounce survivors among multiple agencies, often resulting in denials and delays of critical funds. These are the same systemic barriers that contributed to Black communities' slower recovery after Hurricane Katrina. FEMA’s own advisory council reported that institutional assistance policies tend to enrich wealthier, predominantly white areas, while underserving low-income and minority communities throughout all stages of disaster response.

A 2021 photo showing the low-lying neighborhood and the canal just across a flood wall.
Homes were rebuilt along the Industrial Canal, shown here in 2021, where a levee break flooded the Lower Ninth Ward during Hurricane Katrina.
Patrick T. Fallon/AFP via Getty Images

The lessons from New Orleans also point to ways communities can build disaster resilience across the entire population. In particular, as cities plan protective measures — elevating homes, buyout programs and flood-proofing assistance — Hurricane Katrina showed the need to pay attention to social vulnerabilities and focus aid where people need the most assistance.

The choice America faces

In our view, one of Katrina’s most important lessons is about social injustice. The disproportionate suffering in Black communities wasn’t a natural disaster but a predictable result of policies concentrating risk in marginalized neighborhoods.

In many American cities, policies still leave some communities facing a greater risk of disaster damage. To protect residents, cities can start by investing in vulnerable areas, empowering a community-led recovery and ensuring race, income or ZIP code never again determine who receives help with the recovery.

Natural disasters don’t have to become human catastrophes. Confronting the policies and other factors that leave some groups at greater risk can avoid a repeat of the devastation the world saw in Katrina.

The Conversation

Ivis García receives funding from National Science Foundation, U.S. Department of Housing and Urban Development, Ford Foundation, National Academy of Sciences, Fundación Comunitaria de Puerto Rico, UNIDOS, Texas Appleseed, Natural Hazard Center, Chicago Community Trust, American Planning Association, and Salt Lake City Corporation.

Deidra D Davis receives funding from the National Academy of Sciences, Engineering, and Medicine. The views expressed are those of Deidra D Davis and do not necessarily represent those of the National Academy of Sciences, Engineering, and Medicine.

Walter Gillis Peacock receives funding from the National Science Foundation to conduct research related to issues discussed in this article. The opinions expressed are those of Walter Gillis Peacock and do not necessarily reflect those of the National Science Foundation.

ref. One of Hurricane Katrina’s most important lessons isn’t about storm preparations – it’s about injustice – https://theconversation.com/one-of-hurricane-katrinas-most-important-lessons-isnt-about-storm-preparations-its-about-injustice-261936

Data centers consume massive amounts of water – companies rarely tell the public exactly how much

Source: The Conversation – USA (2) – By Peyton McCauley, Water Policy Specialist, Sea Grant UW Water Science-Policy Fellow, University of Wisconsin-Milwaukee

The Columbia River running through The Dalles, Oregon, supplies water to cool data centers. AP Photo/Andrew Selsky

As demand for artificial intelligence technology boosts construction and proposed construction of data centers around the world, those computers require not just electricity and land, but also a significant amount of water. Data centers use water directly, with cooling water pumped through pipes in and around the computer equipment. They also use water indirectly, through the water required to produce the electricity to power the facility. The amount of water used to produce electricity increases dramatically when the source is fossil fuels compared with solar or wind.

A 2024 report from the Lawrence Berkeley National Laboratory estimated that in 2023, U.S. data centers consumed 17 billion gallons (64 billion liters) of water, and projects that by 2028, those figures could double – or even quadruple. The same report estimated that in 2023, U.S. data centers consumed an additional 211 billion gallons (800 billion liters) of water indirectly through the electricity that powers them. But that is just an estimate in a fast-changing industry.

We are researchers in water law and policy based on the shores of Lake Michigan. Technology companies are eyeing the Great Lakes region to host data centers, including one proposed for Port Washington, Wisconsin, which could be one of the largest in the country. The Great Lakes region offers a relatively cool climate and an abundance of water, making the region an attractive location for hot and thirsty data centers.

The Great Lakes are an important binational resource that more than 40 million people depend on for their drinking water and that supports a US$6 trillion regional economy. Data centers compete with these existing uses and may deplete local groundwater aquifers.

Our analysis of public records, government documents and sustainability reports compiled by top data center companies has found that technology companies don’t always reveal how much water their data centers use. In a forthcoming Rutgers Computer and Technology Law Journal article, we walk through our methods and findings using these resources to uncover the water demands of data centers.

In general, corporate sustainability reports offered the most access and detail – including that in 2024, one data center in Iowa consumed 1 billion gallons (3.8 billion liters) of water – enough to supply all of Iowa’s residential water for five days.

The computer processors in data centers generate lots of heat while doing their work.

How do data centers use water?

The servers and routers in data centers work hard and generate a lot of heat. To cool them down, data centers use large amounts of water – in some cases over 25% of local community water supplies. In 2023, Google reported consuming over 6 billion gallons of water (nearly 23 billion liters) to cool all its data centers.

In some data centers, the water is used up in the cooling process. In an evaporative cooling system, pumps push cold water through pipes in the data center. The cold water absorbs the heat produced by the data center servers, turning into steam that is vented out of the facility. This system requires a constant supply of cold water.

In closed-loop cooling systems, the cooling process is similar, but rather than venting steam to the air, air-cooled chillers cool down the hot water. The cooled water is then recirculated to cool the facility again. This does not require a constant addition of large volumes of water, but it uses far more energy to run the chillers. The actual numbers showing those differences, which likely vary by facility, are not publicly available.

One key way to evaluate water use is the amount of water that is considered “consumed,” meaning it is withdrawn from the local water supply and used up – for instance, evaporated as steam – and not returned to the ecosystem.

For information, we first looked to government data, such as that kept by municipal water systems, but the process of getting all the necessary data can be onerous and time-consuming, with some denying data access due to confidentiality concerns. So we turned to other sources to uncover data center water use.

Sustainability reports provide insight

Many companies, especially those that prioritize sustainability, release publicly available reports about their environmental and sustainability practices, including water use. We focused on six top tech companies with data centers: Amazon, Google, Microsoft, Meta, Digital Realty and Equinix. Our findings revealed significant variability in both how much water the companies’ data centers used, and how much specific information the companies’ reports actually provided.

Sustainability reports offer a valuable glimpse into data center water use. But because the reports are voluntary, different companies report different statistics in ways that make them hard to combine or compare. Importantly, these disclosures do not consistently include the indirect water consumption from their electricity use, which the Lawrence Berkeley Lab estimated was 12 times greater than the direct use for cooling in 2023. Our estimates highlighting specific water consumption reports are all related to cooling.

Amazon releases annual sustainability reports, but those documents do not disclose how much water the company uses. Microsoft provides data on its water demands for its overall operations, but does not break down water use for its data centers. Meta reports data center water use, but only as a companywide aggregate figure. Google provides individual figures for each data center.

Of the six companies we analyzed, the five that do disclose water usage show a general trend of increasing direct water use each year. Researchers attribute this trend to data centers.

A closer look at Google and Meta

To take a deeper look, we focused on Google and Meta, as they provide some of the most detailed reports of data center water use.

Data centers make up significant proportions of both companies’ water use. In 2023, Meta consumed 813 million gallons of water globally (3.1 billion liters) – 95% of which, 776 million gallons (2.9 billion liters), was used by data centers.

For Google, the picture is similar, but with higher numbers. In 2023, Google operations worldwide consumed 6.4 billion gallons of water (24.2 billion liters), with 95%, 6.1 billion gallons (23.1 billion liters), used by data centers.

Google reports that in 2024, the company’s data center in Council Bluffs, Iowa, consumed 1 billion gallons of water (3.8 billion liters), the most of any of its data centers.

The Google data center using the least that year was in Pflugerville, Texas, which consumed 10,000 gallons (38,000 liters) – about as much as one Texas home would use in two months. That data center is air-cooled, not water-cooled, and consumes significantly less water than the 1.5 million gallons (5.7 million liters) at an air-cooled Google data center in Storey County, Nevada. Because Google’s disclosures do not pair water consumption data with the size of centers, technology used or indirect water consumption from power, these are simply partial views, with the big picture obscured.

Given society’s growing interest in AI, the data center industry will likely continue its rapid expansion. But without a consistent and transparent way to track water consumption over time, the public and government officials will be making decisions about locations, regulations and sustainability without complete information on how these massive companies’ hot and thirsty buildings will affect their communities and their environments.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Data centers consume massive amounts of water – companies rarely tell the public exactly how much – https://theconversation.com/data-centers-consume-massive-amounts-of-water-companies-rarely-tell-the-public-exactly-how-much-262901

Misspelled names may give brands a Lyft – if the spelling isn’t too weird

Source: The Conversation – USA (2) – By Annika Abell, Assistant Professor of Marketing, University of Tennessee

Misspelled brand names can be catchy – but don’t always connect with consumers. AP Photo/David Zalubowski

Consumers don’t mind when companies use misspelled words – think Lyft for “lift” or Froot Loops for “fruit loops” – as their brand names, as long as the alterations aren’t too extreme and the misspelling makes sense.

Those are the main findings of a new peer-reviewed paper I published with fellow marketing scholar Leah Warfield Smith. This builds on previous work that found that using misspelled brand names usually backfires.

Misspelled brand names like Kool-Aid, Reddi-wip and Crumbl seem to be everywhere. They are especially common in the names of smartphone apps and in certain industries, like fashion. Companies often do this to stand out or perhaps so they can use the misspelled word as their domain name.

Despite their popularity, we know little about how consumers respond to different types of misspelled names, especially when those names deviate significantly from correct or standard spelling. Our study aims to fill this gap.

In a series of six experiments, we tested consumer reactions to fictional and several real brand names with varying levels and types of misspellings.

Mild misspellings, such as combining two real words as in SoftSoap, were perceived just as positively as correctly spelled names. When consumers saw different levels of misspellings – consider the brand names Eazy Clean, Eazy Klean and Eezy Kleen – they reacted more negatively the further the name deviated from the correct spelling “easy clean.”

However, we also found that relevance matters. A misspelled name that aligns with the product or brand identity can still be successful. For example, consumers responded just as well to Bloo Fog – a playful nod to Oolong tea – as to the correctly spelled “blue fog.” In contrast, Blewe Fog – a misspelling without a linguistic connection to the product’s name – performed worse.

Other experiments showed similar, more positive effects when the name related to the owner’s identity, for example Sintymental Moments by Joe Sinty, or a visual cue as in Toadal Fitness with a toad logo. In each case, the misspelling was more acceptable when it made conceptual sense to consumers.

Why it matters

The findings suggest that two main concepts play a role in how consumers process brand names: linguistic fluency – or how easily a name is pronounced and read – and conceptual fluency – how easily the meaning of a name is understood or how well it aligns with the product.

Linguistic fluency decreases with more severe misspellings, resulting in more negative responses. But if the misspelling adds some kind of meaning – related to the product, person or logo – these adverse effects can be easily mitigated.

For marketers and brand strategists, the takeaway is that misspellings can work, but only when they make sense. Naming a tea brand Bloo Fog might succeed where Blewe Fog fails, but only if consumers understand the name-product connection. Understanding when a misspelling helps or hurts a brand is crucial to crafting the right brand name; ideally, one that can be perceived positively while reaping the benefits of misspellings, such as increased memorability, uniqueness or trademark acquisition.

What still isn’t known

While this research uncovers how consumers react to different types of misspellings, it leaves open important questions about long-term effects. For example, do consumers still notice the misspelling in a 60-year-old brand name like Kwik Trip, a convenience store chain in the Midwest?

We also do not know how the effects of misspellings play out across different languages, cultures or product categories.

The Research Brief is a short take on interesting academic work.

The Conversation

Annika Abell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Misspelled names may give brands a Lyft – if the spelling isn’t too weird – https://theconversation.com/misspelled-names-may-give-brands-a-lyft-if-the-spelling-isnt-too-weird-256792

Reverse discrimination? In spite of the MAGA bluster over DEI, data shows white Americans are still advantaged

Source: The Conversation – USA (2) – By Fred L. Pincus, Emeritus Professor of Sociology, University of Maryland, Baltimore County

There’s no evidence of widespread racial discrimination against white people. Sebastian Gorczowski/iStock/Getty Images Plus

Two big assumptions underlie President Donald Trump’s attack on diversity, equity and inclusion policies. The first is that discrimination against people of color is a thing of the past. The second is that DEI policies and practices discriminate against white people – especially white men – in what’s sometimes called “reverse discrimination.”

I’m a sociologist who’s spent decades studying race and inequality, and when I read the documents and statements coming out of the Trump White House, these assumptions jump out at me again and again – usually implicitly, but always there.

The problem is that the evidence doesn’t back these assumptions up.

For one thing, if discrimination against white Americans were widespread, you might expect large numbers to report being treated unfairly. But polling data shows otherwise. A 2025 Pew survey found that 70% of white Americans think Black people face “some” or “a lot” of discrimination in general, and roughly two-thirds say the same of Asian and Hispanic people. Meanwhile, only 45% of white Americans believe that white people in general experience that degree of discrimination.

In other words, white Americans believe that people of color, as a group, face more discrimination than white people do. People of color agree – and so do Americans overall.

In a second national study, using data collected in 2023, Americans were asked if they had personally experienced discrimination within the past year. Thirty-eight percent of white people said they had, compared to 54% of Black Americans, 50% of Latinos and 42% of Asian Americans. In other words, white Americans are much less likely to say that they’ve been discriminated against than people of color.

The ‘hard’ numbers show persistent privilege

These statistics are sometimes called “soft” data because they reflect people’s perceptions rather than verified incidents. To broaden the picture, it’s worth looking at “hard” data on measures like income, education and employment outcomes. These indicators also suggest that white Americans as a group are advantaged relative to people of color.

For example, federal agencies have documented racial disparities in income for decades, with white Americans, as a group, generally outearning Black and Latino Americans. This is true even when you control for education. When the Census Bureau looked at median annual earnings for Americans between 25 and 64 with at least a bachelor’s degree, it found that Black Americans received only 81% of what comparably educated white Americans earned, while Latinos earned only 80%. Asian Americans, on the other hand, earned 119% of what white people earned.

These gaps persist even when you hold college major constant. In the highest-paying major, electrical engineering, Black Americans earned only 71% of what white people did, while Latinos earned just 73%. Asian Americans, in contrast, earned 104% of what white people earned. In the lowest-paid major, family and consumer sciences, African Americans earned 97% of what white people did, and Latinos earned 94%. Asian Americans earned 117% of what white people earned. The same general pattern of white income advantage existed in all majors with two exceptions: Black people earned more in elementary education and nursing.

Remember, this is comparing individuals with a bachelor’s degree or higher to people with the same college major. Again, white Americans are still advantaged in most career paths over Black Americans and Latinos.

Disparities persist in the job market

Unemployment data show similar patterns. The July 2025 figures for workers at all education levels show that Black people were 1.9 times more likely to be unemployed than white Americans. Latinos were 1.4 times more likely to be unemployed, and Asian Americans, 1.1 times.

This same white advantage still occurs when looking only at workers who have earned a bachelor’s degree or more. Black Americans who have earned bachelor’s degrees or higher were 1.3 times more likely to be unemployed than similarly educated white Americans as of 2021, the last year for which data is available. Latinos with college degrees were 1.4 times more likely to be unemployed than similar white Americans. The white advantage was even higher for those with only a high school degree or less. Unfortunately, data for Asian Americans weren’t available.

In another study, researchers sent 80,000 fake resumes in response to 10,000 job listings posted by 97 of the largest employers in the country. The credentials on the resumes were essentially the same, but the names signaled race: Some had Black-sounding names, like Lakisha or Leroy, while others had more “white-sounding” names like Todd or Allison. This method is known as an “audit study.”

This research, which was conducted between 2019 and 2021, found that employers were 9.5% more likely to contact the Todds and Allisons than the Lakishas and Leroys within 30 days of receiving a resume. Of the 28 audit studies that have been conducted since 1989, each one showed that applicants with Black- or Latino-sounding names were less likely to be contacted than those with white-sounding or racially neutral names.

Finally, a 2025 study analyzed 600,000 letters of recommendation for college-bound students who used the Common App form during the 2018-19 and 2019-20 academic years. Only students who applied to at least one selective college were included. The study found that letters for Black and Latino students were shorter and said less about their intellectual promise.

Similarly, letters in support of first-generation students – that is, whose parents hadn’t graduated from a four-year college, and who are disproportionately likely to be Black and Latino – had fewer sentences dedicated to their scientific, athletic and artistic abilities, or their overall academic potential.

These and other studies don’t provide evidence of massive anti-white discrimination. Although scattered cases of white people being discriminated against undoubtedly exist, the data suggest that white people are still advantaged relative to non-Asian people of color. White Americans may be less advantaged than they were, but they’re still advantaged.

While it’s true that many working-class white Americans are having a tough time in the current economy, it’s not because of their race. It’s because of their class. It’s because of automation and overseas outsourcing taking away good jobs. It’s because of high health care costs and cuts to the safety net.

In other words, while many working-class white people are struggling now, there’s little evidence race is the problem.

The Conversation

Fred L. Pincus does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Reverse discrimination? In spite of the MAGA bluster over DEI, data shows white Americans are still advantaged – https://theconversation.com/reverse-discrimination-in-spite-of-the-maga-bluster-over-dei-data-shows-white-americans-are-still-advantaged-262394

Massacre de l’armée coloniale au Niger : derrière le mea culpa de Macron, la continuité d’un récit historique biaisé

Source: The Conversation – in French – By Frank Gerits, Research Fellow at the University of the Free State, South Africa and Assistant Professor in the History of International Relations, Utrecht University

Le 19 juin 2025, le représentant permanent de la France auprès des Nations unies a reconnu que Paris était ouvert au dialogue avec le Niger et prêt à collaborer avec des chercheurs spécialisés afin de mettre en place une coopération patrimoniale permettant la restitution au Niger des objets culturels volés.

Comme le concluait déjà le rapport Sarr-Savoy (du nom de ses auteurs Bénédicte Savoy et Felwine Sarr) de 2018, de nombreux objets volés ont probablement intégré des collections privées sans provenance, ou ont été considérés comme des objets ethnographiques génériques plutôt que comme des pièces de musée de grande valeur. Il est donc nécessaire de mener une enquête sur leur provenance avant même d’envisager leur restitution.

Cette lettre faisait suite à une plainte déposée par quatre communautés nigériennes représentant les descendants des victimes de la mission Voulet-Chanoine de 1899. Cette mission avait été mise en place pour unifier l’Afrique occidentale française face à une concurrence impériale accrue avec la Grande-Bretagne.

Le commandement était assuré par le capitaine Paul Voulet et son adjudant, le lieutenant Julien Chanoine, deux militaires déjà réputés pour leur violence. Mal préparée et dotée d’instructions vagues de la part de Paris, la mission a rapidement sombré dans le chaos, les soldats mourant de dysenterie tout en pillant et massacrant des milliers de personnes.

Les grandes villes du Niger actuel, dirigées par des souverains locaux à la stature quasi mythique, ont été réduites en cendres : Lougou, où la cheffe Sarraounia Mangou a résisté à l’assaut, et Zinder, la capitale du sultanat de Damagaram. La domination coloniale française a ainsi détruit les centres du pouvoir culturel et diplomatique du Niger actuel.

À l’époque, la nouvelle des atrocités est même parvenue jusqu’au ministère des Colonies à Paris, qui a ordonné au gouverneur de Tombouctou, Jean-François Klobb, de prendre le commandement de l’expédition. Après l’assassinat de Klobb par Voulet lors d’une confrontation, ce dernier déclara qu’il n’était plus français et qu’il voulait devenir chef africain. Cette décision provoqua une mutinerie parmi ses propres soldats qui, après un nouveau chaos, l’assassinèrent à leur tour.

Reparations

Calls for reparations have grown louder in Niger and across Africa since the African Union declared 2025 the Year of Reparations. The AU has also encouraged a broader definition of what constitutes colonial injustice, one that includes climate justice and economic justice.

A series of coups in West Africa has also fuelled anti-French sentiment. It has accelerated Emmanuel Macron's policy of decolonisation, in which France acknowledges colonial atrocities. At the same time, Paris maintains the narrative that it did good work.

During a visit to Algeria in 2022, Macron expressed regret for the atrocities committed during the Algerian war of independence. He also visited the cemetery where French pieds-noirs – Europeans who settled in Algeria during the colonial period (1830-1962) – are buried, underscoring the idea that both sides were victims in their own way.

In Cameroon, following a report by a Franco-Cameroonian historical commission chaired by Karine Ramondy, Macron acknowledged that France had waged a war that continued even after independence. Even if the Cameroonian filmmaker Jean Pierre Bekolo is right to point out that the report's conclusions are nothing new, since they confirm older ideas drawn from works such as Kamerun!, the official recognition of neocolonialism is new and somewhat surprising. Macron's acknowledgement of what was euphemistically called "pacification" was possible because it discredits Paul Biya's predecessor and political rival, Ahmadou Ahidjo. The official recognition of what historians have long known thus comes at the expense of historical truths within Cameroon itself, where memory politics has sought to serve the ruling party.

In Benin, the royal treasures of Abomey have been returned. But by expressing regret only for specific historical cases, France avoids acknowledging the structural nature of colonial violence, and therefore its full responsibility.

In all these cases, the French authorities maintain the notion of a civilising mission: if people in the past were misguided, they had good intentions, and only certain individuals occasionally abused their power.

Conflict between national memories

This individualised conception of colonialism is particularly pronounced in the case of Niger. Rumours about the diabolical nature of Voulet and Chanoine were already circulating in Paris at the very time of the mission. The French press railed against Voulet's supposed madness, claiming he had lost his mind in the West African heat.

These stories exploited the popular cliché of the mad imperialist adventurer. The British writer Joseph Conrad's novel Heart of Darkness, published in 1899, tells the story of an ivory trader named Kurtz who is driven mad by his time in the Congo Free State. In 1942, the French writer Albert Camus's existentialist novel L'Étranger (The Stranger) depicted the life of a man devoid of feeling who kills an Algerian.

It is hardly surprising, then, that in 1976 the French writer Jacques-Francis Rolland portrayed Paul Voulet as a sadist in his book Le Grand Capitaine. The title of Serge Moati's 2005 film based on the novel, Capitaines des ténèbres, even echoes Conrad's book. In France, the atrocities are thus seen not as the product of a colonial system that encouraged such behaviour, but as the result of an individual psychological breakdown.

In Niger, however, the mission is seen as a turning point in the history of colonial exploitation. Hosseini Tahirou Amadou, a history and geography teacher in Dioundiou, launched his campaign in 2014 for an apology and reparations from the French state for the violence committed in Niger. But he drew inspiration from a body of cultural work exploring the long-term effects of colonial violence and ongoing cultural destruction.

In 1980, the Nigerien writer Abdoulaye Mamani published Sarraounia, a historical novel that tells the story of a powerful Hausa queen who resists the French colonial invasion of Niger. Blending oral tradition and political critique, it portrays Sarraounia as a symbol of indigenous strength and defiance. A cornerstone of Nigerien and anticolonial literature, the novel was adapted for the screen in 1986 by Med Hondo.

France's strategy of favouring dialogue without accepting responsibility is therefore not so much a deliberate act of evasion as the result of a failure of understanding. That Paris is acting on this issue at all is a consequence of the decision by Niger's ruling junta to announce its intention to nationalise the Société des mines de l'Aïr (Somair), a subsidiary of the French uranium company Orano.

A new alliance with West Africa

Over the past decade, Niger supplied France with 20% of its uranium. But by 2022, Niger had become a secondary supplier, accounting for only 2% of global production.

Nationalisation is nevertheless another symbol of France's declining influence in Africa. Since July 2023, General Abdourahamane Tchiani has worked to push out the French military while threatening French companies with nationalisation, in order to combat what he calls neocolonial influence.

The discussion around Voulet-Chanoine should therefore be understood as a way, however cynical, of keeping the door to Niger open, especially since the country has also withdrawn from the Organisation internationale de la Francophonie, one of the symbols of French cultural power. This dialogue is part of Macron's broader strategy of expressing remorse for the colonial past. It aims to build new alliances in West Africa to offset lost influence in a region shaken by coups.

It seems unlikely that relations between the two countries can improve without acknowledging that the raids in Niger were part of a deliberate imperialist strategy. As Nigerien activists stated at a 2021 seminar attended by Fabian Salvioli, the UN special rapporteur on truth, justice and reparation, the starting point for reconciliation should be a public apology and a thorough investigation by the French authorities.

There is no monument or commemoration for the African lives that were lost. Yet Voulet's grave is still maintained at Maïjirgui, in Niger, and the monument to Klobb still stands in Tessaoua.

A material expression of regret, in the form of a monument to the African lives lost, could in this respect be a good starting point.

The Conversation

Frank Gerits does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Massacre de l’armée coloniale au Niger : derrière le mea culpa de Macron, la continuité d’un récit historique biaisé – https://theconversation.com/massacre-de-larmee-coloniale-au-niger-derriere-le-mea-culpa-de-macron-la-continuite-dun-recit-historique-biaise-262297

The scientific mechanisms of starvation: here is what happens to your body when it is deprived of food

Source: The Conversation – in French – By Ola Anabtawi, Assistant Professor – Department of Nutrition and Food Technology, An-Najah National University

Hunger takes different forms and progresses in stages. It begins with food insecurity, when people are forced to adapt by cutting the number of meals. As food becomes scarce, the body draws on its own reserves. The shift from hunger to starvation begins with a drop in energy levels; the body then burns its fat before attacking its muscles. In the terminal phase, the vital organs shut down.

From undernourishment to acute malnutrition and then famine, the body eventually can no longer sustain life. In Gaza today, thousands of children under five and pregnant or breastfeeding women are suffering from acute malnutrition. In Sudan, conflict and restrictions on humanitarian access have pushed millions of people to the brink of famine. Famine alerts are growing more urgent by the day.

We asked nutritionists Ola Anabtawi and Berta Valente to explain the scientific mechanisms of starvation and what happens in the body when it is deprived of food.

What is the minimum amount of nutrients the body needs to survive?

Survival takes more than safe drinking water and physical security. Access to food that covers daily needs for energy, macronutrients and micronutrients is essential to stay healthy, support recovery and prevent malnutrition.

According to the World Health Organization (WHO), adults have different energy needs depending on their age, sex and level of physical activity. A kilocalorie (kcal) is a unit of energy. In nutrition, it indicates how much energy a person gets from food, or how much energy the body needs to function. Technically, one kilocalorie is the energy needed to raise the temperature of one litre of water by one degree Celsius. Our bodies use this energy to breathe, digest, maintain body temperature and, in children, grow.

Total energy needs come from three sources:

  • resting energy expenditure: the energy used at rest to maintain vital functions (breathing, blood circulation);

  • physical activity: varies in emergencies depending on factors such as displacement, caregiving or survival tasks;

  • thermogenesis: the energy used to digest and process food.

Resting energy expenditure usually accounts for the largest share of energy needs, especially when physical activity is limited. Other factors such as age, sex, body size, health status, pregnancy or a cold environment also influence these needs.

Energy needs vary across the lifespan. Infants need roughly 95 kcal to 108 kcal per kilogram of body weight per day during the first six months, and between 84 kcal and 98 kcal per kilogram from six to twelve months. For children under ten, energy needs are based on normal growth patterns, with no distinction between boys and girls.

A two-year-old needs about 1,000 to 1,200 kcal per day, a five-year-old 1,300 to 1,500 kcal, and a ten-year-old 1,800 to 2,000 kcal. From age ten, needs begin to differ between girls and boys because of variations in growth and activity, and intakes are adjusted for weight, activity and growth rate.
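The per-kilogram figures for infants (95-108 kcal/kg per day under six months, 84-98 kcal/kg from six to twelve months) translate directly into daily totals. A minimal sketch in Python, where the function name and structure are illustrative rather than anything from the article:

```python
# Estimate an infant's daily energy needs from the per-kilogram ranges
# cited above: 95-108 kcal/kg (under 6 months), 84-98 kcal/kg (6-12 months).

def infant_energy_range(weight_kg, age_months):
    """Return (min_kcal, max_kcal) per day for an infant under 12 months."""
    if age_months < 6:
        per_kg = (95, 108)
    elif age_months < 12:
        per_kg = (84, 98)
    else:
        raise ValueError("use child reference values from 12 months on")
    return (weight_kg * per_kg[0], weight_kg * per_kg[1])

# e.g. a 7 kg infant aged 8 months needs roughly 588-686 kcal per day
print(infant_energy_range(7, 8))
```

The hard age cut-offs are a simplification; in practice intakes are adjusted for each child's weight, activity and growth rate, as the article notes.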

For adults with light to moderate activity, average daily needs are about 2,900 kcal for men aged 19 to 50 and 2,200 kcal for women of the same age. These values can vary by plus or minus 20% depending on metabolism and activity. After 50, needs decline slightly, to about 2,300 kcal for men and 1,900 kcal for women.

In humanitarian emergencies, food aid must provide the widely accepted minimum energy intake of 2,100 kcal per person per day. This level is designed to meet basic physiological needs and prevent malnutrition when food supplies are limited.

This energy should come from a balanced mix of macronutrients, with carbohydrates providing 50-60% (such as rice or bread), proteins 10-35% (such as beans or lean meat) and fats 20-35% (for example, cooking oil or nuts). Fat requirements are higher for young children (30-40%) and for pregnant and breastfeeding women (at least 20%).
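Those percentage bands can be turned into gram ranges for the 2,100 kcal emergency ration. The sketch below uses the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat), which are a conventional approximation assumed here, not figures from the article:

```python
# Convert the 2,100 kcal/day emergency ration into gram ranges per
# macronutrient, using the article's percentage bands and the standard
# Atwater energy factors (an assumption, not stated in the text).

RATION_KCAL = 2100
ATWATER = {"carbohydrate": 4, "protein": 4, "fat": 9}  # kcal per gram
BANDS = {  # share of total energy, from the article
    "carbohydrate": (0.50, 0.60),
    "protein": (0.10, 0.35),
    "fat": (0.20, 0.35),
}

def gram_ranges(total_kcal):
    """Return {nutrient: (min_g, max_g)} for a daily energy target."""
    return {
        nutrient: (round(total_kcal * lo / ATWATER[nutrient]),
                   round(total_kcal * hi / ATWATER[nutrient]))
        for nutrient, (lo, hi) in BANDS.items()
    }

for nutrient, (lo_g, hi_g) in gram_ranges(RATION_KCAL).items():
    print(f"{nutrient}: {lo_g}-{hi_g} g/day")
```

For the 2,100 kcal ration this gives roughly 260-315 g of carbohydrate, 50-185 g of protein and 47-82 g of fat per day, which shows how wide the acceptable bands really are.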

Beyond energy, the body needs vitamins and minerals such as iron, vitamin A, iodine and zinc, which are essential for immune function, growth and brain development. Iron is found in foods such as red meat, legumes and fortified cereals. Vitamin A comes from carrots, sweet potatoes and dark leafy greens. Iodine is usually obtained from iodised salt and seafood. Zinc is present in meat, nuts and whole grains.

When food systems collapse, this balance breaks down.

What happens physically when your body is starving?

When deprived of food, the body responds in three broad, overlapping stages. Each reflects the body's efforts to survive without food. But these adaptations come at a high physiological cost.

In the first phase, within 48 hours of food intake stopping, the body uses glycogen stored in the liver to keep blood sugar stable. This is glycogenolysis. But that reserve is quickly exhausted.

The body then switches to gluconeogenesis, making glucose from other sources: amino acids from muscle, fats and lactate. This process feeds the vital organs but gradually destroys muscle mass and increases nitrogen loss, particularly from skeletal muscle.

From the third day, ketogenesis becomes the dominant survival mode. The liver converts fats into ketone bodies, an alternative energy source for the brain and organs. This shift helps preserve muscle tissue, but it signals a deeper metabolic crisis.

Hormonal changes, notably decreases in insulin, thyroid hormone (T3) and nervous system activity, slow the metabolism to conserve energy.

Once fat stores are exhausted, the body breaks down its own proteins to survive. Muscles waste away, immunity collapses and the risk of fatal infections rises.

The immune system weakens, raising the risk of serious infections such as pneumonia. Death often comes after 60 to 70 days without food.

As the body enters prolonged nutrient deprivation, the visible and invisible signs of starvation intensify. Physically, there is extreme weight loss, muscle wasting, profound fatigue, a slowed heart rate, dry skin, hair loss and wounds that heal poorly. The immune system collapses, and pneumonia is a common cause of death.

Psychologically, starvation causes deep distress. Those affected report apathy, irritability, anxiety and a constant preoccupation with food. Cognitive abilities decline and emotional regulation deteriorates, sometimes leading to depression or withdrawal.

In children, starvation has long-term effects: stunted growth and brain damage that is sometimes irreversible.

Hunger also destroys the social fabric. Families are worn down and communities fall apart. In crises such as those in Gaza and Sudan, famine compounds the trauma of violence and displacement, leading to a total collapse of social and biological resilience.

How can this cycle be broken?

After a period of starvation, the body is in a fragile metabolic state. Suddenly reintroducing food, especially carbohydrates, triggers an insulin spike and a rapid shift of electrolytes such as phosphate, potassium and magnesium into the cells. This can overwhelm the body and lead to what is known as refeeding syndrome, which can cause serious complications such as heart failure and respiratory distress, and even death if it is not managed carefully.

Standard treatment begins with a therapeutic milk called F-75, specially formulated to stabilise patients during the initial phase of managing severe acute malnutrition. It is followed by ready-to-use therapeutic foods, a paste or biscuit often based on fortified peanut paste. Within four to eight weeks, a severely malnourished child can return to a normal state. Rehydration salts and vitamin and mineral supplements are also added.

Aid must be delivered safely. Airdrops are not enough. Survival requires sustained, coordinated efforts to restore food systems, protect civilians and uphold humanitarian law. Without this, the cycles of famine and suffering risk repeating themselves.

When food aid is inadequate in quality or quantity, or when safe drinking water is unavailable, malnutrition worsens rapidly.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Mécanismes scientifiques de la famine : voici ce qui arrive à votre corps lorsqu’il est privé de nourriture – https://theconversation.com/mecanismes-scientifiques-de-la-famine-voici-ce-qui-arrive-a-votre-corps-lorsquil-est-prive-de-nourriture-263207

Teenagers are choosing to study Stem subjects – it’s a sign of the times

Source: The Conversation – UK – By Mike Watts, Professor of Education, Brunel University of London

SpeedKingz/Shutterstock

A-level results in 2025 show the increasing popularity of Stem (science, technology, engineering and maths) among students. For students taking three A-levels – the majority – the most popular combination of subjects was biology, chemistry and maths.

The subject with the greatest rise in entries from 2024 is further maths, followed by economics, maths, physics and chemistry. Maths remains the most popular subject, with entries making up 12.7% of all A-level entries.

Conversely, subjects such as French, drama, history and English literature are falling in exam entry numbers.

There is considerable incentive for young people who may be looking beyond school and university to the job market to study Stem. Research has found that Stem undergraduate degrees bring higher financial benefits to people and to the public purse than non-Stem subjects.

Many of the world’s fastest-growing jobs need Stem skills. These include data analysts, AI specialists, renewable energy engineers, app developers, cybersecurity experts and financial technology experts.

Within Stem itself, science alone is a broad church that spans astronomy to zoology and every letter of the alphabet in between. Add to this the many variations of technology, engineering and maths, and the range of subjects and specialisms is enormous.

It might come as no surprise, then, that young people have considerable scope in the possible careers and employment they might follow in life. From accountancy to the environment, medical engineering to computer technology, entomology to volcanology, the possibilities are vast. There is little doubt that such a broad arena is attractive as a field of possible employment.

Group of students in science class
Young people are choosing to study science, technology, engineering and maths.
Rawpixel.com/Shutterstock

What’s more, maths, engineering and the sciences are now vital parts of careers that might have once seemed unrelated. It was once the case that the division between arts and science was seen as unbridgeable: you were firmly on one side or the other. Today this is far less evident.

Artists, in their many manifestations, are almost by default material scientists. Architects, photographers, musicians, video-makers, sound and lighting technicians are (arguably) technical engineers. Landscape gardeners are environmentalists, chefs are food scientists.

Everyday Stem

Stem affects everyday life at all levels. Wearing a smart watch to track our health and fitness, as so many of us do, requires analysis of data, averages and percentages. We need maths skills to navigate our personal finances. Following directions means programming a Satnav.

Young people take their attitudes, advice and directions from a multitude of sources. Concern about the environment may lead teens to consider careers in areas such as ecology or environmental engineering. The ubiquity of social media apps and the tech companies that run them raises awareness of the use of computer science or tech skills.

And leaving aside Instagram, TikTok and other social media, Sir David Attenborough’s TV series Blue Planet prompted a surge of interest in marine ecology and plastic pollution.

Nor are young people immune to social influences more broadly. In more diffuse ways, peers and parents are also influential in shaping career choices, as are science centres, museums, botanical gardens, planetariums, aquariums, environmental centres, city farms and such like.

Then there are teachers and schools. Positive experiences in school Stem prompt further study. There is increasing evidence that individual project work, industrial placements, role-model scientists, school outreach and class visits all play an important part in promoting career intentions and aspirations.

One important factor here is imbuing students with a positive Stem identity. When young people think they are good at Stem subjects and are able to be successful, they are much more likely to choose a Stem career.

The upshot here is that, as the world changes, and changes quickly, so does the realisation that Stem is an essential and invaluable dimension of life, and that career prospects are varied and available at many levels. It seems little wonder that students have come to see this and are enrolling in study and employment in greater numbers than before.

The Conversation

Mike Watts does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Teenagers are choosing to study Stem subjects – it’s a sign of the times – https://theconversation.com/teenagers-are-choosing-to-study-stem-subjects-its-a-sign-of-the-times-263218

Laws are introduced globally to reduce ‘psychological harm’ online – but there’s no clear definition of what it is

Source: The Conversation – UK – By Magda Osman, Professor of Policy Impact, University of Leeds

myboys.me/Shutterstock

Several pieces of legislation across the world are coming into effect this year to tackle harms experienced online, such as the UK’s Online Safety Act and Australia’s Online Safety Act. There are also new standards, regulations, acts and laws related to digital products (including smart devices such as voice assistants, virtual headsets) and services such as social media platforms.

Of the many harms these types of legislation are designed to address, “psychological harm”, “mental distress” or similar terms are commonly included.

Unfortunately, when psychological harm and the like are referred to, there is typically no detailed corresponding definition of them. But while we might have an intuitive understanding of what psychological harm is, we still need precision on what it means in law. This means evidencing what it is, agreeing on how to measure it and designing the best methods to tackle it.

How do we do this? An obvious place to look is psychological science.

The origin story

The earliest reference to psychological harm was made in the 1940s. Back then, it was about the destabilising impact of war propaganda and the use of psychology to subvert people’s understanding of reality. Psychological harm was a broad term, which also applied to those witnessing the horrors of war on the front line.

In the 1950s and 1960s, psychological harm was more associated with advertising tactics that aggressively exploit people’s emotions and insecurities.

Fast forward to the early 2000s, and tools for assessing psychological harm emerged alongside clinical assessments of mental health disorders. For instance, research on abuses experienced online, such as cyberbullying and cyberstalking, documented several psychological impacts. These ranged from withdrawal from social groups, self-doubt and reduced self-esteem to mental health disorders such as depression, anxiety and PTSD.

More terms entered into clinical and forensic lexicons, such as “psychological distress”, “psychological damage” and “psychological injury”. All of them concern some form of mental adverse experience which may happen immediately or as a delayed reaction to traumatic events.

Where we are now

In reviewing the 80 years’ worth of work in clinical, forensic and cognitive psychology, here is what I see as the major issues concerning psychological harm.

There is no agreement as to where to draw the boundary between psychological harm or related concepts and mental disorders outlined in the diagnostic manual called DSM-5-TR (such as depression, anxiety or personality disorders).

There is also no standardised measure of psychological harm, distress or damage. For instance, taking social media alone, different metrics vary even in how they measure negative mental experiences on TikTok, Instagram, Threads, Facebook, YouTube and Weibo.

Upset and depressed girl holding smartphone sitting on college campus floor holding head.
Cyberbullying can cause major harm.
Ground Picture/Shutterstock

Why does this matter? Take for example cyberbullying. There are 17 tools in existence to measure psychological harm. And because the tools don’t all align, we don’t have an accurate picture of rates of psychological harm. Some tools are too narrow in scope – they fail to include severe cases that require psychiatric treatment. And other assessments are too broad – failing to exclude those that are malingering.

What’s more, how we perceive and experience adverse events, which can be very serious and debilitating, varies – it is subjective in nature. Research in clinical and forensic psychology recognises this. These disciplines have spent time establishing standards of assessment to support legal decisions and ensure appropriate punitive measures when people face terrible situations.

Three practical suggestions

For legislation to do the job of guarding against psychological harm from serious adverse experiences online and through digital technologies, forensic psychology offers a path forward.

The first thing is to have an agreed definition. For example, in 2025, the psychologist Amanda Heath proposed a viable general-purpose definition as “a sustained drop in stable functioning, negatively impacting wellbeing”.

This works in the same way as legal requirements for defining physical harm, which need a baseline of functioning to show how an injurious event causes a change to it. The severity of the damage varies based on, say, the length of recovery (such as a week, a month, a year, never). In the same way, the length of recovery from exposure to illegal content online would indicate the severity of the psychological harm experienced.

Second, there should be a process for demonstrating causality between a particular adverse event online and the harm itself. So far, no set criteria for establishing causality appear to be laid out in online safety or harm acts.

Again, legislators could learn from forensic research, which outlines two levels in psychological injury cases that establish causality – psychologically and legally. Forensic psychologists weigh the evidence for the relative ratio of pre-existing and event or post-event factors to determine causality using something called counterfactual analysis.

For example, sometimes people have pre-existing injuries, vulnerabilities or psychopathologies. In such cases there needs to be a baseline, where the evidence shows how an individual’s condition has been exacerbated by an injurious event. Applied to psychological harm experienced online, it would work like this: forensic psychologists would weigh the evidence to determine that, in the absence of seeing the illegal content, an individual would not have experienced PTSD to the extent they are experiencing it currently.

Finally, there need to be standards for the evidence used to show causality between a particular adverse event online and the harm itself, which we don’t yet see in current online safety or harm acts.

In forensic psychology, on the other hand, the legal standards of evidence are high, requiring independent corroboration of psychological impacts. This is where psychiatric assessment tools of PTSD, depression and anxiety are used along with other sources of evidence. Physical outcomes (such as neurological damage) and behavioural outcomes (such as substance abuse, self-harm) are also required.

To serve the public, the law needs to improve. This can’t be achieved without a fleshed out definition of psychological harm, tools of assessment and a framework that traces a causal path from the injurious content to the harm it is considered to have caused.

The Conversation

Magda Osman receives funding from ESRC, EPSRC, Research England, UKRI Innovate UK, Wellcome Trust, Turing Institute, Food Standards Agency, DFG, British Academy, DSTL, Counterterrorism Policing.

ref. Laws are introduced globally to reduce ‘psychological harm’ online – but there’s no clear definition of what it is – https://theconversation.com/laws-are-introduced-globally-to-reduce-psychological-harm-online-but-theres-no-clear-definition-of-what-it-is-263061

Bolivia election: voters bring two decades of leftist politics to an end

Source: The Conversation – UK – By Amalendu Misra, Professor of International Politics, Lancaster University

A seismic political shift has taken place in Bolivia. The country’s leftist Movimiento al Socialismo (Mas) party, which has dominated Bolivian politics for nearly 20 years, was voted out of power in a general election on August 17.

Centre-right Rodrigo Paz Pereira and rightwing Jorge “Tuto” Quiroga, who briefly led the country in 2001, will now compete for the presidency in a run-off vote in October. According to the electoral court’s preliminary tally, Paz Pereira won 32.2% of the vote and Quiroga won 26.9%.

Bolivia’s deeply unpopular president, Luis Arce, who was the Mas presidential candidate in 2020, chose not to run. And his pick, current interior minister Eduardo del Castillo, only won 3.16% of the vote. That is just enough for the party to avoid losing its legal status.

Beyond exhaustion with the rule of Mas, the election was dominated by two critical issues. First was the dire state of the economy. Bolivia is enduring its worst economic crisis in four decades, with US dollars and fuel in short supply. Inflation also jumped from 12% in January to 23% in June. Many Bolivians are struggling to make ends meet.

Second, voters were confronted with a decision: continue Bolivia’s old-style politics of patronage and the status quo, or opt for a new political direction. Bolivians have long endured divisive politics under the old order. The voters were clear: they wanted change.

Speaking after the results were announced, Paz Pereira said: “Bolivia is not only calling for a change of government, it is also calling for a change to the political system. This is the beginning of a great victory and a great transformation.”

End of an era

The results are likely to put an end to the political career of Evo Morales, Bolivia’s three-time former president and the founder of Mas. Morales first entered office in 2006 as part of the “pink tide” of leftist leaders that swept into power across Latin America during the commodities boom of the early 2000s.

He was long seen as the shining light of the Latin American left. His policies lifted millions of people, particularly Bolivian indigenous communities who have suffered centuries of marginalisation and discrimination, out of poverty.

But his critics accuse him of undermining Bolivia’s political and legal institutions by, for example, appointing loyalists to the judiciary and electoral bodies.

In 2016, when a referendum narrowly failed to lift restrictions on presidential term limits, Morales appointed a constitutional court to circumvent the rules and scrap term limits altogether. This gave him the power to run for office indefinitely.

Then, in 2019, widespread protests over a disputed election resulted in Morales losing the support of the military and police. He fled Bolivia, with his supporters saying he was forced out in a coup. Morales has remained highly active in Bolivian politics since then, though this has morphed into a contentious struggle for influence.

Mas has fallen victim to bitter infighting. Arce and Morales, who initially both wanted to be the Mas 2025 presidential candidate, became locked in a fight for control of Mas. And when a constitutional court ruling barred Morales from running, he accused the government of trying to disqualify his candidacy.

Read more: Bolivia slides towards anarchy as two bitter rivals prepare for showdown 2025 election

Morales called for his supporters to boycott the vote. Preliminary results suggest 19.1% of ballots were null and void, an unusually high proportion in Bolivia’s electoral history. This followed months of regular violent protests, which were most intense in rural areas where support for Morales is concentrated.

The election outcome can be seen as representing the resolve of Bolivian citizens to prevent the further erosion of their democratic institutions and put a stop to the politics of populism. While waiting to vote at polling stations across La Paz, several people said they were choosing to vote for el menos peor, the lesser evil.

Paz Pereira was a surprise vote leader. Opinion polls had suggested Samuel Doria Medina, one of Bolivia’s most successful businessmen, was the frontrunner. But support for Paz Pereira seems to have surged after he teamed up with Edman Lara, a TikTok-savvy former police captain who went viral for denouncing corruption within the police.

Quiroga and Doria Medina, who has now announced he will back Paz Pereira in the run-off, used their election campaigns to warn of the need for a fiscal adjustment to save Bolivia from insolvency. This may include the elimination of food and fuel subsidies, which some analysts say risks sparking social unrest.

The road ahead for Bolivia’s incoming president will be hard and bumpy. His first task will be to rein in runaway inflation. Then he will have to put back together a fractured nation marked by racial and ideological divides.

He will also have to work on realigning Bolivia’s relationship with the rest of the world by extricating the country from its strong association with pariah regimes such as Iran, Venezuela and Russia. The new leader has a mountain to climb.

The Conversation

Amalendu Misra does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Bolivia election: voters bring two decades of leftist politics to an end – https://theconversation.com/bolivia-election-voters-bring-two-decades-of-leftist-politics-to-an-end-263238

State debt under Chirac, Sarkozy, Hollande and Macron… what recent history teaches us

Source: The Conversation – in French – By François Langot, Professor of Economics, Deputy Director of the i-MIP (PSE-CEPREMAP), Le Mans Université

Jacques Chirac, Nicolas Sarkozy, François Hollande and then Emmanuel Macron have all faced the problem of public debt and the interest due on it. How do economic conditions (inflation and growth) act on this debt? And who benefited from good or bad conditions?


The debt has grown continuously over the past 30 years. It is the sum of all the public deficits accumulated since the mid-1970s. To compare the amount of this debt with the capacity to finance it, it is expressed as a percentage of gross domestic product (GDP) – the debt-to-GDP ratio – which indicates how many years of wealth creation (GDP) would be needed to repay it.

Under Jacques Chirac, it rose from €663.5 billion to €1,211.4 billion, or from 55.5% to 64.1% of GDP. Under Nicolas Sarkozy, it rose to €1,833.8 billion, or 90.2% of GDP. Under François Hollande, to €2,258.7 billion, or 98.4% of GDP.

At the end of the first quarter of 2025, France’s debt stood at €3,345.4 billion, or 113.9% of GDP. While this indebtedness obviously results from political choices, which determine the country’s revenues and spending, it also depends on economic conditions, which can make managing the debt more or less easy.

The 2008 subprime crisis, the Covid-19 pandemic, the eurozone recession, the dot-com bubble, the boom of the early 2000s: the governments of Jacques Chirac, Nicolas Sarkozy, François Hollande and Emmanuel Macron experienced economic conditions both dark and bright. With what trade-offs? An explanation in charts.

How economic conditions influence the debt

Economic conditions can be analysed through two parameters, both of them rates: the interest rate (r), set by the European Central Bank (ECB), which determines the interest burden to be paid on the debt, and the growth rate (g, for growth), which measures the annual increase in wealth created (GDP). Economic conditions produce two effects:

The first effect is unfavourable to public finances. It occurs when conditions push the interest rate (r) above the growth rate (g), i.e. r − g > 0. In this situation, the additional wealth created by growth is less than the interest due on the debt. De facto, the debt grows even if political choices lead the state’s revenues to cover its spending (excluding interest charges on the debt) – that is, even if the primary deficit is zero.

Read more: “The political crisis is more worrying for the French economy than the budget crisis alone”

The chart (Figure 1) shows that these unfavourable conditions prevailed under Jacques Chirac’s mandates. During that period, the sum of primary deficits – state spending excluding debt charges, minus revenues – was almost stable (blue curve). The debt rose because of high interest rates (r between 2.5% and 5%) combined with moderate growth (g around 4%), which drove this indebtedness upwards (red curve).

The second effect is favourable to public finances. If the real interest rate is below the growth rate (r − g < 0), then the debt (the debt-to-GDP ratio) can be stabilised even if spending, excluding interest charges, exceeds revenues – that is, even if political choices produce a primary deficit. In this case, the annual increase in wealth created (GDP growth) exceeds the interest burden.

The chart (Figure 1) shows that such conditions prevailed under Emmanuel Macron’s mandates. During this period, the sum of primary deficits grew sharply (blue curve): political choices led state spending (excluding interest charges on the debt) to exceed revenues. Yet the debt rose more slowly (red curve), because interest rates remained below growth (less than 2% for the interest rate, r, versus more than 2.5% for growth, g).
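The two effects described above follow from the standard debt-dynamics accounting identity, b′ = b × (1 + r) / (1 + g) + primary deficit, where b is the debt-to-GDP ratio. A minimal sketch, using illustrative rates rather than the article’s Insee data:

```python
def debt_path(b0, r, g, primary_deficit, years):
    """Debt-to-GDP ratio after `years`, all inputs expressed as fractions."""
    b = b0
    for _ in range(years):
        # Interest inflates the debt; growth inflates GDP, the denominator.
        b = b * (1 + r) / (1 + g) + primary_deficit
    return b

# Effect 1: r > g with a zero primary deficit -> the ratio still drifts up.
print(round(debt_path(0.60, 0.05, 0.04, 0.0, 10), 3))

# Effect 2: r < g -> the ratio can stabilise despite a small primary deficit.
print(round(debt_path(0.60, 0.015, 0.025, 0.005, 10), 3))
```

With r − g < 0, the ratio converges towards primary deficit ÷ (1 − (1 + r)/(1 + g)), which is why a moderate deficit can coexist with a stable ratio.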

Figure 1: The gap between the red line and the blue line measures the contribution of interest charges net of growth (r − g) to the evolution of the debt-to-GDP ratio. Data: Insee.
Provided by the author

How conditions contributed to the debt

Recent history divides the presidential mandates into two groups: those where “bad” conditions mostly explain the rise in the debt (the debt-to-GDP ratio) – in Figure 1, the red curve rises more than the blue curve – and those where primary deficits contribute most of the rise – in Figure 1, the blue curve rises more than the red curve.

The first group comprises the mandates of Jacques Chirac and Nicolas Sarkozy; the second, those of François Hollande and Emmanuel Macron.

The data show that over Jacques Chirac’s two mandates (1995-2007), the debt-to-GDP ratio rose by 8.99 points (0.75 point per year). This increase was due to conditions that were “bad” for public finances (the r − g > 0 effect), which pushed the ratio up by 10.07 points, while the dynamics of primary deficits reduced it by 1.08 point. During this period, interest rates on public debt were very high – between 4% and 6%.

Under Nicolas Sarkozy (2007-2012), the debt-to-GDP ratio grew by 22.76 points (4.55 points per year), of which 11.01 points were driven by primary deficits (48% of the total rise) and 11.75 points by economic conditions (52% of the total). Interest rates remained high – between 3% and 4%. The large primary deficits followed political choices aimed at cushioning the subprime crisis.

By contrast, during François Hollande’s mandate, it was the rise in primary deficits that explained 71.5% of the total increase in the debt-to-GDP ratio (9.13 of the 12.74 points of total increase, i.e. 2.55 points per year). Interest rates continued to fall, from 3% to less than 2%, while primary deficits were not brought under control even though the subprime and sovereign debt crises had passed.

Primary deficits under Emmanuel Macron

Emmanuel Macron’s mandates, up to 2024, accentuate the pattern further. The debt rose by only 10.8 points (1.35 points per year), because conditions reduced it by 15.31 points as interest rates became very low, dropping below 1% in 2020. The rise in the debt is explained solely by the very sharp increase in primary deficits, which pushed it up by 26.11 points, during a period when the Covid-19 pandemic and the energy crisis led the state to insure the French against excessive falls in purchasing power.
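For each mandate, the contribution of primary deficits and the contribution of conditions (r − g) should sum to the total change in the debt-to-GDP ratio. A quick check of the decompositions quoted above:

```python
# Each entry: (primary-deficit contribution, r-g contribution, total change),
# in points of the debt-to-GDP ratio, as quoted in the article.
mandates = {
    "Chirac (1995-2007)":  (-1.08, 10.07, 8.99),
    "Sarkozy (2007-2012)": (11.01, 11.75, 22.76),
    "Macron (to 2024)":    (26.11, -15.31, 10.8),
}

for name, (primary, conditions, total) in mandates.items():
    # The two contributions must add up to the total change in the ratio.
    assert abs(primary + conditions - total) < 0.01, name
    print(f"{name}: {primary:+.2f} + {conditions:+.2f} = {total:+.2f}")
```

The figures are internally consistent: under Chirac the conditions term dominates, while under Macron a large deficit term is partly offset by a negative conditions term.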

Read more: When does a stock market crash begin? And what exactly counts as one?

The coming period, from 2025 to 2029, falls into the second configuration (r − g < 0), but one in which conditions will make managing the public debt less and less easy. Even with a political objective of controlling indebtedness, primary deficits can therefore only be reduced gradually. And with those deficits continuing to weigh on the debt, growth will compensate less and less for rising interest rates.

The budget presented by François Bayrou on July 25 would raise the debt-to-GDP ratio by 4.6 points (0.92 point per year), in a context where conditions will reduce it by 1.7 points; primary deficits will therefore increase it by 6.3 points. In this context, the budgetary effort proposed by the Bayrou government would stabilise the debt-to-GDP ratio at around 117% – admittedly far from the stabilisation at around 60% seen during Jacques Chirac’s mandates…

Balancing spending and revenues

The evolution of the primary deficit (the gap between spending, excluding interest charges, and revenues) shows that over the past 29 years, there were ten years in which it widened. Three major jumps stand out: 1.82 points in 2002, with the stock market crash; 4.2 points in 2009, with the subprime crisis; and 6.1 points in 2020, with the Covid-19 pandemic.

In 2002, the rise in the deficit was split between 1.1 points from higher spending and 0.72 point from lower revenues. The large increases of 2009 and 2020 were mostly due to spending rises: 95% of the 4.2 points in 2009 and 97% of the 6.1 points in 2020. To contain the debt, revenues eventually rose after the crises – between 2004 and 2006, between 2011 and 2013 and, finally, between 2021 and 2022. But spending was never cut, either after 2011 or after 2023.

It is therefore the persistence of spending at a high level that explains the rise in the debt-to-GDP ratio. Only in the very recent period (2023), with the Ukraine crisis, did the state cut revenues in order to preserve purchasing power in a context of high inflation.

Controlling public spending

The Bayrou government’s plan, by placing three-quarters of the adjustment on spending, proposes to regain control of public spending so that it represents 54.4% of GDP in 2029 – the level observed before the 2007 crisis. Beyond stabilising the debt-to-GDP ratio, this political choice also keeps open the possibility of managing a potential future crisis. The question then is: which spending items should be cut first?

Change in each type of spending by mandate. The change measures the gap, in GDP points, between spending at the end of the mandate (2023 for Emmanuel Macron) and spending at its start. Data: Insee.
Provided by the author

The spending items that have grown since 1995 are those linked to the environment (+0.8 GDP point), health (+3.2 GDP points), recreation, culture and religion (+0.6 GDP point) and social protection (+1.3 GDP points). Those that have fallen are general public services (−4.1 GDP points), defence (−1.1 GDP points) and education (−1.5 GDP points). In future, a budget reallocating spending towards defence and education through better control of health and social protection spending should therefore be seen as a simple rebalancing.

The Conversation

François Langot does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.

ref. State debt under Chirac, Sarkozy, Hollande and Macron… what recent history teaches us – https://theconversation.com/lendettement-de-letat-sous-chirac-sarkozy-hollande-macron-ce-que-nous-apprend-lhistoire-recente-261478