Recreational fishing in the US catches far more fish than previously estimated

Source: The Conversation – USA (2) – By Matthew Robertson, Research Scientist, Fisheries and Marine Institute, Memorial University of Newfoundland

Fishing is recreational, but it’s also an inexpensive way to add protein to people’s diets. Allen J. Schaben / Los Angeles Times via Getty Images

One of the United States’ largest fisheries is hiding in plain sight. Recreational freshwater anglers in the lower 48 states catch – and keep – far more fish than any official body has estimated, according to new research from our team of North American fishery scientists.

Specifically, our analysis, which integrated thousands of recreational fishing surveys across the U.S., found that people who engage in recreational fishing in the country’s lakes, ponds and reservoirs catch between 2 billion and 6 billion fish each year. Many of them practice catch-and-release fishing, but even after accounting for all the fish released, we estimated that they keep between 230,000 and 670,000 metric tons of fish in the U.S. alone.

That’s between 17 and 48 times more fish than prior U.S. estimates that have been reported to the United Nations’ Food and Agriculture Organization.

That harvest also amounts to about 20% of the United States’ total recorded annual consumption of fresh, never-frozen fish. We estimated the value of the recreational fish catch at roughly US$3 billion a year. By contrast, domestic commercial processed fishery products are valued at about US$12 billion a year.

Not just for fun

Historically, most researchers and policymakers viewed recreational fishing as a leisure activity rather than a significant part of the nation’s food supply.

However, for many households, recreationally harvested fish – fish that people catch and keep, often to eat – represent a meaningful source of protein at very low cost. By recognizing this unseen harvest as a significant food source, policymakers can recognize that changes in recreational fishing opportunities don’t just affect anglers’ enjoyment, but also millions of households’ food security.

The immensity of recreational fishing also likely has effects on freshwater ecosystems that have gone unrecognized by fisheries managers.

For example, a 2019 analysis of nearly 200 lakes in northern Wisconsin found that around 40% of walleye recreational fisheries were overfished. Even when fish are released and not kept for eating, they can die shortly after release or be injured or stressed from having been caught. Injured and stressed fish may produce fewer offspring, be more vulnerable to predators and be less capable of catching prey.

Together, these effects on fish populations and the act of fishing can substantially change how freshwater ecosystems function. For example, removing top predators like walleye can lead to an increase in small fish, which eat tiny zooplankton, which feed on phytoplankton. If zooplankton populations fall, that can ultimately lead to more frequent algal blooms.

Effective fisheries management requires accurate estimates of fishing activity. Without that information, officials may overestimate fish population size, which could lead to unexpected population collapses and new fishery regulations and closures.

Why the numbers don’t add up

Official harvest statistics for fisheries, which are collected by the U.N. from national governments, usually focus on ocean fisheries, which are typically the largest and most lucrative.

As a result, the only official statistics for the U.S. freshwater fisheries harvest cover commercial fisheries that primarily operate in the Great Lakes.

Collecting data on recreational fisheries is challenging. Unlike commercial fishers, who unload their catch at centralized ports, recreational fishers are scattered across the landscape, making it nearly impossible to know where they are and what they are catching across the entire country. With an estimated 35 million people fishing across millions of rivers, lakes, ponds and reservoirs, recreational fishing is an extremely difficult activity to track.

A person fishes in Echo Lake in Los Angeles. Jason Armond / Los Angeles Times via Getty Images

Recreational fisheries data tends to be collected by state agencies that conduct angler surveys. Angler surveys involve counting and interviewing anglers at specific rivers, lakes, ponds and reservoirs to provide snapshots of who is fishing, how they fish and what they catch. Each state collects data differently, and surveys typically focus on a few locations rather than the entire state.

Without a coordinated national effort, the total recreational catch has remained effectively invisible because one state’s questions and findings do not always align with those in other states.

From local surveys to national statistics

Our new research, a collaborative effort between me and four colleagues from the U.S. Geological Survey, the University of Missouri and Louisiana State University, sought to improve the quality of recreational fishing data. Over the past several years, our team has worked to compile angler surveys from across the country into a single database.

We have not received data from every river, lake, pond and reservoir; in fact, we have not even collected data from every state. But we have collected over 15,000 surveys from 40 states, and we are collecting more surveys every day.

To calculate our estimates, we combined three major factors:

  • Nationwide numbers of fish caught and hours spent fishing.

  • Assumptions about how many lakes, ponds and reservoirs people fish based on the relationships between water body size and known fishing locations.

  • The proportion of caught fish that aren’t thrown back.

We arrived at an estimate of 2 billion to 6 billion fish caught.
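The arithmetic behind combining those three factors can be illustrated with a toy sketch. All numbers below are hypothetical placeholders, not the study's actual inputs or model:

```python
# Hedged sketch of the estimation logic: scale surveyed catch rates up by
# an assumed number of fished water bodies, then apply the fraction of
# fish kept. Every input here is a hypothetical placeholder.

def estimate_total_catch(catch_per_waterbody: float, n_fished_waterbodies: int) -> float:
    """Scale per-water-body catch up to a national total."""
    return catch_per_waterbody * n_fished_waterbodies

def estimate_harvest(total_catch: float, keep_fraction: float) -> float:
    """Fish kept = total caught times the proportion not thrown back."""
    return total_catch * keep_fraction

# Hypothetical inputs: 15,000 fish caught per fished water body per year,
# 250,000 fished lakes/ponds/reservoirs, and 30% of caught fish kept.
total = estimate_total_catch(15_000, 250_000)
kept = estimate_harvest(total, 0.30)
print(f"Estimated catch: {total:,.0f} fish; kept: {kept:,.0f} fish")
```

With these placeholder inputs the sketch yields 3.75 billion fish caught, which falls inside the study's 2 billion to 6 billion range; the real analysis propagates uncertainty through each factor rather than using point values.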

Rethinking recreational fisheries

Even our most conservative assumption of harvested fish – 236,000 metric tons – is much higher than the prior U.N. estimates of 13,388 metric tons. We hope these new numbers will serve as initial estimates that will be continually refined as we and other researchers collect more data and better understand where and how people fish.

Getting this first estimate provides a baseline for fisheries managers to ensure fishing policies line up with the actual effects of recreational fishing.

We also note that recreational freshwater fishing happens across the globe. If the actual recreational fish harvest is significantly higher than has previously been estimated in the U.S., the same is likely true worldwide.

The Conversation

Matthew Robertson receives funding from a Marine Institute of Memorial University Start-Up Fund, the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, the Newfoundland and Labrador Innovation and Business Investment Corporation’s Research and Development program, the Atlantic Groundfish Council, the Environment and Climate Change Canada (ECCC) Environmental Damages Fund, and the Robert and Edith Skinner Wildlife Management Fund.

This research was funded by a grant from the U.S. Geological Survey Climate Adaptation Science Center.

ref. Recreational fishing in the US catches far more fish than previously estimated – https://theconversation.com/recreational-fishing-in-the-us-catches-far-more-fish-than-previously-estimated-276812

Countries must back commitments to transition from fossil fuels with action

Source: The Conversation – Canada – By Philippe Le Billon, Professor, Geography Department and School of Public Policy & Global Affairs, University of British Columbia

The first international Conference on Transitioning Away from Fossil Fuels concluded on April 29 in Santa Marta, Colombia. Stemming from the failure of the last COP meeting in Belém, Brazil, to address the necessity of reducing reliance on fossil fuels, the intergovernmental event was co-organized by Colombia and the Netherlands and gathered delegations from 59 countries.

The conference included countries advocating for a phase-down of fossil fuels, such as Costa Rica, Denmark, Spain, France and Kenya within the Beyond Oil and Gas Alliance.

It also included countries highly exposed to climate change, such as Tuvalu, Vanuatu, Trinidad and Tobago, Uganda and Bangladesh, all members of the Climate Vulnerable Forum, as well as major oil-producing countries like Brazil, Canada, Norway and Nigeria. Canada carefully refrained from using the word fossil fuels in its very cautious plenary declaration.

As expected, the conference did not produce binding commitments or a negotiated agreement. Its goal, for now, is more modest: to provide a space more flexible than the United Nations climate COPs, one that enables frank discussions on the practical realities of phasing down fossil fuels and fosters a coalition capable of pushing future COPs toward more concrete action, building on what states agreed at COP28 in 2023.

Many participants framed the Santa Marta conference as a historic turning point, echoing the optimism that followed the Paris Agreement and COP28’s call for transitioning away from fossil fuels. However, the limited tangible outcomes of these past commitments suggest caution.

Will Santa Marta mark the beginning of a genuine transformation, or remain another symbolic milestone?




Read more: Here’s what to expect from the first Conference on Transitioning Away from Fossil Fuels


A turning point or familiar rhetoric?

The very existence of a summit dedicated to phasing down fossil fuels is unprecedented — particularly one hosted by a Global South country like Colombia, with an economy that remains significantly dependent on oil and coal exports.

However, many proposals discussed in Santa Marta have already been raised at previous conferences, such as the 2025 African Climate Summit and the 2023 Summit for a New Global Financial Pact. Since then, calls for global action to address climate change and help climate-vulnerable countries have largely failed to translate into concrete policies.

This raises the risk that Santa Marta may reproduce what scholars describe as “incantatory governance” — a model that combines ambitious global goals with flexible, largely voluntary instruments and an optimistic narrative designed to mobilize international consensus without necessarily delivering structural change.

Four dynamics to watch

Whether Santa Marta becomes more than rhetoric will depend on four key dynamics.

  1. Will participating countries remain in informal, non-binding coalitions, or will they form more structured and co-ordinated groups focused on phasing down fossil fuels, similar to how OPEC organizes oil-producing states? Recent academic work suggests that such coalitions could play a decisive role in shaping global efforts to phase down fossil fuels.

  2. The credibility of this process will hinge on whether countries adopt binding national measures, such as bans on new exploration licenses, rather than relying solely on voluntary commitments. A meaningful transition will require combining incentives (like subsidies for renewables and electrification) with constraints (taxation, regulation and prohibitions), all within the framework of a just transition. Tools like the Fossil Fuel Non-Proliferation Tracker can help monitor and assess fossil fuel-related policies around the world.

  3. The momentum generated in Santa Marta must withstand domestic political shifts that could weaken commitments. This is particularly relevant for oil-producing countries like Colombia, where future governments may adopt different positions. At the same time, persistent distrust remains due to the gap between climate finance promises made by wealthy countries and the funds actually delivered.

  4. Effective action will depend on stronger co-ordination among governments, civil society and the scientific community. Notably, the parallel academic conference and the People’s Summit for a Fossil-Free Future at Santa Marta produced detailed and actionable proposals aligned with the scale of the climate crisis. Bridging these initiatives with formal policy processes will be essential to move from rhetoric to implementation.

From commitments to action

The conference in Santa Marta is an important step toward building political coalitions to phase out fossil fuels. But its long-term significance remains uncertain. Without binding commitments, political continuity and co-ordinated action, it risks becoming another instance of empty climate diplomacy.

Turning this moment into a movement will demand structural reforms, credible policy tools and sustained political will. International negotiations and clear roadmaps are crucial.

A follow-up summit is planned for 2027 in the South Pacific, co-chaired by Tuvalu and Ireland. In the meantime, three working groups have been established: one to develop national and regional phase-down roadmaps, another to address macroeconomic dependence on fossil fuels and strengthen public financial capacity, and a third to decarbonize international trade.

Santa Marta could mark the beginning of a major shift in climate negotiations, one clearly focused on ultimately phasing out fossil fuels.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Countries must back commitments to transition from fossil fuels with action – https://theconversation.com/countries-must-back-commitments-to-transition-from-fossil-fuels-with-action-282118

Canada’s first Inuit-led university is coming to Nunavut — here’s why it matters

Source: The Conversation – Canada – By Daniel Sims, Associate Professor of First Nations Studies; Adjunct Professor of Education, University of Northern British Columbia

The small community of Arviat, Nvt., has reportedly been selected to host the main campus of Inuit Nunangat University, the first Inuit-led university in Canada. The institution is expected to open in 2030.

Inuit Tapiriit Kanatami (ITK), which represents Canada’s 70,000 Inuit, passed a resolution to develop the university in 2017, “marking a significant step toward self-determination in higher education.”

The vision and plans for the university reflect a common saying among the Indigenous Peoples of the Prairies: “Education is the new buffalo.” It alludes to the importance of buffalo to Indigenous Peoples prior to the animal’s near-extinction in the late 19th century, and the importance placed on education today.

This emphasis on education is partly a response to colonial policies that systematically denied Indigenous Peoples access to quality education for generations.

The consequences of that history are still seen today. While there is a gap in employment rates between Indigenous and non-Indigenous adults overall, the gap essentially disappears for those with a bachelor’s degree or higher.

In this context, the establishment of a university is more than the creation of an educational institution. It’s a way to combat the injustices of the past and develop the Indigenous economy, which in turn helps fund Indigenous self-determination.

Not the first Indigenous university

Inuit Nunangat University will not be the first Indigenous-led university in Canada. That distinction is most often attributed to the First Nations University of Canada in Saskatchewan, which started as the Saskatchewan Indian Federated College in 1976.

The university itself does not appear to claim the distinction on its website, perhaps because of the long history of Indigenous-led post-secondary institutions that predate or parallel it across Canada, from the Wilp Wilxo’oskwhl Nisga’a Institute in Gitwinksihlkw, B.C., which is federated with the University of Northern British Columbia, to Kiuna College in Odanak, Que.

The existence of these institutions reinforces the value Indigenous Peoples see in education — a statement that may surprise those who associate Indigenous education primarily with the residential school system.




Read more: National Day for Truth and Reconciliation: Universities need to revisit their founding stories


Yet, as the Truth and Reconciliation Commission’s final report made clear, the schools were incredibly poor at actually educating Indigenous children; those who succeeded academically did so despite the system rather than because of it. The schools were designed primarily around assimilation and labour, not academic learning.

That failure, and the determination to correct it, is one of the reasons why members of Saddle Lake Cree Nation occupied the Blue Quills Indian Residential School in Alberta in 1970 and demanded the right to run it themselves.

Elder Louis Lapatack from Saddle Lake Cree Nation speaks about life at the Blue Quills Residential School. (City of Edmonton)

After a 17-day sit-in, then-minister of Indian Affairs Jean Chrétien transferred operations to the Blue Quills Native Education Council. The council eventually transformed it into the Indigenous-run and operated University nuhelot’įne thaiyot’į nistameyimâkanah Blue Quills.

Education as a form of investment

Many First Nations, Métis nations and Inuit communities fund post-secondary education for their members, often through partnerships with Indigenous Services Canada. There is a broad recognition that investing in education benefits the nation and community, and the number of Indigenous Peoples obtaining a bachelor’s degree or higher has been increasing.

That is one reason for the numerous Indigenous-led post-secondary institutions across Canada. Another is that, while Indigenous Peoples are theoretically free to attend any post-secondary institution in the world, many institutions are not located near their communities.

This matters more than it might initially appear. According to the 2021 Census, there is a clear correlation between remoteness and lower levels of post-secondary education. The share of Indigenous adults with a post-secondary qualification was significantly higher in areas closer to economic centres.

Building schools to be closer to home, rather than expecting Indigenous students to travel or move away from home, is the logic behind Inuit Nunangat University.

Designing from the inside out

There are also benefits to having institutions under Indigenous control. Indigenous-led post-secondary institutions can develop curriculum and programs that are directly tailored to the needs and desires of their communities.

They also treat Indigenous knowledge systems as foundational rather than supplementary. For generations, Indigenous ways of knowing were delegitimized. Western disciplines defined what counted as knowledge, and Indigenous Peoples who entered those institutions were expected to set aside their own epistemologies.

Most Canadian universities are attempting to address this through changes grouped under the term “Indigenization,” but questions remain about whether such changes actually address underlying colonial structures or simply work around them.

Indigenous post-secondary institutions are, in principle, better positioned to make more fundamental changes. Nowhere is this better seen than in the six proposed faculties of Inuit Nunangat University, which reflect an Inuit take on programs and courses that differs from the standard structure of Canadian universities. This includes Inuktut language immersion.

Other Indigenous institutions have already led the way on language-based degrees. The Nicola Valley Institute of Technology and the aforementioned Wilp Wilxo’oskwhl Nisga’a Institute have created language-based degrees for Nłeʔkepmx and Nisga’a in partnership with the University of British Columbia and the University of Northern British Columbia respectively.

A barrier dismantled

From 1876, when the Indian Act was first passed into law, until its 1920 amendment, status Indians lost their Indian status if they earned a degree and/or worked in certain professions.

For decades after, the most significant barrier to education was the failure of the Indian Residential School system to actually educate Indigenous children. Both forms of exclusion have now been formally dismantled, though their effects persist in the gaps that remain.

More and more Indigenous Peoples are pursuing post-secondary education, and institutions designed specifically to support that pursuit are a central part of how those gaps close. The Inuit Nunangat University, opening in Arviat in 2030, will be part of that process.

The Conversation

Daniel Sims is a member of the Tsay Keh Dene First Nations. Currently he holds an Insight Grant as well as an Explore Grant from the Social Sciences and Humanities Research Council (SSHRC) to research failed economic developments and concepts of wilderness in Tsek’ehne traditional territory (the Finlay-Parsnip watershed).

ref. Canada’s first Inuit-led university is coming to Nunavut — here’s why it matters – https://theconversation.com/canadas-first-inuit-led-university-is-coming-to-nunavut-heres-why-it-matters-281616

The Conversation Africa: 11 years of impact

Source: The Conversation – Africa – By Jabulani Sikhakhane, Editor, The Conversation

Over the past 11 years, The Conversation Africa has published 12,961 articles by 8,257 authors, making the expertise of academics and researchers in Africa and other parts of the world accessible to the public, national and global policymakers, and other stakeholders. These articles are also republished by other media, making our work an important pillar of the media ecosystem.

It’s sometimes tough to gauge the true impact of the articles we publish. Republication by other news outlets – and readership on our site – help put numbers on their reach, but not on how they might influence policy and opinion.

So it’s very gratifying when authors share stories that illustrate the ripple effect their articles have had. Here are some.

After the publication of her article on the pressures facing families that rely on social grants in South Africa, Nokukhanya Ndhlovu was invited by the country’s Public Protector to consult on hearings about child support and social assistance.

In Kenya, Joseph Ogutu’s analysis of a wildlife conservation policy fed directly into high-level discussions. The author was invited to make a presentation at an annual stakeholder meeting organised by the local governor’s office.

In west Africa, an article by Ifesinachi Okafor-Yarwood and Sayra van den Berg Bhagwandas on the central role women play in informal cross-border trade helped shift thinking among policymakers, helping gain broader recognition of women’s economic contributions. Following the article, the authors were invited to consult with policymakers at the United Nations and the World Trade Organisation.

Other stories demonstrate how impact can unfold through shifts in awareness and accountability. Coverage of issues ranging from social justice to agriculture has triggered consultations between researchers and policymakers, opening pathways for longer-term reform.

The impact of these articles, and thousands of others, is a reminder of why The Conversation Africa exists: to ensure that evidence informs debate, that African expertise shapes decisions, and that knowledge can help build better policy outcomes across the continent.

The Conversation

ref. The Conversation Africa: 11 years of impact – https://theconversation.com/the-conversation-africa-11-years-of-impact-282317

The peptide problem: Hype is outrunning the evidence

Source: The Conversation – Canada – By Stuart Phillips, Professor, Kinesiology, Tier 1 Canada Research Chair in Skeletal Muscle Health, McMaster University

Health Canada recently warned Canadians not to buy or inject unauthorized peptide drugs sold online, naming products that include BPC-157, CJC-1295, ipamorelin, TB-500 and retatrutide.

The advisory notes these products are being marketed online and on social media for anti-aging, weight loss, injury recovery, sleep, mental focus and general “wellness,” and that Health Canada has already seized several of them.

Peptides, short chains of amino acids (the building blocks of protein), are no longer marketed only to bodybuilders and elite athletes.

A scroll through Instagram and TikTok quickly reveals a broader wellness market in which influencers, including medical doctors, naturopaths and personal trainers, pitch compounds such as BPC-157 and TB-500. The hook? Claims that these self-injected compounds are recovery shortcuts that reduce wrinkles, “melt” belly fat and deliver powerful “anti-aging” effects.

The problem? Few, if any, of these substances have been tested in human trials.

As a case example, body protective compound 157 (BPC-157) is scientifically interesting. Reviews published in 2025 describe a body of research dominated by animal and cell studies, with signals suggesting effects on angiogenesis (the growth of blood vessels), growth-factor signalling (mainly growth hormone) and musculoskeletal healing.

In one systematic review, 544 papers were screened, 36 met the inclusion criteria, and 35 of those were in rodents or cells; only one involved humans in a musculoskeletal context.

Plausible hypotheses

That is the tension at the heart of the current peptide boom: plausible biology can generate excitement long before it generates reliable clinical evidence. Caution is warranted because animal findings do not reliably map onto what happens in people. Molecular pathway diagrams and rodent healing results are useful for generating scientific hypotheses, but they aren’t evidence that a product improves outcomes in human patients.

Most potential products tested in rodents do not make it to market. The “translational squeeze” — the number of products that begin rodent trials compared to the number that successfully progress from rodent trials to human trials, and from human trials to regulatory approval — is estimated to be greater than 20 to one.

Published human evidence for BPC-157 remains trivial. A retrospective knee-pain report included 16 patients; an interstitial cystitis pilot trial enrolled 12 women; and a recent intravenous safety pilot involved just two healthy adults.

These studies are too small and poorly controlled to establish whether the peptide outperforms natural recovery, the placebo effect or conventional rehabilitation. A randomized, double-blind, placebo-controlled hamstring-strain trial has now been registered, which is exactly the kind of study still missing from the evidence and efficacy base.

Placebo effect and regression to mean

The social power of peptide testimonials is easy to understand. Pain, soreness and recovery are subjective and highly variable outcomes.

The U.S. National Center for Complementary and Integrative Health notes that randomized, placebo-controlled trials are the gold standard because they help determine whether apparent improvement is due to the treatment or to chance. Harvard Health puts the related point bluntly: placebo effects can ease symptoms like pain, fatigue and nausea, but they do not shrink tumours or lower cholesterol.

Symptoms that are severe when people first seek help often improve by the time they are next measured, simply because of natural fluctuation, a phenomenon known as regression to the mean. So when someone injects BPC-157 and feels better two weeks later, several explanations compete: time, rehabilitation, expectation (the person has just spent money on a peptide and perhaps publicly committed to trying it), and regression to the mean. A testimonial saying “it worked” can generate a hypothesis; it cannot settle causation.
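Regression to the mean can be demonstrated with a toy simulation (a hedged sketch with made-up parameters, not data from any peptide study): if people seek treatment only on a day their symptom is unusually bad, the next measurement tends to improve even when no treatment is given at all.

```python
# Toy simulation of regression to the mean, standard library only.
# Pain scores fluctuate randomly around a stable baseline; we "enroll"
# people on a day their pain is unusually high and re-measure later.
import random

random.seed(42)

BASELINE = 5.0   # hypothetical true average pain (0-10 scale)
NOISE = 2.0      # hypothetical day-to-day fluctuation

def pain_score() -> float:
    """One day's pain: baseline plus random fluctuation, clipped to 0-10."""
    return min(10.0, max(0.0, random.gauss(BASELINE, NOISE)))

enrolled_first, enrolled_later = [], []
for _ in range(10_000):
    first = pain_score()
    if first >= 8.0:                         # seeks help when pain is severe
        enrolled_first.append(first)
        enrolled_later.append(pain_score())  # follow-up, no treatment given

avg_first = sum(enrolled_first) / len(enrolled_first)
avg_later = sum(enrolled_later) / len(enrolled_later)
print(f"At enrollment: {avg_first:.1f}; at follow-up: {avg_later:.1f}")
```

The follow-up average drifts back toward the 5.0 baseline with zero treatment, which is exactly why uncontrolled before-and-after testimonials cannot distinguish a drug effect from natural fluctuation.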

For these reasons, regulatory warnings deserve more attention than influencer enthusiasm. Health Canada states that unauthorized injectable peptides are illegal in Canada, have not been assessed for safety, efficacy or quality, and may contain too much, too little or none of the claimed ingredient.

Notably, labels such as “For Research Use Only, Not for Human Consumption” do not make these products legal for human use.

In the United States, the Food and Drug Administration (FDA) classified BPC-157 as Category 2 for compounding due to adverse immune system reactions, peptide-related impurities and insufficient safety information to determine whether it would cause harm when administered to humans.

Purity certificates and conspiracy theories

A common rejoinder from people buying peptides online is that third-party certificates of analysis show the powder they receive is pure and free of contaminants. That reassurance does not survive scrutiny.

The “third-party” labs that produce these reports are often the vendors themselves and offer assurances of 98 per cent purity, which might seem impressive but would not meet any reasonable drug standards. And what exactly is the other two per cent? The consumer is asked to take the claims of purity as proof, while the peptide-related impurities that concern regulators remain invisible to the end user.

There is a deeper irony embedded in this practice: if buyers believed these products were safe, properly characterized and manufactured to the standards expected of pharmaceutical-grade products, they would not need to commission independent purity tests. The reliance on outside certificates of analysis is itself an admission that the normal guardrails of identity, potency, sterility and quality control are absent.

The conspiracy theory is that useful peptides are ignored by pharmaceutical companies because peptide drugs supposedly cannot be patented and turned into real medicines. The facts do not support that.

Semaglutide (used in GLP-1 medications like Ozempic and Wegovy) is a peptide drug, and tesamorelin is an FDA-approved synthetic growth hormone-releasing factor analogue. Peptide therapeutics are not an exotic category that mainstream drug development cannot handle.

What makes BPC-157 different is not that peptide medicine is impossible. But it’s been more than three decades since researchers began studying BPC-157, and public evidence remains dominated by animal- and cell-based papers and small human pilot studies. Journalistic investigation has also noted that much of the BPC-157 literature traces back to a single Croatian research group, another reason to be careful about mistaking repetition for independent confirmation.

Safety concerns

Jurisdictional and approval rules vary across regulators, but a global scan reveals that only a scant few peptides in BPC-157’s broader therapeutic class have achieved any clinically approved use.

A recent FDA 503A update in the U.S. should not be mistaken for a change in that picture. The FDA’s current safety page continues to cite concerns about immunogenicity, peptide impurities and limited safety data, and the agency has stated that the substance may still pose significant safety risks.

BPC-157 or other peptides may yet prove useful for a specific condition, at a specific dose and route of administration. The right response is not to dismiss that possibility, but to insist on the blinded, placebo-controlled human trials that could actually settle the question.

Until then, buying vials of dry powder, reconstituting it in sterile water, and injecting the cocktail with online-purchased needles will not provide proof of anything. It is high-risk, uncontrolled human self-experimentation.

The Conversation

Stuart Phillips owns shares in Exerkine. He receives funding from Nestle, Optimum Nutrition, Danone, and Nutricia. He is affiliated with WndrHlth, Liquid IV, and Myomar.

ref. The peptide problem: Hype is outrunning the evidence – https://theconversation.com/the-peptide-problem-hype-is-outrunning-the-evidence-280715

An innovative method for detecting Alzheimer’s disease early, using language analysis and AI

Source: The Conversation – in French – By Sylvie Ratté, Professeure titulaire en Génie logiciel et des TI, École de technologie supérieure (ÉTS)

L’analyse du langage permet d’identifier précocement les signes de déclin cognitif. Un projet en cours pourrait transformer la prise en charge de la maladie d’Alzheimer.


To improve early screening for this neurodegenerative disease, which affects one in three people aged 80 and over in Canada, my research team and I at ÉTS are working to develop an innovative method based on artificial intelligence (AI). Our approach, non-invasive and accessible, relies on analyzing patients’ language.


This article is part of our series The Grey Revolution. The Conversation invites you to examine, from every angle, the impact that the aging of the massive boomer cohort will have on our society. How we live, work, consume culture, eat, travel and care for ourselves: discover with us the upheavals under way, and still to come.


Language as an indicator of cognitive disorders

One of the first signs of Alzheimer’s disease is a subtle change in language. People may, for example, struggle to find their words, use pronouns in place of nouns, or pause more often. They express their ideas less densely.

A landmark study on the subject manually analyzed the autobiographical writings of nuns composed when they were in their twenties. It concluded that the density of ideas expressed was a good predictor of Alzheimer’s disease, even when the disease appeared 50 years later.

The tool we designed analyzes these linguistic cues with a simple test: picture description. This test is part of the Boston Diagnostic Aphasia Examination (BDAE). The patient is asked to describe an image, which makes it possible to assess various aspects of spoken language. The figure below shows one of the images used.

Cookie Theft Image
An image used in administering the Boston Diagnostic Aphasia Examination to assess language.
CC BY-NC

This approach lets us measure lexical richness, syntactic complexity and hesitation markers, all early warning signs of cognitive decline.

Our work shows that by analyzing subtle changes in discourse structure, we can identify up to 85 per cent of Alzheimer’s patients, even at a very early stage.
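As an illustration of the kind of linguistic markers just described, here is a minimal sketch. The function names, the filler-word list and the sample sentence are our own; the actual ÉTS pipeline extracts hundreds of far more sophisticated features from full transcripts.

```python
import re

# Toy versions of three markers described in the article: lexical richness,
# hesitation frequency and a crude proxy for syntactic complexity.

def lexical_richness(text: str) -> float:
    """Type-token ratio: distinct words divided by total words."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def hesitation_rate(text: str, fillers=("uh", "um", "er", "euh")) -> float:
    """Share of tokens that are filler words (a proxy for pauses)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return sum(w in fillers for w in words) / len(words) if words else 0.0

def mean_sentence_length(text: str) -> float:
    """Average number of words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = [len(re.findall(r"[a-zA-Z']+", s)) for s in sentences]
    return sum(counts) / len(counts) if counts else 0.0

# A hesitant description of the "Cookie Theft" picture.
sample = "The boy is, uh, taking a cookie. Um, the water is, er, running over."
print(lexical_richness(sample))
print(hesitation_rate(sample))
print(mean_sentence_length(sample))
```

In a real system, scores like these would be computed over a full recorded description and fed, alongside many other features, into a trained classifier.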

An alternative to traditional clinical tools

Unlike conventional diagnostic methods, which often rely on cognitive tests (including picture-description tests) or on cumbersome, costly imaging techniques, our approach allows accessible detection and frequent follow-up.

Currently, clinical picture-description tests require manual scoring of patients’ responses, which is slow and imprecise. Clinicians must transcribe patients’ answers word for word, a practically impossible task in a hospital setting.

With our technology, we automate this step and extract hundreds of language features to refine the analysis.

Our process not only detects the disease, but also tracks its progression and analyzes the effect of treatments. Our tools can measure patients’ progress over time, which is essential for evaluating treatment efficacy.

Technical and ethical challenges ahead

Despite promising advances, integrating AI into medicine still poses challenges.

One major issue is acceptance by health professionals and patients. We must not only prove these tools’ effectiveness, but also provide reassurance about the protection of personal data and the ethics of their use.

Moreover, language analysis relies on algorithms that must be trained on representative databases. We need to make sure our model works for people of different linguistic and sociocultural backgrounds. In our research, we take care to collect diverse data in Canada, but also in Ecuador and Mexico.

A classical program (or piece of software) runs on a logical process in which a sequence of instructions transforms input data into an output.

AI uses data differently: it mines them for recurring patterns. The process is similar for another AI detecting other kinds of problems; only the data change. If the data are not varied enough, the AI will cling to that narrow reality, which can introduce cultural or linguistic biases and undermine the reliability of diagnoses.

Thus, in one of our experiments, the AI had not adjusted to the fact that, in an earlier era, women often wore aprons in the kitchen. Although this word is highly relevant for assessing the quality of descriptions, the AI had discarded it because it did not appear frequently enough.
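The pitfall in this anecdote can be sketched in a few lines. The code below is our own toy illustration, not the team’s software: a hand-written rule always recognizes a rare word such as “apron”, while a frequency-pruned vocabulary, of the kind typically built before training a model, silently drops it.

```python
from collections import Counter

def classical_rule(description: str, required_word: str = "apron") -> bool:
    # A hand-written rule: the word counts no matter how rare it is.
    return required_word in description.lower().split()

def build_vocabulary(corpus, min_count=2):
    # Frequency-based pruning: keep only words seen at least min_count times.
    counts = Counter(w for doc in corpus for w in doc.lower().split())
    return {w for w, c in counts.items() if c >= min_count}

# A tiny hypothetical corpus of picture descriptions.
corpus = [
    "woman washing dishes water overflowing",
    "boy on stool reaching for cookies",
    "woman in apron washing dishes",
]

vocab = build_vocabulary(corpus)
print("apron" in vocab)           # the model's vocabulary drops the rare word
print(classical_rule(corpus[2]))  # the fixed rule still sees it
```

The fix in practice is not to hand-write rules, but to train on more varied data so that rare-but-meaningful words survive the pruning.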

The other problem we encountered concerns the quality of the intermediate AIs used to turn the speech signal into written text (transcription). The AIs that perform this conversion are less accurate for French (particularly the spoken French of Quebec and of Canada in general), and less accurate still for Spanish.

A technology with many applications

The implications of this research go beyond Alzheimer’s disease. We are also working on aphasia, which affects communication in both comprehension and expression. This condition can result from a stroke or a head injury.

Our research team is also exploring the use of these tools for non-verbal autistic children, who often learn language differently. For example, we have found that some of these children acquire a language through exposure to YouTube videos, which opens new avenues for studying language learning.

This work is part of a broader effort to better understand the links between language and cognition. Artificial intelligence lets us extract a volume of information that no human could analyze at scale. The ultimate goal is to develop tools suited to the needs of clinicians and patients, to improve their quality of life.

A future where AI and health converge

Integrating this technology into medical practice could revolutionize the management of cognitive disorders. Our goal is to make these tools accessible to everyone, without requiring sophisticated equipment.

This approach could thus enable early detection and more personalized follow-up, benefiting millions of patients around the world.




By combining AI and cognitive science, our research team is paving the way for medicine that is more predictive, better tailored to patients’ needs and more effective in the fight against neurodegenerative diseases.

Although we are still at the beginning of this technological revolution, current advances already show considerable potential. By making these tools accessible, we hope to transform how we approach the diagnosis and monitoring of language disorders.

In parallel, our team is already working on new collaborations with medical institutions to test these technologies in clinical settings.

We hope that, in time, our tools can be integrated directly into care protocols, to offer monitoring that is more precise and adapted to patients’ individual needs.

La Conversation Canada

Sylvie Ratté has received funding from CRSNG and MITACS.

ref. An innovative method for detecting Alzheimer’s disease early, through language analysis and AI – https://theconversation.com/une-methode-innovante-pour-detecter-lalzheimer-de-maniere-precoce-grace-a-lanalyse-du-langage-et-a-lia-254338

Self-destructive behaviour among Hermann’s tortoises on a Macedonian island is leading to ‘demographic suicide’

Source: The Conversation – France – By Xavier Bonnet, Directeur de Recherche CNRS à l’UMR 7372 en biologie et écologie des reptiles, Centre d’Etudes Biologiques de Chizé; La Rochelle Université

Golem Grad island in North Macedonia is full of Hermann’s tortoises. However, demographic projections suggest the last female could be wiped out by 2083. A female (pictured) that had fallen down a sheer drop of more than 20 metres. Provided by the author

On the strictly protected island of Golem Grad in North Macedonia, the tortoises are destroying their own population. During prolonged courtship, aggressive males are exhausting the females and frequently pushing them off the cliffs. Consequently, there are now one hundred males for every female capable of laying eggs. This is the only known example of demographic suicide in the wild to date.

In favourable, stable and protected environments, large animal populations have no reason to die out. Extinction should occur only when a catastrophe, such as a devastating fire, habitat destruction or over-exploitation, wipes out all individuals or weakens the population enough to leave it vulnerable to disease and other disturbances and hazards.

Well sheltered by the steep cliffs that line the island of Golem Grad on Lake Prespa in North Macedonia, Hermann’s tortoises (Testudo hermanni boettgeri) thrive on the wooded plateau.

After basking in the morning sun, they graze in the meadows, rest, and court, with the males emitting high-pitched sounds during mating. At first glance, nothing seems to threaten this population.

As is the case with other long-living species, maintaining populations requires high survival rates among adults. On Golem Grad, the adults have no predators, as wild boars, dogs, rats and humans are absent from this strictly protected island. The mild Mediterranean climate of this lake, situated at an altitude of 850 metres, is also favourable for reptiles.

All these factors explain the extraordinary population density, which stands at around 50 individuals per hectare – the highest ever recorded for tortoises. The ease with which these tortoises can be observed and studied led to the establishment of the field-monitoring programme in 2008. This was the result of a fruitful scientific collaboration between North Macedonia, Serbia and France, and this long-term monitoring programme was awarded the CNRS’s SEE-Life label in 2023.

But appearances can be deceiving: this population is in a critical state.

The extensive demographic, behavioural, physiological and experimental data collected over nearly 20 years show that, although highly active sexually and reproductively, this population is effectively committing suicide!

Demographic suicide

Demographic suicide is a strange and counter-intuitive theoretical process. The conditions under which it may arise are quite specific. For a given species, one must imagine a high-density population in which violent sexual behaviour is so prevalent that it threatens the survival of females. This would gradually lead to an imbalance in the sex ratio (the proportion of males and females in a population), in this case an excess of males. This would put increasing pressure on females, who would become fewer in number and more harassed as a result. This would eventually create a vicious circle that leads to the disappearance of females and, ultimately, the extinction of the population.

The cliffs of Golem Grad
Golem Grad is an 18-hectare island in a lake perched at an altitude of 850 metres. On its plateau there is a forest of Greek junipers that can reach heights of up to 10 metres, as well as numerous reptiles, snakes, lizards and birds. The steep cliffs are particularly dangerous for female tortoises when they are harassed by the males exhibiting violent sexual behaviour. Provided by the author.

Coercive and violent mating behaviours are fairly common in nature. Typically, males harass females until they mate, sometimes injuring them in the process. In some cases, such behaviour can result in the death of the female, as has been observed in elephant seals (where the males are considerably stronger than the females), as well as in wild sheep, grey squirrels, otters, deer, toads, fruit flies, humans… However, such fatal outcomes do not benefit the males, as they will have no offspring if the females die during mating. Therefore, such excessively violent behaviours are maladaptive and remain marginal.

Furthermore, in wild populations, various regulatory mechanisms prevent this type of vicious circle, or extinction vortex, from emerging. Females can employ a wide range of avoidance and defence strategies. For example, they can hide, seek the protection of a dominant male, or form alliances. Excessively violent males generally produce fewer offspring than those who spare the females, meaning their behavioural traits are less likely to persist over time. Furthermore, when males become overcrowded, they tend to emigrate in search of better mating opportunities, thereby reducing the pressure on females. Thus, conflicts between the sexes in coercive mating systems are resolved through effective equilibria, without a harmful escalation for either sex.

However, rare experiments conducted on animals studied in captivity have shown that males can have a strong negative impact on populations when the sex ratio and population density are artificially skewed in favour of males. For example, in a species of Japanese shrimp, an excess of males reduces female fertility and mating opportunities. In the common lizard, an excess of males leads to increased aggression, reducing both the fertility and survival of females. This theory has thus received partial confirmation through experimentation.

What is causing disruption to the population in Golem Grad?

Information on the sexual behaviour of terrestrial tortoises, alongside a comparison with a control population, would be useful for understanding the situation in Golem Grad. The mating system of tortoises is coercive: males chase females, bump into them (like bumper cars) and sometimes bite them until they bleed and, in the case of Eastern Hermann’s tortoises, press on the females’ cloaca with their sharp tail spurs until they yield.

Mating
Before successfully mounting the female, the male persists for a long time by chasing her, biting her legs and bumping her shell, until she yields. Provided by the author.

Hermann’s tortoises are still abundant in North Macedonia. We were therefore able to study another dense population located on the shores of the lake, just 4 kilometres from the island. Genetically very similar to the Golem Grad population, this population lives in a protected environment without cliffs. The females are large and heavy, with many weighing between 2.5 and 2.9 kg, and highly fertile, as shown by X-rays. They are slightly more numerous than the males and larger than them, and they effectively resist their intermittent sexual assaults. No demographic problems have been detected; population forecasts suggest an increase in numbers.

A caudal spur
Males use the long horny tip of their tail to jab the females’ cloaca. On Golem Grad, this often results in injury. Provided by the author.

However, the situation at Golem Grad is quite different. On the plateau, over 700 adult males roam around looking for the forty or so adult females.

Furthermore, if physiological and environmental conditions are unfavourable, a Hermann’s tortoise may fail to lay eggs after mating. If the females are too thin or too stressed, for example, they cannot build up reserves in their ovarian follicles and the eggs do not develop. In reality, then, there are more than 100 males for every female capable of laying eggs. Meanwhile, our analysis of neonate and juvenile cohorts shows that the sex ratio is balanced at birth and during the first years of life, becoming skewed only later.

The surplus males often act in groups of three to eight. They harass the females all day long, injuring them, then lie down beside them in the evening, ready to start again the next day. The females get little respite and not enough time to feed. They are thin (very few exceed 1.6 kg, with a maximum of 1.75 kg), and when they lay eggs, they produce half as many as those in the control population.

Unable to escape, the females are regularly driven to the cliff edges, where the obstinate and clumsy males sometimes push them over. On July 18 2023, a GPS device fitted with an accelerometer, attached to a female, recorded her fall of over 20 metres; she died, broken in two, along with her three eggs.

A female broken in two
This female lived on the plateau; she fell from a height of over 20 metres. She was probably pushed by persistent males. Females, who are becoming increasingly rare, are harassed more and more, creating a vicious circle, or extinction vortex. Provided by the author.

Since the start of the study, we have identified almost all the tortoises found dead in the field, where their shells remain intact for a long time. Of the females that died, 22% had suffered a fatal fall, compared with 7% of the males.

In collaboration with British colleagues, we have also developed an epigenetic clock to estimate the age of individuals from a blood sample.

The oldest males are over 60 years old and the oldest female is 35. These results are consistent with morphological, growth, and demographic analyses. The survival rate is abnormally low among females, due to male aggression.

The vicious circle of extinction

Over time, the decline in the number of adult females, coupled with a drop in their fertility, slows down the renewal of the population, both relatively (the proportion of females) and absolutely (the total number of females). In 2009, we captured 45 adult females in the field, compared to 37 in 2010, 20 in 2024, and just 15 in 2025.

Moreover, it takes a female around fifteen years to reach adulthood. Frustrated by the lack of sexual partners, males mate with other males, with carcasses, with stones and with immature females. This last behaviour prematurely compromises the survival of future females and exacerbates the demographic problem.

Population dynamics can be modelled by incorporating these parameters and others, and predictions can be made. The last female could die in 2083. The males, deprived of females, would survive for decades, as these tortoises can live for more than eighty years, before the population finally dies out. This is a prediction; perhaps the population, currently on the brink of extinction, will recover, even if we cannot see how. While the tortoises’ very slow pace of life has given us the opportunity to observe an extinction vortex in the wild and to test a strange theory, it is intensive field monitoring above all that has provided the data and the inspiration.
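The decline in adult females reported above (45 captured in 2009, 15 in 2025) can be turned into a rough back-of-the-envelope projection. The sketch below is our own simplification, not the authors’ model: it fits a constant exponential decline to those two counts and finds the year the expected count falls below one female. The published projection, which also accounts for fertility, recruitment and survival, arrives at 2083.

```python
# Two endpoint counts of adult females reported in the article.
n0, year0 = 45, 2009
n1, year1 = 15, 2025

# Constant annual decline factor implied by the two counts (~0.93 per year).
annual_rate = (n1 / n0) ** (1 / (year1 - year0))

def females_in(year: int) -> float:
    """Expected number of adult females under pure exponential decline."""
    return n1 * annual_rate ** (year - year1)

# First year the expected count drops below one female.
year = year1
while females_in(year) >= 1:
    year += 1
print(year)
```

Even this crude two-point fit lands within a couple of decades of the full model’s 2083 estimate, which conveys how steep the observed decline is.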




The Conversation

Xavier Bonnet has benefited from funding from SEE-Life CNRS.

ref. Self-destructive behaviour among Hermann’s tortoises on a Macedonian island is leading to ‘demographic suicide’ – https://theconversation.com/self-destructive-behaviour-among-hermanns-tortoises-on-a-macedonian-island-is-leading-to-demographic-suicide-282081

College students are noticing their AI-smoothed writing sounds strong — and not like them

Source: The Conversation – Canada – By Nurul Hassan Mohammad, PhD Candidate, Ontario Institute for Studies in Education, University of Toronto

Generative AI has become a part of everyday student life in Canada. While institutions focus on misconduct and detection, a deeper shift is happening, one that concerns identity.

A recent KPMG Canada report finds that 73 per cent of students use generative AI for schoolwork, and nearly half say it is their “first instinct.” Just as significant, many students report feeling uneasy, worried that their use may be seen as cheating.

The study is based on a survey of 684 university, college, vocational and high school students within a larger sample of 3,804 Canadians (aged 18+), on how people are adopting generative AI.

In my doctoral research on STEM education in Ontario colleges, I’m exploring how AI is transforming not only how students write but also how they perceive voice, legitimacy and what it means to be themselves.

Academic policies can define what constitutes cheating, but they do not address a more subtle concern: if AI helped write my assignment, will I still be seen as capable, and will my work represent me?




Read more:
What are the key purposes of human writing? How we name AI-generated text confuses things


Identity takes shape through writing

Writing is more than a technical skill. It is one of the primary ways students structure and elaborate ideas, demonstrate competence and position themselves as emerging professionals.

This is particularly significant in STEM, where programs are often closely linked to specific career paths. Students are expected to begin positioning themselves as future professionals through how they communicate and present knowledge.

At the same time, STEM fields are often seen as primarily technical or data-driven, with writing treated as secondary. Yet research shows that communication is central to scientific practice, shaping how knowledge is constructed, interpreted and shared.

A Black person's hands seen on a laptop keyboard.
Communication shapes how knowledge is constructed, interpreted and shared.
(Allison Shelley/EDUimages), CC BY-NC

AI is part of envisioning career paths

Even beyond this, when science students write assignments, they also undertake what social and cultural theorists describe as “identity work.”

Through writing, students build narratives that let them explore how they might belong in particular worlds or professional fields. In my research, I examine how STEM programs operate as cultural worlds with implicit rules about what counts as smart, credible and legitimate participation.

Students interpret rules and adjust how they portray themselves in their work. This identity work is shaped by prior experiences, confidence with disciplinary language and alignment between personal interests and the STEM career paths they see as being available to them. AI is now part of that process.

‘Kinda generic’

In my research, I have observed college STEM classes, taken field notes and spoken with a cohort of students multiple times over a two-year period about their work.

I often hear a version of the same concern: the AI-generated draft is technically strong, but “it does not sound like me.” This concern reflects the insight that “voice” or “sound” in writing is a signal of legitimacy.

In my collaborative work on cultivating student agency, I use the idea of “becoming alive within science education” to describe moments when students can bring more of themselves — their perspectives, ways of thinking and experiences — into how they learn and express ideas.

Yet institutions often favour more standardized forms of writing. AI can intensify this by making a fluent, generic style instantly available. For some students, this lowers barriers and supports access. For others, it feels like self-erasure.

One student put it this way:

“It’s better writing, yeah, it sounds good and helps get a better grade. But it’s kinda generic. Like anyone could’ve written it, not just me.”

This recurring pattern in the data points to a broader tension: phrasing, structure and tone in writing carry traces of identity, traces AI can smooth or erase.

How we think about ourselves

Many of us have likely noticed that AI tools can improve the quality and efficiency of writing and may also lead to more uniform outputs, reducing variation in how ideas are expressed. These concerns are echoed in education guidance.




Read more:
Slanguage: Why AI’s stylistic negation — ‘it’s not X, it’s Y’ — is both annoying and doesn’t work


UNESCO warns that AI systems can shape how knowledge is produced and expressed, raising questions about human agency and originality. Canadian policy discussions similarly highlight both the opportunities and risks of AI for student learning and authorship.

Taken together, these insights suggest that AI does more than assist human writing: it shapes how voice is expressed and how we think about ourselves.

Policy catching up

Canadian post-secondary institutions are still determining their approach to AI.

Many policies aim to balance flexibility with oversight, allowing limited AI use while emphasizing disclosure and addressing risks such as fabricated citations, bias and privacy issues.

Yet institutions also acknowledge challenges in enforcement.

As policies evolve, uncertainty remains. Students must navigate what is permitted, what constitutes their work and whether it truly reflects who they are.

STEM and belonging

In Canada, participation in STEM fields remains uneven across gender and other social dimensions such as race, Indigenous identity, socioeconomic status and immigrant background.

Many students already question whether they belong, making recognition deeply consequential.

If AI-generated writing becomes the implicit standard for “good work,” students may begin to locate competence in the tool rather than in themselves.

Students who rely on AI may question the authenticity of their success, while those who avoid it may feel at a disadvantage.

What can educators do?

Rethinking learning design is important. Students should not have to guess what is acceptable. Assessments should focus on the process that makes students’ thinking visible, not just the product.

Significantly, writing in one’s own voice must be treated as a skill worth developing.




Read more:
ChatGPT is in classrooms. How should educators now assess student learning?


In practice, this can be as simple as asking students to explain how they used AI in an assignment, or compare an AI-generated paragraph with their own and discuss what changed in tone, clarity and reasoning.

Instructors might also ask students to revise AI-polished text so it reflects their own thinking, or to identify where their interpretation and uncertainty matter. These and other small shifts help foreground not only what students produce but also how they think and position themselves in their work.

AI is here to stay. The question is whether STEM classrooms will help students use these tools without losing their voice, their agency and their sense of belonging.

The Conversation

Nurul Hassan Mohammad does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. College students are noticing their AI-smoothed writing sounds strong — and not like them – https://theconversation.com/college-students-are-noticing-their-ai-smoothed-writing-sounds-strong-and-not-like-them-279436

Daycares, schools… public buildings face the challenge of less-polluting cleaning

Source: The Conversation – France (in French) – By Guillaume Christen, Associate Professor at the Université de Strasbourg, PhD in environmental sociology, Université de Strasbourg

The biocides used in cleaning products, in particular for disinfecting public buildings, contribute to water pollution. Alternatives exist, yet changing practices is difficult. The reasons: poor awareness of the risks within the industry, and the way the work is currently organized.


We handle products containing biocides every day, for example for housecleaning, personal hygiene and pest control. In urban areas, their use has spread to many activities: construction, cleaning, maintenance of green spaces and so on. They are used, for example, to protect facade coatings against moss and mould, or for domestic purposes such as hygiene and building upkeep (floors, sanitary facilities…).

Yet the use of these substances contributes to degrading water quality. The solutions usually considered favour treating micropollutants after use. However, wastewater treatment plants retain pollutant molecules only partially.

This finding invites us to rethink current uses and to study the possibility of reduction at the source. But giving up biocides is no simple matter. It is an environmental issue that the public identifies poorly, and it requires changes in practice that are liable to run into forms of inertia.

The question of giving up biocides for disinfection (bactericides, virucides, fungicides) is particularly sensitive. This is all the more true in early-childhood care and the maintenance of municipal buildings.

Current practices for cleaning public buildings

To understand these issues, we carried out a sociological survey across the Strasbourg Eurometropolis, based on some 30 semi-structured interviews with actors along the cleaning industry’s chain, in particular cleaning staff, technical managers and suppliers.

We built on the idea of reducing biocides at the source, drawing on the concept of “ecological redirection”. The aim is not simply to optimize existing practices, but to abandon the use of certain products liable to contain harmful biocides.

In the survey, we sought to understand how cleaning professionals can innovate differently, using fewer chemical products, either by considering alternatives such as steam disinfection or by favouring more natural compounds such as lactic acid.




Read more:
How can we limit the use of biocidal products in facade renders?


Des micropolluants omniprésents, en particulier les ammoniums quaternaires

Les biocides, dont l’étymologie désigne l’action de neutraliser « les vivants indésirables », sont qualifiés de « micropolluants » en raison de leur faible concentration dans les systèmes aquatiques urbains. Bien que présents en petite quantité, leur impact toxicologique notable sur l’environnement et la santé humaine nécessite notre attention.

Les sources sont, en effet, nombreuses : traitements antibiotiques, PFAS, peintures de façade ou encore désinfectants domestiques, objet de notre propos.

Afin d’assurer la désinfection de certaines surfaces (sols, tables, sanitaires, points de contact), professionnels et particuliers ont recours à l’utilisation de biocides nommés « ammoniums quaternaires » (ou « QUATS »). Or, leur persistance dans l’environnement est préoccupante. Bien qu’il existe d’autres méthodes de désinfection (acide lactique, nettoyage à la vapeur), l’usage de QUATS est la pratique dominante, en partie en raison de leur utilisation reconnue dans le milieu médical. S’il n’est pas possible de quantifier le volume de QUATS introduits dans l’environnement, leur prédominance dans les usages en fait un sujet de santé environnementale, dans une démarche One Health).

Since cleaning is a municipal responsibility, a commune can choose to give up these compounds. However, the "biocide" risk is poorly identified by the various actors along the chain, which makes it harder to pass on to the cleaning staff who handle these products every day.

A risk made invisible by the "sector effect"

There is indeed a "sector effect" whereby trust is delegated to specialized actors. This creates a distance that can give rise to a "hindered capacity" among professionals to voice environmental concerns about the use of a product.

This shows up as ignorance of the composition of the products used, all along the chain of actors. The loss of knowledge begins with the suppliers, who fail to pass the information on to technical managers, who in turn fail to pass it on to cleaning staff. After use, the water containing cleaning products is collected and treated in wastewater treatment plants. This discreet management obscures the fate of cleaning water and keeps the issue out of everyday awareness.

Against this backdrop, technical cleaning services, particularly municipal ones, do not make product composition a selection criterion. They trust the supplier-manufacturers, as a coordinator of cleaning staff at a school explains:

"We're the user, we're not the manufacturer, so we come in at the end of the chain. We come in and say: 'I want a product for cleaning.' Then it's the designer who says: 'Well, for this task you need such-and-such a dosage, you need this and that.' That's not my concern."

As key players in the sector, manufacturers and suppliers help define cleaning practices. Suppliers sell not only cleaning products but also recommendations for their application (type of surface, frequency of use, preparation, dilution dosage, cleaning method), which cleaning staff follow.

Professionals thus lose the "trace" and the "memory" of the chemical composition of cleaning products and of their possible impacts on the environment.

Alternatives waiting to be adopted

Disinfecting with harmful biocides (quats) is not inevitable, however. Lactic acid and steam are two alternatives already used by some professionals committed to a broader ecological approach. They embody the idea of innovating through "withdrawal," encouraging practitioners to "do without" (steam) or "do with less" (lactic acid).

These alternatives, which innovate without adding new technology, nevertheless struggle to inspire trust in a context where technical innovation is seen as the legitimate frame of reference. The reluctance stems from the fact that effectiveness is the main criterion used to judge the quality of a cleaning protocol. According to the Sinner circle, that effectiveness depends on four factors: contact time, mechanical action, temperature and the chemistry used.

The Sinner circle describes different cleaning solutions in terms of the role played by four components: time, mechanical action, temperature and chemistry.
ManonM12/Wikimédia

Using a less "aggressive" chemistry means increasing at least one of the other three factors. That, in turn, means changing long-established practices or even the organization of work itself (schedules, machine purchases). In other words, for our interviewees, simplifying the chemistry of cleaning means complicating day-to-day organization, which holds back the adoption of alternatives.

Giving up biocides for hygiene requires identifying the main actors involved, their level of awareness of the impacts (particularly on health and on water) and the obstacles to using alternatives. Restoring clean, pollution-free urban water is also an adaptation issue in a context of ecological crisis: its reuse is a major lever in the face of climate hazards such as heatwaves and droughts.


ReactiveCity is funded by the Interreg VI Upper Rhine program. The research brings together researchers from the hydrology department of the Albert Ludwig University of Freiburg (Germany), the Working Group on Aquatic Functional Ecotoxicology (University of Koblenz-Landau), the Institute of Sustainable Chemistry and Environmental Chemistry (Leuphana University Lüneburg), the Institut Terre et environnement de Strasbourg (Ites, project lead) and the Sociétés, acteurs, gouvernement en Europe laboratory (University of Strasbourg).

The Conversation

Guillaume Christen received funding from the European Union under the Interreg 6 Upper Rhine project "ReactiveCity: toward a proactive, biocide-free city" (Sept. 2023 – Aug. 2027).

Louise Negri received funding from the European Union under the Interreg 6 Upper Rhine project "ReactiveCity: toward a proactive, biocide-free city" (Sept. 2023 – Aug. 2027).

Philippe Hamman received funding from the European Union under the Interreg 6 Upper Rhine project "ReactiveCity: toward a proactive, biocide-free city" (Sept. 2023 – Aug. 2027).

ref. Crèches, écoles… les bâtiments publics au défi d’un nettoyage moins polluant – https://theconversation.com/creches-ecoles-les-batiments-publics-au-defi-dun-nettoyage-moins-polluant-278630

Protestant leaders once championed birth control – not to liberate women, but as part of ‘responsible parenthood’

Source: The Conversation – USA (3) – By Samira Mehta, Associate Professor of Women and Gender Studies & Jewish Studies, University of Colorado Boulder

Birth control pills have helped American women control their own bodies, but that wasn’t the main reason for religious leaders’ support. Hulton Archive/Getty Images

Mother’s Day seems like a strange time to celebrate birth control, which, on its most basic level, is about helping people to not become mothers – or not become mothers again.

But in the mid-20th century, much of birth control’s growing support came from attempts to support American women not as feminists, but as mothers. This is the story that I focus on in my 2026 book, “God Bless the Pill: The Surprising History of Contraception and Sexuality in American Religion.” Many religious leaders and U.S. politicians were looking for ways to strengthen the nuclear family, based around a homemaker mother and working father. Expanding legal access to contraception served as a way to make that happen.

Thought leaders who pushed to make birth control more available did not necessarily do so out of a desire to help women control their own bodies. They wanted to protect children and families and believed they were stronger when parents, particularly mothers, could devote intensive time to raising their children – ideally full time. Those views dovetailed with both political needs and Protestant beliefs of the moment.

‘Nuclear Family in the Nuclear Age’

The Cold War may have sprung from geopolitics and nuclear fears, but it was also a form of culture war, with American politicians pitting images of a “godly” United States against “godless communism.”

The nuclear family was a central piece of that propaganda. As historian Elaine Tyler May wrote, politicians, journalists and other public figures trumpeted the ideal of a mother, father and their children living in their own home: the “nuclear family in the nuclear age.” In their depiction, the American family was based on a sexually charged marriage between a beautiful – and fashionable – homemaker mother and a handsome father who could provide for his white, middle-class family.

Birth control made it easier for families to run a household on just one income – many religious leaders’ ideal.
H. Armstrong Roberts/ClassicStock/Getty Images

This idealized family could own a suburban home, one or two cars, and a constantly revolving selection of modern conveniences. Mothers were expected to invest in their appearance, presenting fathers with a delectable wife when they came home from work – plus a sparkling house and a home-cooked meal. In theory, this perfect mother had time, emotional energy and economic resources to parent their children in a very hands-on way.

Some middle- and upper-class Americans could afford this lifestyle, but it was out of reach for many, including many families that were not white. In addition, as Betty Friedan, one of the mothers of second-wave feminism, would articulate in “The Feminine Mystique,” many women who did live that life were not actually happy. That said, the idealized family was a central piece of American rhetoric in the middle of the 20th century – as was religion.

In the 1950s, more Americans attended church and synagogue than in any other decade that century. Around World War II, American public figures began invoking the phrase "Judeo-Christian" to describe the country – a belated nod to Catholic and Jewish citizens in the still mostly Protestant nation. Nuclear families' faith was considered a key piece of American defense against a "godless" Soviet Union.

American propaganda contrasted these ideal U.S. families against a vision of communism in which both parents worked. Soviet families were depicted in apartments with a shared kitchen and bathroom down the hall, without the material wonders of capitalism – from a brand new Frigidaire to a Kitchen Aid stand mixer and a Cadillac in the driveway.

In U.S. political rhetoric, the American family lived in technicolor, and the Soviet family lived in black and white.

‘Responsible parenthood’

But affording that vision of the American dream would be easier with fewer children.

Basic birth control methods had been part of American life for a long time – as evidenced by a declining birth rate among the middle and upper class, starting in the middle of the 19th century. “Scientific” birth control that required medical visits, such as diaphragms, had been around since the early 20th century.

Women with children outside the first birth control clinic in the U.S., in Brooklyn, New York, in 1916.
Circa Images/GHI/Universal History Archive via Getty Images

Diaphragms became more accepted, and in 1936, a U.S. appeals court formally classified birth control as medical equipment. The birth control pill, which had been developed throughout the 1950s, was formally approved by the Food and Drug Administration in 1960.

Different Protestant denominations had slowly come to accept birth control, though the Catholic Church remained staunchly opposed to all except the rhythm method. Contraception turned procreation into a new place where Christians could live morally: not having more children than they could afford, nurture, educate and raise with knowledge of God. Denominational statements from groups as diverse as the Lutherans and the Quakers articulated a Christian form of planned parenthood that they would call “responsible parenthood.” In many ways, it was primarily about motherhood.

In 1960, the Rev. Richard Fagley published “Population Explosion and Christian Responsibility,” the first pan-Protestant theory of responsible parenthood. Fagley, a Congregational minister, called the medical knowledge that led to the contraceptive pill “a liberating gift from God, to be used to the glory of God, in accordance with his will for men.” He went on to say that godly scientific knowledge “affects deeply the size of the family … and therefore has created a new area for responsible decisions.”

While Fagley was the first person to collect various denominations’ views into a cohesive theology, his position represented a Protestant consensus, and his argument was adopted by the National Council of Churches the following year.

Birth control, in this formulation, was not about being child-free, or being able to engage in sex outside marriage. Rather, it allowed couples to decide, prayerfully, how many children they could have, and when they would have them. “Responsible parenthood” framed family size around “Christian duty.”

‘Mother-wife’

The theology of responsible parenthood makes clear that it is not about feminist autonomy for women.

For instance, when the National Council of Churches released a statement on responsible parenthood, the reasons listed for limiting the number of children in the family included “The right of the child to be wanted, loved, cared for, educated, and trained in the ‘discipline and instruction of the Lord’ (Eph. 6:4). The rights of existing children to parental care have a proper claim.” In the 1960s, the person assumed to do the majority of the work to raise a child was the mother.

Mid-century ideals for women imagined them as full-time mothers and homemakers.
Lambert/Hulton Archive/Getty Images

Religious leaders’ rationale included concern for the woman herself, but in her role as “the mother-wife,” as the statement said – framing women in relationship to the men and children in their lives. Birth control was important inasmuch as it preserved her body and mind to fill those roles. And the occasion for more widespread acceptance of “responsible parenthood” was the advent of the birth control pill, for which women were primarily responsible.

In other words, birth control gained acceptance as a way to perfect married motherhood. But in 1972, the Supreme Court case Eisenstadt v. Baird extended the right to contraception from married people to single people, including teenage girls.

The religious consensus supporting birth control soon fractured among evangelicals and other conservative Protestants. Not only did they start to see birth control as supporting sex outside of marriage, but also as undermining a mother’s moral guidance of her daughters, who could now access contraception without parental consent. Many more liberal Protestants got quieter as well.

That early, vocal support for birth control has come back in recent years. Battles over the Affordable Care Act and the Supreme Court’s 2022 Dobbs v. Jackson decision have caused liberal Protestant denominations to reaffirm their commitment to reproductive healthcare, including birth control and abortion. That commitment has a long history – even if it is not a strictly feminist history.

The Conversation

Samira Mehta receives funding from the Henry Luce Foundation.

ref. Protestant leaders once championed birth control – not to liberate women, but as part of ‘responsible parenthood’ – https://theconversation.com/protestant-leaders-once-championed-birth-control-not-to-liberate-women-but-as-part-of-responsible-parenthood-280980