How poisoned data can trick AI – and how to stop it

Source: The Conversation – USA – By M. Hadi Amini, Associate Professor of Computing and Information Sciences, Florida International University

Data poisoning can make an AI system dangerous to use, potentially posing threats such as chemically poisoning a food or water supply. ArtemisDiana/iStock via Getty Images

Imagine a busy train station. Cameras monitor everything, from how clean the platforms are to whether a docking bay is empty or occupied. These cameras feed into an AI system that helps manage station operations and sends signals to incoming trains, letting them know when they can enter the station.

The quality of the information that the AI offers depends on the quality of the data it learns from. If everything is happening as it should, the systems in the station will provide adequate service.

But if someone tries to interfere with those systems by tampering with their training data – either the initial data used to build the system or data the system collects as it’s operating to improve – trouble could ensue.

An attacker could use a red laser to trick the cameras that determine when a train is coming. Each time the laser flashes, the system incorrectly labels the docking bay as “occupied,” because the laser resembles a brake light on a train. Before long, the AI might interpret this as a valid signal and begin to respond accordingly, delaying other incoming trains on the false rationale that all tracks are occupied. An attack like this, which corrupts what the system believes about the status of the tracks, could even have fatal consequences.

We are computer scientists who study machine learning, and we research how to defend against this type of attack.

Data poisoning explained

This scenario, where attackers intentionally feed wrong or misleading data into an automated system, is known as data poisoning. Over time, the AI begins to learn the wrong patterns, leading it to take actions based on bad data. This can lead to dangerous outcomes.

In the train station example, suppose a sophisticated attacker wants to disrupt public transportation while also gathering intelligence. For 30 days, they use a red laser to trick the cameras. Left undetected, such attacks can slowly corrupt an entire system, opening the way for worse outcomes such as backdoor attacks into secure systems, data leaks and even espionage. While data poisoning in physical infrastructure is rare, it is already a significant concern in online systems, especially those powered by large language models trained on social media and web content.

A famous example of data poisoning in the field of computer science came in 2016, when Microsoft debuted a chatbot known as Tay. Within hours of its public release, malicious users online began feeding the bot reams of inappropriate comments. Tay soon began parroting the same inappropriate terms as users on X (then Twitter), horrifying millions of onlookers. Within 24 hours, Microsoft had disabled the tool, and it issued a public apology soon after.

The social media data poisoning of the Microsoft Tay model underlines the vast distance that lies between artificial and actual human intelligence. It also highlights the degree to which data poisoning can make or break a technology and its intended use.

Data poisoning might not be entirely preventable. But there are commonsense measures that can help guard against it, such as placing limits on data processing volume and vetting data inputs against a strict checklist to keep control of the training process. Mechanisms that can detect poisoning attacks before they grow too powerful are also critical for reducing their effects.
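
To make the checklist idea concrete, here is a minimal Python sketch of what vetting inputs and capping processing volume might look like. The field names, label set, trusted-camera list and cap are illustrative assumptions, not details of any real deployment.

```python
# Illustrative pre-training vetting: cap the batch size and accept only
# samples that clear a strict checklist. All names and bounds are assumptions.
TRUSTED_CAMERAS = {"cam-01", "cam-02", "cam-03"}
MAX_BATCH = 1_000  # crude limit on data processing volume

def passes_checklist(sample: dict) -> bool:
    """Accept a labeled camera reading only if it clears basic sanity checks."""
    return (
        sample.get("source") in TRUSTED_CAMERAS            # vetted origin only
        and sample.get("label") in {"occupied", "empty"}   # known labels only
        and 0.0 <= sample.get("confidence", -1.0) <= 1.0   # sane confidence
    )

incoming = [
    {"source": "cam-01", "label": "occupied", "confidence": 0.97},
    {"source": "unknown", "label": "occupied", "confidence": 0.99},  # rejected
]
capped = incoming[:MAX_BATCH]  # enforce the volume limit
training_batch = [s for s in capped if passes_checklist(s)]
print(f"accepted {len(training_batch)} of {len(incoming)} samples")
```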

Fighting back with the blockchain

At Florida International University’s solid lab, we are working to defend against data poisoning attacks by focusing on decentralized approaches to building technology. One such approach, known as federated learning, allows AI models to learn from decentralized data sources without collecting raw data in one place. Centralized systems have a single point of failure vulnerability, but decentralized ones cannot be brought down by way of a single target.

Federated learning offers a valuable layer of protection, because poisoned data from one device doesn’t immediately affect the model as a whole. However, damage can still occur if the process the model uses to aggregate data is compromised.

This is where another more popular potential solution – blockchain – comes into play. A blockchain is a shared, unalterable digital ledger for recording transactions and tracking assets. Blockchains provide secure and transparent records of how data and updates to AI models are shared and verified.

By using automated consensus mechanisms, AI systems with blockchain-protected training can validate updates more reliably and help identify the kinds of anomalies that sometimes indicate data poisoning before it spreads.
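
As a rough illustration of this kind of update validation, the Python sketch below flags client updates that sit far from the coordinate-wise median before averaging them into the shared model. It is a generic robust-aggregation heuristic under our stated assumptions, not the solid lab tool described later in this article, and the z-score threshold is arbitrary.

```python
# Generic robust aggregation for federated averaging: discard client
# updates that look like statistical outliers before averaging.
# Illustrative sketch only; the threshold is an arbitrary assumption.
import numpy as np

def filter_and_average(client_updates, z_threshold=2.5):
    """Average client updates after dropping far-from-median outliers."""
    updates = np.stack(client_updates)                # (n_clients, n_params)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)  # one distance per client
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    keep = z < z_threshold                            # large z => likely outlier
    if (~keep).any():
        print("Flagged for review:", np.where(~keep)[0].tolist())
    return updates[keep].mean(axis=0)

# Nine honest clients near the true update, plus one poisoned outlier.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=100) for _ in range(9)]
poisoned = [np.full(100, 5.0)]
new_global_update = filter_and_average(honest + poisoned)
```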

Blockchains also have a time-stamped structure that allows practitioners to trace poisoned inputs back to their origins, making it easier to reverse damage and strengthen future defenses. Blockchains are also interoperable – in other words, they can “talk” to each other. This means that if one network detects a poisoned data pattern, it can send a warning to others.

At solid lab, we have built a new tool that leverages both federated learning and blockchain as a bulwark against data poisoning. Other solutions are coming from researchers who are using prescreening filters to vet data before it reaches the training process, or simply training their machine learning systems to be extra sensitive to potential cyberattacks.

Ultimately, AI systems that rely on data from the real world will always be vulnerable to manipulation. Whether it’s a red laser pointer or misleading social media content, the threat is real. Using defense tools such as federated learning and blockchain can help researchers and developers build more resilient, accountable AI systems that can detect when they’re being deceived and alert system administrators to intervene.

The Conversation

M. Hadi Amini has received funding for researching security of transportation systems from U.S. Department of Transportation. Opinions expressed represent his personal or professional opinions and do not represent or reflect the position of Florida International University.

This work was partly supported by the National Center for Transportation Cybersecurity and Resiliency (TraCR). Any opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of TraCR, and the U.S. Government assumes no liability for the contents or use thereof.

Ervin Moore has received funding for researching security of transportation systems from U.S. Department of Transportation. Opinions expressed represent his personal or professional opinions and do not represent or reflect the position of Florida International University.

This work was partly supported by the National Center for Transportation Cybersecurity and Resiliency (TraCR). Any opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of TraCR, and the U.S. Government assumes no liability for the contents or use thereof.

ref. How poisoned data can trick AI – and how to stop it – https://theconversation.com/how-poisoned-data-can-trick-ai-and-how-to-stop-it-256423

Spiderweb silks and architectures reveal millions of years of evolutionary ingenuity

Source: The Conversation – USA – By Ella Kellner, Ph.D. Student in Biological Sciences, University of North Carolina – Charlotte

An orchard orb weaver spider rests in the center of her web. Daniela Duncan/Moment via Getty Images

Have you ever walked face-first into a spiderweb while on a hike? Or swept away cobwebs in your garage?

You may recognize the orb web as the classic Halloween decoration or cobwebs as close neighbors with your dust bunnies. These are just two among the many types of spiderweb architectures, each with a unique structure specially attuned to the spider’s environment and the web’s intended job.

While many spiders use their webs to catch prey, they have also evolved unusual ways to use their silk, from wrapping their eggs to acting as safety lines that catch them when they fall.

As a materials scientist who studies spiders and their silks, I am curious about the relationship between spiderweb architecture and the strength of the silks spiders use. How do the design of a web and the properties of the silk used affect a spider’s ability to catch its next meal?

Webs’ ancient origins

Spider silk has a long evolutionary history. Researchers believe that it first evolved around 400 million years ago. These ancestral spiders used silk to line their burrows, protect their vulnerable eggs and create sensory paths and guidelines as they navigated their environment.

To understand what ancient spiderwebs could have looked like, scientists look to the lampshade spider. This spider lives in rock outcroppings in the Appalachian and Rocky mountains. It is a living relative of some of the most ancient spiders to ever make webs, and it hasn’t changed much at all since web-building first evolved.

A black and brown spider camouflaged over a mossy rock, with a circular, flat web around it, stuck to the rock
A lampshade spider in its distinctive web between rocks.
Tyler Brown, CC BY-SA

Aptly named for its web shape, the lampshade spider makes a web with a narrow base that widens outward. These webs fill the cracks between rocks where the spider can be camouflaged against the rough surface. It’s hard for a prospective meal to traverse this rugged landscape without being ensnared.

Web diversity

Today, all spider species produce silk. Each species creates its own specific web architecture that is uniquely suited to the type of prey it eats and the environment it lives in.

Take the orb web, for example. These are aerial, two-dimensional webs featuring a distinctive spiral. They mostly catch flying or jumping prey, such as flies and grasshoppers. Orb webs are found in open areas, such as on treelines, in tall grasses or between your tomato plants.

Image of a black spider spinning an irregular web
A black widow spider builds three-dimensional cobwebs.
Karen Sloane-Williams/500Px Plus via Getty Images

Compare that to the cobweb, a structure that is most often seen by the baseboards in your home. While the term cobweb is commonly used to refer to any dusty, abandoned spiderweb, it is actually a specific web shape typically designed by spiders in the family Theridiidae. This spiderweb has a complex, three-dimensional architecture. Lines of silk extend downwards from the 3D tangle and are held affixed to the ground under high tension. These lines act as a sticky, spring-loaded booby trap to capture crawling prey such as ants and beetles. When an insect makes contact with the glue at the base of the line, the silk detaches from the ground, sometimes with enough force to lift the meal into the air.

Watch a redback spider build the high-tension lines of a cobweb and ensnare unsuspecting ants.

Web weirdos

Imagine you are an unsuspecting beetle, navigating your way between strands of grass when you come upon a tightly woven silken floor. As you begin to walk across the mat, you see eight eyes peeking out of a silken funnel – just before you’re quickly snatched up as a meal.

Spiders such as funnel-web weavers construct thick silk mats on the ground that they use as an extension of their sensory systems. The spider waits patiently in its funnel-shaped retreat. Prey that come in contact with the web create vibrations that alert the spider a tasty treat is walking across the welcome mat and it’s time to pounce.

A light-brown spider facing the camera, with a funnel shaped web surrounding it
A funnel-web spider peeks out of its web in the ground.
sandra standbridge/Moment via Getty Images

Jumping spiders are another group of unusual silk spinners. They are well known for their varied colorations, elaborate courtship dances and for being some of the most charismatic arachnids. Their cuteness has made them popular, thanks to Lucas the Spider, an adorable cartoon jumping spider animated by Joshua Slice. With two huge front eyes giving them depth perception, these spiders are fantastic hunters, capable of jumping in any direction to navigate their environment and hunt.

But what happens when they misjudge a jump, or worse, need to escape a predator? Jumpers use their silk as a safety tether to anchor themselves to surfaces before leaping through the air. If the jump goes wrong, they can climb back up their tether, allowing them to try again. Not only does this safety line of silk give them a chance for a redo, it also helps with making the jump. The tether helps them control the direction and speed of their jump in midair. By changing how fast they release the silk, they can land exactly where they want to.

A brown spider with green iridescence in mid-air, tethered to a leaf behind it with a thin strand of silk
A jumping spider uses a safety tether of silk as it makes a risky jump.
Fresnelwiki/Wikimedia Commons, CC BY-SA

To weave a web

All webs, from the orb web to the seemingly chaotic cobweb, are built through a series of basic, distinct steps.

Orb-weaving spiders usually start with a proto-web. Scientists think this initial construction is an exploratory stage, when the spider assesses the space available and finds anchor points for its silk. Once the spider is ready to build its main web, it will use the proto-web as a scaffold to create the frame, spokes and spiral that will help with absorbing energy and capturing prey. These structures are vital for ensuring that their next meal won’t rip right through the web, especially insects such as dragonflies that have an average cruising speed of 10 mph. When complete, the orb weaver will return to the center of the web to wait for its next meal.

Such diverse webs can’t all be made from a single material. In fact, spiders can create up to seven types of silk, and orb weavers make them all. Each silk type has different material and mechanical properties, serving a specific use within the spider’s life. All spider silk is created in silk glands, with each type of silk produced by its own specialized gland.

A pale brown spider at the center of its spiral-patterned orb web
A European garden spider builds a two-dimensional orb web.
Massimiliano Finzi/Moment via Getty Images

Orb weavers rely on the stiff nature of the strongest fibers in their arsenal for framing webs and as a safety line. Conversely, the capture spiral of the orb web is made with extremely stretchy silk. When a prey item gets caught in the spiral, the impact pulls on the silk lines. These fibers stretch to dissipate the energy to ensure the prey doesn’t just tear through the web.

Spider glue is a modified silk type with adhesive properties and the only part of the spiderweb that is actually sticky. This gluey silk, located on the capture spiral, helps make sure that the prey stays stuck in the web long enough for the spider to deliver a venomous bite.

To wrap up

Spiders and their webs are incredibly varied. Each spider species has adapted to live within its environmental niche and capture certain types of prey. Next time you see a spiderweb, take a moment to observe it rather than brushing it away or squishing the spider inside.

Notice the differences in web structure, and see whether you can spot the glue droplets. Look for the way that the spider is sitting in its web. Is it currently eating, or are there discarded remains of the insects it has prevented from wandering into your home?

Observing these arachnid architects can reveal a lot about design, architecture and innovation.

The Conversation

Ella Kellner does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Spiderweb silks and architectures reveal millions of years of evolutionary ingenuity – https://theconversation.com/spiderweb-silks-and-architectures-reveal-millions-of-years-of-evolutionary-ingenuity-261928

Vasectomy, pain and regret: what the online forum Reddit reveals about men’s experiences

Source: The Conversation – in French – By Kevin Pimbblet, Professor and Director of the Centre of Excellence for Data Science, AI and Modelling, University of Hull

Vasectomy has long been considered a permanent, safe and effective method of contraception. Among its advantages, it is often noted to be minimally invasive and free of major risks.

But that may not be the whole story.

In recent years, the vasectomy rate in the United Kingdom has fallen considerably. This trend is surprising, given that the effectiveness of the procedure has not changed. What has likely changed is the way men talk about it. Not in doctors’ offices, but online.

As an AI researcher working with large-scale public data, I led a study in 2025 using natural language processing (NLP) – a branch of artificial intelligence that analyzes patterns in human language – to examine thousands of posts on r/vasectomy and r/postvasectomypain, two subreddits (topic-based discussion forums) on Reddit, a social media platform where users share and comment on content within topical communities.

My goal was not to pass judgment on urology (it is not my specialty), but to explore the emotional tone and self-reported outcomes in digital spaces where users express themselves frankly and in real time.

The results are revealing, and they raise important questions about informed consent, online health discourse and the growing influence of social data on health communication.

Fear, regret and pain?

The most common emotional reaction to vasectomy, whether it is being considered or has already been performed, is fear. To assess this, we used a tool called NRCLex, a crowd-trained emotion classifier. It is an AI model trained on thousands of labeled examples to detect the emotional tone of a text. It revealed that “fear” dominated more than 70% of the user-generated content.
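
For readers curious about the mechanics, here is a minimal sketch of this kind of lexicon-based emotion scoring in Python, using the open-source NRCLex package. The sample posts are invented placeholders, not data from the study, and the keyword tally is a simplified stand-in for the counting analysis described below.

```python
# Minimal emotion scoring with NRCLex (pip install nrclex).
# Sample posts are invented placeholders, not study data.
from collections import Counter

from nrclex import NRCLex

sample_posts = [
    "Is the pain bad? I am scared I will regret this.",
    "Six months out and I still have aching pain most days.",
    "Quick recovery for me, no regrets at all.",
]

dominant = Counter()   # which emotion tops each post
keywords = Counter()   # crude keyword counts, as in the study

for post in sample_posts:
    scores = NRCLex(post).affect_frequencies  # emotion -> relative frequency
    # Drop the generic positive/negative polarity buckets.
    emotions = {k: v for k, v in scores.items()
                if k not in ("positive", "negative")}
    if emotions:
        dominant[max(emotions, key=emotions.get)] += 1
    lowered = post.lower()
    for word in ("pain", "regret", "month", "year"):
        if word in lowered:
            keywords[word] += 1

print("Dominant emotion per post:", dict(dominant))
print("Posts mentioning each keyword:", dict(keywords))
```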

This is not surprising. Men on Reddit ask questions such as “How bad is the pain?”, “How long does it last?” and “Will I regret it?” These concerns are not rare: They are at the heart of the conversation.

While the overall sentiment analysis shows that most users report positive outcomes, a significant minority express deep regret and persistent pain, sometimes lasting years after the operation.

This pain is often described as post-vasectomy pain syndrome (PVPS), a relatively little-known condition characterized by new or chronic scrotal pain that persists for more than three months after the procedure.

PVPS is poorly understood and can have several causes: some anatomical, some neurological and others still unknown. Although some health authorities describe it as “rare,” our Reddit data suggests it may be more common, or at least more disruptive, than is currently believed.

We analyzed more than 11,000 Reddit posts and found that the word “pain” appeared in more than 3,700 of them, roughly one-third. In many cases, the pain described persisted well beyond the expected recovery period. The word “months” appeared in nearly 900 pain-related posts, while the word “years” appeared in more than 600.

Surprising findings

This is remarkable. Postoperative pain is generally expected to fade within days or weeks. Yet our dataset suggests that 6% to 8% of Reddit users discussing vasectomy report longer-term discomfort, a rate that matches the highest estimates from urological studies. More recent research, including a large-scale postoperative study, holds that the incidence is probably much lower, perhaps below 1%.

Of course, we must stress that these are self-reported experiences. Not every mention of “pain” corresponds to a formal diagnosis of PVPS. It is also important to recognize that people who are dissatisfied with a medical procedure are generally more inclined to talk about it online, a well-known bias in social data. Even so, the volume, consistency and emotional intensity of these posts suggest that the issue deserves close attention from clinicians and researchers.

More striking still, about 2% of posts mention both “pain” and “regret,” implying serious, potentially life-altering consequences for a small but significant group of people.

On r/postvasectomypain, a subreddit dedicated to discussing PVPS, the tone is even graver. Unsurprisingly, 74% of posts describe persistent, chronic pain. In addition, 23% mention pain during sex and 27% report changes in sensitivity.

Posts on this forum also refer far more often to vasectomy reversal surgery than to more specialized interventions such as microsurgical denervation: a complex nerve-removal procedure used in severe cases of chronic testicular pain, typically when other treatments have failed.

From AI to andrology: an ethical crossroads

Why is a professor of AI and physics analyzing pain on urology forums?

Because in today’s digital world, people increasingly turn to online platforms such as Reddit for health advice, peer support and help with decision-making, often before consulting a doctor. As an AI researcher, I believe we have a responsibility to examine how these discussions shape public understanding and what they can teach us about real-world health care challenges.

In this particular case, it is possible that the decline in vasectomy uptake is linked, at least in part, to the open, emotional sharing of negative outcomes online. These posts are not alarmist. They are detailed, candid and often very precise. They are a kind of real-world data that clinical trials and formal studies do not always capture.

What should we take away from all this?

Terms such as “rare,” often used in consent forms and clinical conversations, can obscure the complexity and variability of patient outcomes. Pain after a vasectomy, whether mild, temporary, chronic or disabling, appears common enough to warrant more transparent and nuanced communication.

This is not an argument against vasectomy. It remains a safe, effective and empowering option. But truly informed consent must reflect both the clinical literature and the experiences of the people who undergo the procedure, especially now that those experiences are publicly accessible in large numbers.

In a world where online forums serve as health diaries, support networks and informal research registries, we need to take them seriously. Medical language matters. Terms such as “rare,” “uncommon” or “low risk” carry real emotional and moral weight. They shape expectations and influence decisions.

If even a small percentage of men experience long-term pain after a vasectomy, that risk should be communicated clearly, in plain language, ideally with a range of percentages drawn from published studies.

La Conversation Canada

Kevin Pimbblet currently receives funding from the STFC, the EPSRC, the British Academy, the Royal Astronomical Society, the British Ecological Society and the Office for Students. None of these bodies is directly connected with this work.

ref. Vasectomy, pain and regret: what the online forum Reddit reveals about men’s experiences – https://theconversation.com/vasectomie-douleur-et-regrets-ce-que-le-forum-en-ligne-reddit-revele-sur-lexperience-des-hommes-262413

How to improve the monitoring of chemical contaminants in the human body

Source: The Conversation – France – By Chang He, Professor of environmental sciences, The University of Queensland

From pesticides in our food to hormone disruptors in our kitchen pans, modern life is saturated with chemicals, exposing us to unknown long-term health impacts.

One of the surest routes to quantifying these impacts is the scientific method of biomonitoring, which consists of measuring the concentration of chemicals in biological specimens such as blood, hair or breastmilk. These measurable indicators are known as biomarkers.

Currently, very few biomarkers are available to assess the impact of chemicals on human health, even though 10 million new substances are developed and introduced to the market each year.

My research aims to bridge this gap by identifying new biomarkers of chemicals of emerging concern in order to assess their health effects.

What makes a good biomarker

One of the difficulties of biomonitoring is that once absorbed into our bodies, chemical pollutants are typically processed into one or more breakdown substances, known as metabolites. As a result, many chemicals slip under the radar.

In order to understand what happens to a chemical once it has entered a living organism, researchers can use various techniques, including approaches based on computer modelling (in silico models), tests carried out on cell cultures (in vitro approaches), and animal tests (in vivo) to identify potential biomarkers.

The challenge is to find biomarkers that allow us to draw a link between contamination by a toxic chemical and the potential health effects. These biomarkers may be the toxic product itself or the metabolites left in its wake.

But what is a “good” biomarker? In order to be effective in human biomonitoring, it must meet several criteria.

First, it should directly reflect the type of chemical to which people are exposed. This means it must be a direct product of the chemical and help pinpoint the level of exposure to it.

Second, a good biomarker should be stable enough to be detectable in the body for a sufficient period without further metabolization. This stability ensures that the biomarker can be measured reliably in biological samples, thus providing an accurate assessment of exposure levels.

Third, a good biomarker should enable precise evaluation. It must be specific to the chemical of interest without interference from other substances. This specificity is critical for accurately interpreting biomonitoring data and making informed decisions about health risks and regulatory measures.

Two examples of ‘bad’ biomarkers

One example of a “bad” biomarker involves the diester metabolites of organophosphate esters. These compounds are high-production-volume chemicals widely used in household products as flame retardants and plasticizers, and are suspected to have adverse effects on the environment and human health.

Recent findings showed the coexistence of both organophosphate esters and their diester metabolites in the environment. This indicates that using diesters as biomarkers to estimate human contamination by organophosphate esters leads to an overestimation of exposure.

Using an inappropriate biomarker may also lead to an underestimation of the concentration of a compound. An example relates to chlorinated paraffins, persistent organic pollutants that are also used as flame retardants in household products. In biomonitoring, researchers use the original form of chlorinated paraffins due to their persistence in humans. However, their levels in human samples are much lower than those in the environment, which seems to indicate underestimation in human biomonitoring.

Recently, my team has found the potential for biodegradation of chlorinated paraffins. This could explain the difference between measurements taken in the environment and those taken in living organisms. We are currently working on the identification of appropriate biomarkers of these chemicals.

Current limitations in human biomonitoring

Despite the critical importance of biomarkers, several limitations hinder their effective use in human biomonitoring.

A significant challenge is the limited number of human biomarkers available compared to the vast number of chemicals we are exposed to daily. Existing biomonitoring programmes designed to assess contamination in humans are only capable of tracking a few hundred biomarkers at best, a small fraction of the tens of thousands of markers that environmental monitoring programmes use to report pollution.

Moreover, humans are exposed to a cocktail of chemicals daily, which can compound their adverse effects and complicate the assessment of cumulative impacts. The pathways of exposure, such as inhalation, ingestion and dermal contact, add another layer of complexity.

Another limitation of current biomarkers is the reliance on extrapolation from in vitro and in vivo models to human contexts. While these models provide valuable insights, they do not always accurately reflect human metabolism and exposure scenarios, leading to uncertainties in risk assessment and management.

To address these challenges, my research aims to establish a workflow for the systematic identification and quantification of chemical biomarkers. The goal is to improve the accuracy and applicability of biomonitoring in terms of human health.

Innovative approaches in biomarker research

We aim to develop a framework for biomarker identification that could be used to ensure that newly identified biomarkers are relevant, stable and specific.

This framework includes advanced sampling methods, state-of-the-art analytical techniques, and robust systems for data interpretation. For instance, by combining advanced chromatographic techniques, which enable the various components of a biological sample to be separated very efficiently, with highly accurate methods of analysis (high-resolution mass spectrometry), we can detect and quantify biomarkers with greater sensitivity and specificity.

This allows for the identification of previously undetectable or poorly understood biomarkers, expanding the scope of human biomonitoring.

Additionally, the development of standardized protocols for sample collection and analysis ensures consistency and reliability across different studies and monitoring programmes, which is crucial for comparing data and drawing meaningful conclusions about exposure trends and health risks.

This multidisciplinary approach will, we hope, provide a more comprehensive understanding of human exposure to hazardous chemicals. This new data could form a basis for improving prevention and adapting regulations in order to limit harmful exposure.

The Conversation

Chang He received funding from the AXA Research Fund.

ref. How to improve the monitoring of chemical contaminants in the human body – https://theconversation.com/how-to-improve-the-monitoring-of-chemical-contaminants-in-the-human-body-233255

4 out of 5 US troops surveyed understand the duty to disobey illegal orders

Source: The Conversation – USA – By Charli Carpenter, Professor of political science, UMass Amherst

National Guard members arrive at the Guard’s headquarters at D.C. Armory on Aug. 12, 2025 in Washington. Anna Moneymaker/Getty Images

With his Aug. 11, 2025, announcement that he was sending the National Guard – along with federal law enforcement – into Washington, D.C. to fight crime, President Donald Trump edged U.S. troops closer to the kind of military-civilian confrontations that can cross ethical and legal lines.

Indeed, since Trump returned to office, many of his actions have alarmed international human rights observers. His administration has deported immigrants without due process, held detainees in inhumane conditions, threatened the forcible removal of Palestinians from the Gaza Strip and deployed both the National Guard and federal military troops to Los Angeles to quell largely peaceful protests.

When a sitting commander in chief authorizes acts like these, which many assert are clear violations of the law, men and women in uniform face an ethical dilemma: How should they respond to an order they believe is illegal?

The question may already be affecting troop morale. “The moral injuries of this operation, I think, will be enduring,” a National Guard member who had been deployed to quell public unrest over immigration arrests in Los Angeles told The New York Times. “This is not what the military of our country was designed to do, at all.”

Troops who are ordered to do something illegal are put in a bind – so much so that some argue that troops themselves are harmed when given such orders. They are not trained in legal nuances, and they are conditioned to obey. Yet if they obey “manifestly unlawful” orders, they can be prosecuted. Some analysts fear that U.S. troops are ill-equipped to recognize this threshold.

We are scholars of international relations and international law. We conducted survey research at the University of Massachusetts Amherst’s Human Security Lab and discovered that many service members do understand the distinction between legal and illegal orders, the duty to disobey certain orders, and when they should do so.

A man in a blue jacket, white shirt and red tie at a lectern, speaking.
President Donald Trump, flanked by Secretary of Defense Pete Hegseth and Attorney General Pam Bondi, announced at a White House news conference on Aug. 11, 2025, that he was deploying the National Guard to assist in restoring law and order in Washington.
Hu Yousong/Xinhua via Getty Images

Compelled to disobey

U.S. service members take an oath to uphold the Constitution. In addition, under Article 92 of the Uniform Code of Military Justice and the U.S. Manual for Courts-Martial, service members must obey lawful orders and disobey unlawful orders. Unlawful orders are those that clearly violate the U.S. Constitution, international human rights standards or the Geneva Conventions.

Service members who follow an illegal order can be held liable and court-martialed or subject to prosecution by international tribunals. Following orders from a superior is no defense.

Our poll, fielded between June 13 and June 30, 2025, shows that service members understand these rules. Of the 818 active-duty troops we surveyed, just 9% stated that they would “obey any order.” Only 9% “didn’t know,” and only 2% had “no comment.”

When asked to describe unlawful orders in their own words, about 25% of respondents wrote about their duty to disobey orders that were “obviously wrong,” “obviously criminal” or “obviously unconstitutional.”

Another 8% spoke of immoral orders. One respondent wrote that “orders that clearly break international law, such as targeting non-combatants, are not just illegal — they’re immoral. As military personnel, we have a duty to uphold the law and refuse commands that betray that duty.”

Just over 40% of respondents listed specific examples of orders they would feel compelled to disobey.

The most common unprompted response, cited by 26% of those surveyed, was “harming civilians,” while another 15% of respondents gave a variety of other examples of violations of duty and law, such as “torturing prisoners” and “harming U.S. troops.”

One wrote that “an order would be obviously unlawful if it involved harming civilians, using torture, targeting people based on identity, or punishing others without legal process.”

An illustration of responses such as 'I'd disobey if illegal' and 'I'd disobey if immoral.'
A tag cloud of responses to UMass-Amherst’s Human Security Lab survey of active-duty service members about when they would disobey an order from a superior.
UMass-Amherst’s Human Security Lab, CC BY

Soldiers, not lawyers

But the open-ended answers pointed to another struggle troops face: Some no longer trust U.S. law as useful guidance.

Writing in their own words about how they would know an illegal order when they saw it, more troops emphasized international law as a standard of illegality than emphasized U.S. law.

Others implied that acts that are illegal under international law might become legal in the U.S.

“Trump will issue illegal orders,” wrote one respondent. “The new laws will allow it,” wrote another. A third wrote, “We are not required to obey such laws.”

Several emphasized the U.S. political situation directly in their remarks, stating they’d disobey “oppression or harming U.S. civilians that clearly goes against the Constitution” or an order for “use of the military to carry out deportations.”

Still, the percentage of respondents who said they would disobey specific orders – such as torture – is lower than the percentage of respondents who recognized the responsibility to disobey in general.

This is not surprising: Troops are trained to obey and face numerous social, psychological and institutional pressures to do so. By contrast, most troops receive relatively little training in the laws of war or human rights law.

Political scientists have found, however, that having information on international law affects attitudes about the use of force among the general public. It can also affect decision-making by military personnel.

This finding was also borne out in our survey.

When we explicitly reminded troops that shooting civilians was a violation of international law, their willingness to disobey increased 8 percentage points.

Drawing the line

As my research with another scholar showed in 2020, even thinking about law and morality can make a difference in opposition to certain war crimes.

The preliminary results from our survey led to a similar conclusion. Troops who answered questions on “manifestly unlawful orders” before they were asked questions on specific scenarios were much more likely to say they would refuse those specific illegal orders.

When asked if they would follow an order to drop a nuclear bomb on a civilian city, for example, 69% of troops who received that question first said they would obey the order.

But when the respondents were asked to think about and comment on the duty to disobey unlawful orders before being asked if they would follow the order to bomb, the percentage who would obey the order dropped 13 points to 56%.

While many troops said they might obey questionable orders, the large number who would not is remarkable.

Military culture makes disobedience difficult: Soldiers can be court-martialed for obeying an unlawful order, or for disobeying a lawful one.

Yet between one-third and one-half of the U.S. troops we surveyed would be willing to disobey if ordered to shoot or starve civilians, torture prisoners or drop a nuclear bomb on a city.

The service members described the methods they would use. Some would confront their superiors directly. Others imagined indirect methods: asking questions, creating diversions, going AWOL, “becoming violently ill.”

Criminologist Eva Whitehead researched actual cases of troop disobedience of illegal orders and found that when some troops disobey – even indirectly – others can more easily find the courage to do the same.

Whitehead’s research showed that those who refuse to follow illegal or immoral orders are most effective when they stand up for their actions openly.

The initial results of our survey – coupled with a recent spike in calls to the GI Rights Hotline – suggest American men and women in uniform don’t want to obey unlawful orders.

Some are standing up loudly. Many are thinking ahead to what they might do if confronted with unlawful orders. And those we surveyed are looking for guidance from the Constitution and international law to determine where they may have to draw that line.

Zahra Marashi, an undergraduate research assistant at the University of Massachusetts Amherst, contributed to the research for this article.

The Conversation

Charli Carpenter directs Human Security Lab which has received funding from University of Massachusetts College of Social and Behavioral Sciences, the National Science Foundation, and the Lex International Fund of the Swiss Philanthropy Foundation.

Geraldine Santoso and Laura K Bradshaw-Tucker do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. 4 out of 5 US troops surveyed understand the duty to disobey illegal orders – https://theconversation.com/4-out-of-5-us-troops-surveyed-understand-the-duty-to-disobey-illegal-orders-261929

Where America’s CO2 emissions come from – what you need to know, in charts

Source: The Conversation – USA (2) – By Kenneth J. Davis, Professor of Atmospheric and Climate Science, Penn State

Vehicles, energy production and industry are the largest emissions sources in the U.S. David McNew/Getty Images

Earth’s atmosphere contains carbon dioxide, which is good for life on Earth – in moderation. Plants use CO2 as the source of the carbon they build into leaves and wood via photosynthesis. In combination with water vapor, CO2 insulates the Earth, keeping it from turning into a frozen world. Life as we know it on Earth would not exist without CO2 in the atmosphere.

Since the industrial revolution began, however, humans have been adding more and more carbon dioxide to the Earth’s atmosphere, and it has become a problem.

The atmospheric concentration of CO2 has risen by more than 50% since industries began burning coal and other fossil fuels in the late 1700s, reaching concentrations that haven’t been found in the Earth’s atmosphere in at least a million years. And the concentration continues to rise.

A line chart shows atmospheric carbon dioxide concentrations mostly stable for hundreds of years and then rising with the start of the industrial revolution, and accelerating their rise starting in the mid-1900s.

Chart from Scripps Institution of Oceanography at UC San Diego, CC BY

Excess CO2 drives global warming

Who cares? Everyone should.

More CO2 in the air means temperatures at the Earth’s surface rise. As temperature rises, the water cycle accelerates, leading to more floods and droughts. Glaciers melt, and warmer ocean water expands, raising sea levels.

We are living with an increasing frequency or intensity of wildfires, heat waves, flooding and hurricanes, all influenced by increasing CO2 concentrations in the atmosphere.

The ocean also absorbs some of that CO2, making the water increasingly acidic, which can harm species crucial to the marine food chain.

Where is this additional CO2 coming from?

The biggest source of additional CO2 is the combustion of fossil fuels – oil, natural gas and coal – to power vehicles, electricity generation and industries. Each of these fuels consists of hydrocarbons built by plants that grew on the Earth over the past few hundred million years.

These plants took CO2 out of the planet’s atmosphere, died, and their biomass was buried in water and sediments.

Today, humans are reversing hundreds of millions of years of carbon accumulation by digging these fuels out of the Earth and burning them to provide energy.

Let’s dig a little deeper.

Where do CO2 emissions come from in the US?

The Environmental Protection Agency has tracked U.S. greenhouse gas emissions for years.

The U.S. emitted 5,053 million metric tons of CO2 into the atmosphere in 2022, the last year for which a complete emissions inventory is available. We also emit other greenhouse gases, including methane, from natural gas production and animal agriculture, and nitrous oxide, created when microbes digest nitrogen fertilizer. But carbon dioxide is about 80% of all U.S. greenhouse gas emissions.

Of those 5,053 million metric tons of CO2 emitted by the U.S. in 2022, 93% came from the combustion of fossil fuels.

More specifically: about 35% of the CO2 emissions were from transportation, 30% from the generation of electric power, and 16%, 7% and 5% from on-site consumption of fossil fuels by industrial, residential and commercial buildings, respectively. Electric power generation served industrial, residential and commercial buildings roughly equally.
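
Those percentages, together with the 2022 total, are enough to back out approximate tonnage per sector. The short Python sketch below does that arithmetic; the figures are the EPA numbers quoted above, rounded to whole percentages.

```python
# Approximate 2022 U.S. CO2 emissions per sector, from the EPA total and
# the percentage shares quoted above (values in million metric tons).
total_mmt = 5053  # total U.S. CO2 emissions, 2022

shares = {
    "transportation": 0.35,
    "electric power": 0.30,
    "industrial": 0.16,
    "residential": 0.07,
    "commercial": 0.05,
}

for sector, share in shares.items():
    print(f"{sector:>15}: {share * total_mmt:,.0f} million metric tons")

# The five shares sum to 0.93, consistent with the 93% of emissions
# attributed to fossil fuel combustion.
print(f"combustion share: {sum(shares.values()):.0%}")
```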

What fossil fuels are being burned?

Transportation is dominated by petroleum products, or oil – think gasoline and diesel fuel.

Nationwide, power plants consume roughly equal fractions of coal and natural gas. Natural gas use has been increasing and coal decreasing in this sector, with this trend driven by the rapid expansion of the shale gas industry in the U.S.

U.S. forests are removing CO2 from the atmosphere, but not rapidly enough to offset human emissions. U.S. forests removed and stored about 920 million metric tons of CO2 in 2022.

How US CO2 emissions have changed

Emissions from the U.S. peaked around 2005 at 6,217 million metric tons of CO2. Since then, emissions have been decreasing slowly, largely driven by the replacement of coal by natural gas in electricity production.

Some additional notable trends will impact the future:

First, the U.S. economy has become more energy efficient over time, increasing productivity while decreasing emissions.

Second, solar and wind energy generation, while still a modest fraction of total energy production, has grown steadily in recent years and emits essentially no CO2 into the atmosphere. If the nation increasingly relies on renewable energy sources and reduces burning of fossil fuels, it will dramatically reduce its CO2 emissions.

Solar and wind power have become cheaper sources of new electricity generation than natural gas and coal, but the Trump administration is cutting federal support for renewable energy and doubling down on subsidies for fossil fuels. The growth of data centers is also expected to increase demand for electricity. How the U.S. meets that demand will shape national CO2 emissions in future years.

How US emissions compare globally

The U.S. ranked second in CO2 emissions worldwide in 2022, behind China, which emitted about 12,000 million metric tons of CO2. China’s annual CO2 emissions surpassed U.S. emissions in 2005 or 2006.

Added up over time, however, the U.S. has emitted more CO2 into the atmosphere than any other nation, and we still emit more CO2 per person than most other industrialized nations. Chinese and European emissions are both roughly half of U.S. emissions on a per capita basis.

Greenhouse gases in the atmosphere mix evenly around the globe, so emissions from industrialized nations affect the climate in developing countries that have benefited very little from the energy created by burning fossil fuels.

The takeaway

There have been some promising downward trends in U.S. CO2 emissions and upward trends in renewable energy sources, but political winds and increasing energy demands threaten progress in reducing emissions.

Reducing emissions in all sectors is needed to slow and eventually stop the rise of atmospheric CO2 concentrations. The world has the technological means to make large reductions in emissions. CO2 emitted into the atmosphere today lingers in the atmosphere for hundreds to thousands of years. The decisions we make today will influence the Earth’s climate for a very long time.

The Conversation

Kenneth J. Davis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Where America’s CO2 emissions come from – what you need to know, in charts – https://theconversation.com/where-americas-co-sub-2-sub-emissions-come-from-what-you-need-to-know-in-charts-258904

Mindfulness is gaining traction in American schools – but it isn’t clear what students are learning

Source: The Conversation – USA (2) – By Deborah L. Schussler, Professor of Education Policy and Leadership, University at Albany, State University of New York

Sixth grade students start their science class with five minutes of meditation at George Washington Middle School in Alexandria, Va., in February 2020. Jahi Chikwendiu/The Washington Post via Getty Images

Writing, reading, math and mindfulness? That last subject is increasingly joining the three classic courses, as more young students in the United States are practicing mindfulness, meaning paying attention to the present moment without judgment.

In the past 20 years in the U.S., mindfulness transitioned from being a new-age curiosity to becoming a more mainstream part of American culture, as people learned more about how mindfulness can reduce their stress and improve their well-being.

Researchers estimate that over 1 million children in the U.S. have been exposed to mindfulness in their schools, mostly at the elementary level, often taught by classroom teachers or school counselors.

I have been researching mindfulness in K-12 American schools for 15 years. I have investigated the impact of mindfulness on students, explored the experiences of teachers who teach mindfulness in K-12 schools, and examined the challenges and benefits of implementing mindfulness in these settings.

I have noticed that mindfulness programs vary in which particular mindfulness skills are taught and in what the lesson objectives are. This makes it difficult to compare across studies and draw conclusions about how mindfulness helps students in schools.

A young girl with dark brown skin closes her eyes and has a peaceful expression on her face.
A student practices mindfulness during a session at Roberta T. Smith Elementary School in May 2024 in Rex, Ga.
AP Photo/Sharon Johnson

What is mindfulness?

Different definitions of mindfulness exist.

Some people might think mindfulness means simply practicing breathing, for example.

A common definition from Jon Kabat-Zinn, a mindfulness expert who helped popularize mindfulness in Western countries, says mindfulness is about “paying attention in a particular way, on purpose, nonjudgmentally, in the present moment.”

Essentially, mindfulness is a way of being. It is a person’s approach to each moment and their orientation to both inner and outer experience, the pleasant and the unpleasant. Fundamental to mindfulness is how a person chooses to direct their attention.

In practice, mindfulness can involve different practices, including guided meditations, mindful movement and breathing. Mindfulness programs can also help people develop a variety of skills, including openness to experiences and more focused attention.

Practicing mindfulness at schools

A few years ago, I decided to investigate school mindfulness programs themselves and consider what it means for children to learn mindfulness at schools. What do the programs actually teach?

I believe that understanding this information can help educators, parents and policymakers make more informed decisions about whether mindfulness belongs in their schools.

In 2023, my colleagues and I conducted a deep dive into 12 readily available mindfulness curricula for K-12 students to investigate what the programs contained. Across programs, we found no consistency of content, teaching practices or time commitment.

For example, some mindfulness programs in K-12 schools incorporate a lot of movement, with some specifically teaching yoga poses. Others emphasize interpersonal skills such as practicing acts of kindness, while others focus mostly on self-oriented skills such as focused attention, which may occur by focusing on one’s breath.

We also found that some programs have students do a lot of mindfulness practices, such as mindful movement or mindful listening, while others teach about mindfulness, such as learning how the brain functions.

Finally, the number of lessons in a curriculum ranged from five to 44, meaning some programs occurred over just a few weeks and some required an entire school year.

Despite indications that mindfulness has some positive impacts for school-age children, the evidence is not consistent, as other research shows.

One of the largest recent studies of mindfulness in schools, published in 2022, found no change in students who received mindfulness instruction.

Some experts believe, though, that the lack of results in this 2022 study on mindfulness was partially due to a curriculum that might have been too advanced for middle school-age children.

A group of young kids sit on yoga mats, close their eyes and hold their hands in a prayer position.
Mindfulness looks and is taught differently across various K-12 schools in the U.S.
Ariel Skelley/Digital Vision

The connection between mindfulness and education

Since attention is critical for students’ success in school, it is not surprising that mindfulness appeals to many educators.

Research on student engagement and executive functioning supports the claim that any student’s ability to filter out distractions and prioritize the objects of their thoughts improves their academic success.

Mindfulness programs have been shown to improve students’ mental health and decrease students’ and teachers’ stress levels.

Mindfulness has also been shown to help children emotionally regulate.

Even before social media, teachers perennially struggled to get students to pay attention. Reviews of multiple studies have shown some positive effects of mindfulness on outcomes, including improvements in academic achievement and school adjustment.

A 2023 report from the Centers for Disease Control and Prevention cites mindfulness as one of six evidence-based strategies K-12 schools should use to promote students’ mental health and well-being.

A relatively new trend

Knowing what is in the mindfulness curriculum, how it is taught and how long the student spends on mindfulness matters. Students may be learning very different skills with significantly different amounts of time to reinforce those skills.

Researchers suggest, for example, that mindfulness programs most likely to improve academic or mental health outcomes of children offer activities geared toward their developmental level, such as shorter mindfulness practices and more repetition.

In other words, mindfulness programs for children cannot just be watered down versions of adult programs.

Mindfulness research in school settings is still relatively new, though there is encouraging data that mindfulness can sharpen skills necessary for students’ academic success and promote their mental health.

In addition to the need for more research on the outcomes of mindfulness, it is important for educators, parents, policymakers and researchers to look closely at the curriculum to understand what the students are actually doing.

The Conversation

Deborah L. Schussler receives funding from Spencer Foundation.

ref. Mindfulness is gaining traction in American schools – but it isn’t clear what students are learning – https://theconversation.com/mindfulness-is-gaining-traction-in-american-schools-but-it-isnt-clear-what-students-are-learning-261247

Labor Day and May Day emerged from the movement for a shorter workday in industrial America

Source: The Conversation – USA (2) – By Jeffrey Sklansky, Professor of History, University of Illinois Chicago

It took more than a century for Chicago’s Haymarket Square to get this memorial to the historic labor strife that occurred there. Jeffrey Sklansky

Most of the world observes International Workers’ Day on May 1 or the first Monday in May each year, but not the United States and Canada. Instead, Americans and Canadians have celebrated Labor Day as a national holiday on the first Monday in September since 1894, 12 years after the first observance of Labor Day in New York City.

The celebrations aren’t the same.

In much of Europe, Asia, Africa and Latin America, the event commonly called May Day honors workers’ political and economic power, often with demonstrations by socialist or workers’ parties and tributes to national labor rights. America’s Labor Day features labor union parades in many places, but for most Americans, it’s less about organized labor and more about barbecues, beach days and back-to-school sales.

Both holidays, however, arose during the same period in the U.S., nearly 150 years ago, amid an explosive labor uprising in America’s industrial heartland. Their founding united native-born and immigrant workers in an extraordinary alliance to demand an eight-hour workday at a time when American workers toiled an average of 10 or more hours daily, six days a week.

The call for shorter hours was rooted in a big idea: that workers’ days belonged to them, even if employers owned their workplaces and paid for their work. That idea inspired the loftiest goals of a growing labor movement that spanned from Chicago and New York to Stockholm and Saint Petersburg. And the labor activism of the late 1800s still casts a distant light on Labor Day today, carrying a vital message about the struggle for control of workers’ daily lives.

I’m a historian at the University of Illinois Chicago, where I study the history of labor. The fight for shorter hours is no longer a top issue for organized labor in the U.S. But it was the crusade for the eight-hour day that brought together the diverse coalition of labor groups that created Labor Day and May Day in the 1880s.

On Labor Day, U.S. beaches are crowded with people who spend the late-summer holiday relaxing and having fun. One such destination is Chincoteague Island, Va., seen here on Labor Day weekend in 2018.
Bastiaan Slabbers/NurPhoto via Getty Images

Labor Day’s radical roots

Led by socialist-leaning trade unions, Labor Day’s founders included skilled, native-born craft workers defending control over their trades, immigrant laborers seeking relief from daylong drudgery, and revolutionary anarchists who saw the quest for control of the workers’ day as a step toward seizing factories and smashing the state.

They originally chose Sept. 5, 1882, for the first Labor Day to coincide with a general assembly in New York City of what was then the largest and broadest association of American workers, the Knights of Labor. Two years later, labor leaders moved the annual event to the first Monday in September, giving the majority of workers a two-day weekend for the first time.

As Labor Day parades and picnics spread, many American cities and states soon made it an official holiday. But since few employers gave workers the day off in its early years, Labor Day likewise became “a virtual one-day general strike in many cities,” according to historians Michael Kazin and Steven Ross.

American roots of May Day

My students come from working-class, mostly immigrant families, and Chicago’s history of labor conflict is all around our downtown campus in the heart of what were once meatpacking plants, stockyards and crowded immigrant neighborhoods.

My office is about 12 blocks from the spot – surrounded today by upscale office buildings – where the eight-hour movement reached a bloody climax in the battle of Haymarket Square. May Day commemorates that battle.

On May 1, 1886, unions of skilled workers organized by their crafts or trades led a nationwide general strike for the eight-hour day. They were joined by radical socialists, militant anarchists and many members of the Knights of Labor. More than 100,000 workers took part across the country.

The most dramatic demonstrations happened in Chicago, which had become the second-largest city in the U.S. after years of swift growth. Nearly 40,000 striking Chicago workers shut down much of that burgeoning industrial, agricultural and commercial hub. Three days later, a bomb thrown at a rally in Haymarket Square killed seven police officers, sparking a sweeping nationwide crackdown on labor activism.

In 1889, socialist trade unions and workers’ parties, meeting in Paris for the first congress of a new Socialist International, proclaimed May 1 an international workers’ holiday. They were partly following the lead of the new American Federation of Labor, which had called for renewed strikes on the anniversary of the 1886 action.

And they were honoring the memory of the eight labor activists who had been tried and convicted for the Haymarket bombing solely on the basis of their speeches and radical politics, in what was widely viewed as a rigged trial. Four “Haymarket martyrs” had been hanged and a fifth died by suicide before he could be executed.

Protesters march through the streets of Marseille, France, with flags and placards on May 1, 2025, to mark International Workers Day.
Denis Thaust/SOPA Images/LightRocket via Getty Images

An earlier labor win

Though May 1 had long been associated with European celebrations of springtime, its modern meaning has deeper American roots that precede the Haymarket tragedy. It was on that date in 1867 that workers in Chicago celebrated an earlier victory.

At the end of the Civil War, campaigns for an eight-hour workday arose in cities across the country, championing a common interpretation of the abolition of slavery: for many workers, emancipation meant that employers purchased only their labor, not their lives.

Employers might monopolize workers’ means of making a living, but not their hours and days.

The movement led to laws declaring an eight-hour day in six states, including Illinois, where the new rule went into effect on May 1, 1867. But employers widely disobeyed or circumvented the laws, and states failed to enforce them while they lasted, so workers continued to struggle for a shorter workday.

Seizing the day

In the 19th century, American workers’ labor came to be measured by how long they worked and how much they were paid. While they were divided by their widely different wages, they were united by the generally uniform hours at each workplace.

The demand for a shorter workday without a pay cut was designed to appeal to all wage earners no matter who they were, where they were from, or what they did for a living.

Labor leaders said shorter hours meant employers would have to hire more people, creating jobs and boosting hourly pay. Spending less time on the job would enable workers to become bigger consumers, spurring economic growth.

Having “eight hours for work, eight hours for rest, and eight hours for what we will,” a popular labor movement refrain, would also leave more time for education, organization and political action.

Most broadly, the fight for shorter hours encapsulated workers’ struggle to control their own time, both on and off the job. That far-reaching struggle included efforts to limit the number of years people spent earning a living by ending child labor and creating pensions for retired workers – a topic I’m currently researching.

Benjamin Franklin famously said, “Time is money,” meaning that time off costs money that workers could be making on the job. But the message of the movement for a shorter workday was that the worth of workers’ lives could not be calculated in dollars and cents.

Diverging holidays

In the Haymarket battle’s aftermath, the alliance of radicals and reformers, factory operatives and skilled artisans, U.S.-born workers and immigrant laborers began to come apart. And as union leaders in the American Federation of Labor parted ways with socialists and anarchists, each side of the divided workers’ movement claimed one of the two labor days as its own, making the holidays appear increasingly opposed and losing sight of their shared foundation in the campaign for a shorter workday.

Conservative politicians and employers hostile to unions began to equate labor organizing with bomb throwing. In response, trade unions seeking acceptance as part of American industry and democracy displayed their allegiance on Labor Day by waving the American flag, singing patriotic songs and portraying themselves as proud, native-born Americans as opposed to foreign workers with subversive ideas.

Many political radicals and the immigrant workers among whom they found much of their following, meanwhile, came to identify more with the international workers’ movement associated with May Day than with American business and politics. They disavowed May Day’s origins among American trade unions, even as many trade unions distanced themselves from the radical roots of Labor Day. By the turn of the century, May Day moved further from the center of American culture, while Labor Day became more mainstream and less militant.

A member of Sheet Metal Workers Local 105 walks in the small annual Labor Day parade hosted by the Los Angeles/Long Beach Harbor Labor Coalition on Sept. 5, 2022, in Wilmington, Calif.
Mario Tama/Getty Images

20th-century gains and losses

In the 20th century, labor unions won shorter hours for many of their members across the country. But they detached that demand from the broader agenda of workers’ autonomy and international solidarity.

They gained a landmark achievement with the federal enactment of the eight-hour day and 40-hour workweek for many industries during the 1930s. At that point, economist John Maynard Keynes projected that the rising productivity of labor would enable 21st-century wage earners to work just three hours a day.

Workers’ productivity did keep climbing as Keynes predicted, and their wages rose apace – until the 1970s. But their work hours did not decline, leaving the three-hour day a forgotten vision of what organized labor might achieve.

The Conversation

Jeffrey Sklansky is a member of UIC United Faculty, the labor union representing the bargaining units of Tenure/Tenure-Track and full-time Non-Tenure Track faculty at the University of Illinois Chicago.

ref. Labor Day and May Day emerged from the movement for a shorter workday in industrial America – https://theconversation.com/labor-day-and-may-day-emerged-from-the-movement-for-a-shorter-workday-in-industrial-america-262379

AI is making reading books feel obsolete – and students have a lot to lose

Source: The Conversation – USA (2) – By Naomi S. Baron, Professor Emerita of Linguistics, American University

Workarounds to reading a book cover-to-cover have existed for decades, but generative AI takes it to new heights. dem10/E+ via Getty Images

A perfect storm is brewing for reading.

AI arrived as both kids and adults were already spending less time reading books than they did in the not-so-distant past.

As a linguist, I study how technology influences the ways people read, write and think.

This includes the impact of artificial intelligence, which is dramatically changing how people engage with books or other kinds of writing, whether it’s assigned, used for research or read for pleasure. I worry that AI is accelerating an ongoing shift in the value people place on reading as a human endeavor.

Everything but the book

AI’s writing skills have gotten plenty of attention. But researchers and teachers are only now starting to talk about AI’s ability to “read” massive datasets before churning out summaries, analyses or comparisons of books, essays and articles.

Need to read a novel for class? These days, you might get by with skimming through an AI-generated summary of the plot and key themes. This kind of possibility, which undermines people’s motivation to read on their own, prompted me to write a book about the pros and cons of letting AI do the reading for you.

Palming off the work of summarizing or analyzing texts is hardly new. CliffsNotes dates back to the late 1950s. Centuries earlier, the Royal Society of London began producing summaries of the scientific papers that appeared in its voluminous “Philosophical Transactions.” By the mid-20th century, abstracts had become ubiquitous in scholarly articles. Potential readers could now peruse the abstract before deciding whether to tackle the piece in its entirety.

The internet opened up an array of additional reading shortcuts. For instance, Blinkist is an app-based, subscription service that condenses mostly nonfiction books into roughly 15-minute summaries – called “Blinks” – that are available in both audio and text.

But generative AI elevates such workarounds to new heights. AI-driven apps like BooksAI provide the kinds of summaries and analyses that used to be crafted by humans. Meanwhile, BookAI.chat invites you to “chat” with books. In neither case do you need to read the books yourself.

If you’re a student asked to compare Mark Twain’s “The Adventures of Huckleberry Finn” with J. D. Salinger’s “The Catcher in the Rye” as coming-of-age novels, CliffsNotes only gets you so far. Sure, you can read summaries of each book, but you still must do the comparison yourself. With general large language models or specialized tools such as Google NotebookLM, AI handles both the “reading” and the comparing, even generating smart questions to pose in class.

The downside is that you lose out on a critical benefit of reading a coming-of-age novel: the personal growth that comes from vicariously experiencing the protagonist’s struggles.

In the world of academic research, AI offerings like SciSpace, Elicit and Consensus combine the power of search engines and large language models. They locate relevant articles and then summarize and synthesize them, slashing the hours needed to conduct literature reviews. On its website, Elsevier’s ScienceDirect AI boasts: “Goodbye wasted reading time. Hello relevance.”

Maybe. Excluded from the process is judging for yourself what counts as relevant and making your own connections between ideas.

Reader unfriendly?

Even before generative AI went mainstream, fewer people were reading books, whether for pleasure or for class.

In the U.S., the National Assessment of Educational Progress reported that the number of fourth graders who read for fun almost every day slipped from 53% in 1984 to 39% in 2022. For eighth graders? From 35% in 1984 to 14% in 2023. The U.K.’s 2024 National Literacy Trust survey revealed that only one in three 8- to 18-year-olds said they enjoyed reading in their spare time, a drop of almost 9 percentage points from just the previous year.

Similar trends exist among older students. In a 2018 survey of 600,000 15-year-olds across 79 countries, 49% reported reading only when they had to. That’s up from 36% about a decade earlier.

The picture for college students is no brighter. A spate of recent articles has chronicled how little reading is happening in American higher education. My work with literacy researcher Anne Mangen found that faculty are reducing the amount of reading they assign, often in response to students refusing to do it.

Emblematic of the problem is a troubling observation from cultural commentator David Brooks:

“I once asked a group of students on their final day at their prestigious university what book had changed their life over the previous four years. A long, awkward silence followed. Finally a student said: ‘You have to understand, we don’t read like that. We only sample enough of each book to get through the class.’”

Now for adults: according to YouGov, just 54% of Americans read at least one book in 2023. The situation is even bleaker in South Korea, where only 43% of adults said they had read at least one book that year, down from almost 87% in 1994. In the U.K., The Reading Agency observed declines in adult reading and hinted at one reason why. In 2024, 35% of adults identified as lapsed readers – they once read regularly, but no longer do. Of those lapsed readers, 26% indicated they had stopped reading because of time spent on social media.

The phrase “lapsed reader” might now apply to anyone who deprioritizes reading, whether it’s due to lack of interest, devoting more time to social media or letting AI do the reading for you.

All that’s lost, missed and forgotten

Why read in the first place?

The justifications are endless, as are the streams of books and websites making the case. There’s reading for pleasure, stress reduction, learning and personal development.

You can find correlations between reading and brain growth in children, happiness, longevity and slower cognitive decline.

This last issue is particularly relevant as people increasingly let AI do cognitive work on their behalf, a process known as cognitive offloading. Research has emerged showing the extent to which people are engaging in cognitive offloading when they use AI. The evidence reveals that the more users rely on AI to perform work for them, the less they see themselves as drawing upon their own thinking capacities. A study employing EEG measurements found different brain connectivity patterns when participants enlisted AI to help them write an essay than when writing it on their own.

It’s too soon to know what effects AI might have on our long-term ability to think for ourselves. What’s more, the research so far has largely focused on writing tasks or general use of AI tools, not on reading. But if we lose practice in reading and analyzing and formulating our own interpretations, those skills are at risk of weakening.

Cognitive skills aren’t the only thing at stake when we rely too heavily on AI to do our reading work for us. We also miss out on so much of what makes reading enjoyable – encountering a moving piece of dialogue, relishing a turn of phrase, connecting with a character.

AI’s lure of efficiency is tantalizing. But it risks undermining the benefits of literacy.

The Conversation

Naomi S. Baron does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI is making reading books feel obsolete – and students have a lot to lose – https://theconversation.com/ai-is-making-reading-books-feel-obsolete-and-students-have-a-lot-to-lose-262680

Grief feels unbearable, disorienting and chaotic – a grief researcher and widow shares evidence-based ways to face the early days of loss

Source: The Conversation – USA (3) – By Liza Barros-Lane, Assistant Professor of Social Work, University of Houston-Downtown

Grief brings a person’s world to a halt. Valentina Shilkina/iStock via Getty Images Plus

The July 4 floods in Kerr County, Texas, sent shockwaves across the country. Now that most of the victims’ burials are over, the weight of grief is just beginning for loved ones left behind. It’s the daily devastation of an upended world where absence is glaringly present, nothing feels familiar, and life is paused in dizzying stillness.

I know this pain intimately. I’m a grief researcher, social work professor and widow. I lost my husband, Brent, in a drowning accident when I was 36. He went missing two days before his body was found.

Brent was a psychologist who specialized in grief, and we were trained to support others through suffering. Yet nothing could prepare me for my own loss.

Research and personal experience have shown me that profound loss disrupts the nervous system, sparking intense emotional swings and unleashing a cascade of physical symptoms. This kind of pain can make ordinary moments feel unbearable, so learning how to manage it is essential to surviving early grief. Thankfully, there are evidence-based tools to help people get through the rawest phases of loss.

Kerrville, Texas, residents attend a prayer service honoring the victims of the catastrophic flood on July 4.
Anadolu/Getty Images

Why early grief feels so disorienting

Losing someone central to your daily life unravels the routines that once anchored you.

Traumatic losses, the kind that arrive suddenly, violently or in ways that feel horrifying, carry a different kind of weight: the anguish of how the person died, the unanswered questions and the shock of having no time to prepare or say goodbye.

Everyday acts, like eating or going to bed, can highlight the absence and trigger both grief and dread. These moments reveal that grief is a whole-being experience. It affects not just our emotions, but also our bodies, thoughts, routines and sense of safety in the world.

Emotionally, grief can be chaotic. Emotions swing unpredictably, from sobs one moment to numbness the next. Mental health professionals call this emotional dysregulation, which includes feeling out of touch with emotions, reacting too little or too much, getting stuck in one emotional state or struggling to shift perspective.

Cognitively, focus feels impossible and memory lapses increase. Even knowing the loved one is gone, the brain scans for the person, expecting their voice or text, a natural attachment response that fuels disbelief, yearning and panic.

Physically, grief floods the body with stress hormones, leading to insomnia, fatigue, aches, heaviness and chest tightness. Studies suggest a brief increase in mortality risk after losing someone close, often from added strain on the heart, immune system and mental health.

Spiritually and existentially, loss can shake your beliefs to the core and make the world feel confusing, hollow and stripped of meaning.

Grief research confirms that these intense symptoms are typical for some time and are exacerbated after traumatic loss.

Finding a new baseline

Eventually, most people begin to stabilize. But after traumatic loss, it’s not uncommon for that sense of chaos to linger for months or even years. In the beginning, treat yourself like someone recovering from major surgery: Rest often, move slowly and protect your energy.

Initially, you may only be able to manage small, familiar acts, such as brushing your teeth or making your bed, that remind you: I’m still here. That’s OK. Right now, your only job is survival, one manageable step at a time.

As you face everyday responsibilities again, allow space for rest. After Brent died, I brought a mat to work to lie down whenever fatigue or emotional weight became unbearable. I didn’t recognize this as pain management then, but that helped me survive the hardest days.

According to grief theorists, one of the most important tasks in early grief is learning to manage and bear emotional pain. Mourners must allow themselves to feel the weight of the loss.

But pain management isn’t just about sitting with the hurt. It also means knowing when to step away without slipping into avoidance, which can lead to panic, numbness and exhaustion. As Brent used to say, “The goal is to pick it up and put it down.” Taking intentional breaks through distraction or rest can make it possible to return to the grief without being consumed by it.

It also involves soothing yourself when the grief waves hit.

Memorial services and prayer vigils are only the beginning of a long journey of grief and healing.
NurPhoto/Getty Images

Five small but powerful ways to face painful moments

Here are five simple evidence-based tools designed to make painful moments more bearable for you or a grieving loved one. They won’t erase the pain, but they can quickly offer relief for the raw, jagged edges of early grief.

1. Gentle touch to ease loneliness

Place one hand on your chest, stomach or gently on your cheek – wherever you instinctively reach when you’re in pain. Inhale slowly. As you exhale, say softly aloud or in your mind: “This hurts.” Then, “I’m here” or “I’m not alone in this.” Stay for one to two minutes, or as long as feels comfortable.

Why it helps: Grief often leaves you touch-starved, aching for physical connection. Soothing self-touch, a self-compassion practice, activates the vagus nerve, which helps regulate heart rate, breathing and the body’s calming response after stress. This gesture offers warmth and grounding, reducing the isolation of heartache.

2. Riding the wave

When grief surges, set a timer for two to five minutes. Stay with the emotion. Breathe. Observe it without judgment. If it’s too much, distract yourself briefly, such as by counting backward, then return to the feeling and notice how it may have shifted.

Why it helps: Emotions rise like waves. This skill helps you stay present during emotional surges without panicking, and it helps you learn that emotional surges peak and pass without destroying you. It draws from Dialectical Behavior Therapy, or DBT, an evidence-based treatment for people experiencing intense emotional dysregulation.

3. Soothing with soft textures

Wrap yourself in a soft blanket. Hold a stuffed animal. Or stroke your pet’s fur. Focus on the texture for two to five minutes. Breathe slowly.

Why it helps: Softness signals safety to your nervous system. It gives comfort when pain is too raw for words.

4. Cooling down overwhelm

Therapists often teach a set of DBT skills called TIPP to help people manage emotional overwhelm during crises like grief. TIPP stands for:

Temperature: Use cold, such as holding ice or applying cold water to the face, to trigger a calming response.

Intense exercise: Engage in short bursts of movement to release tension.

Paced breathing: Breathe in slow, controlled breaths to reduce arousal. Inhale slowly for two to four seconds, then exhale for four to six seconds.

Progressive muscle relaxation: Tense and release individual muscle groups to ease stress.

Why it helps: During grief, the nervous system can swing from high-arousal states, like panic and a racing heart, to low-arousal states such as numbness and sadness.

Individual responses vary, but cold exposure can help calm a racing heart in moments of overwhelm, while paced breathing or muscle relaxation soothes numbness and sadness.

5. Rating your pain

Rate your pain from 1 to 10. Then ask, “Why is it a 7, not a 10?” Or “When was it even slightly better?” Write down what helped.

Why it helps: Spotting even slight relief builds hope. It reminds you that the pain isn’t constant, and that small moments of relief are real and meaningful.

Even with these tools, there will still be moments that feel unbearable, when the future seems unreachable and dark.

In those moments, remind yourself that you don’t have to move forward now. This simple reminder helped me in the moments I felt completely panicked, when I couldn’t see how I’d survive the next hour, much less the future. Tell yourself: Just survive this moment. Then the next.

Lean on friends, counselors or hotlines like the Disaster Distress Hotline (1-800-985-5990) or the Suicide and Crisis Lifeline (988). If deep emotional pain continues to overwhelm you, seek professional help.

With support and care, you’ll begin to adapt to this changed world. Over time, the pain can soften, even if it never fully leaves, and you may find yourself slowly rebuilding a life shaped by grief, love and the courage to keep going.

The Conversation

Liza Barros-Lane does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Grief feels unbearable, disorienting and chaotic – a grief researcher and widow shares evidence-based ways to face the early days of loss – https://theconversation.com/grief-feels-unbearable-disorienting-and-chaotic-a-grief-researcher-and-widow-shares-evidence-based-ways-to-face-the-early-days-of-loss-262423