Artemis II: NASA is about to launch the first crewed mission to the Moon in 54 years, with a Canadian on board

Source: The Conversation – in French – By Gordon Osinski, Professor in Earth and Planetary Science, Western University

The crew of NASA's new Artemis II lunar rocket at Kennedy Space Center, with Jeremy Hansen of the Canadian Space Agency at far right. From left to right: Reid Wiseman, Victor Glover and Christina Koch. (NASA)

The last Apollo mission took place 54 years ago, and humans have not ventured beyond low Earth orbit since. That is about to change with the launch of the Artemis II mission, scheduled for next month at Kennedy Space Center in Florida.

The American space agency has announced that the launch of Artemis II, originally scheduled for February 8, has been postponed to March 6 because of a liquid hydrogen leak discovered during the February 3 dress rehearsal.

It will be the first crewed flight of NASA's Artemis program, and the first time human beings have ventured toward the Moon since 1972. Canadian astronaut Jeremy Hansen will be on board. He will be the first non-American to fly to the Moon, making Canada the second country to send an astronaut into deep space.




Read more:
Canadian space innovations are essential to the Artemis missions



I am a professor, explorer and astrogeologist. For 15 years, I have helped train Hansen and other astronauts in geology and planetary science. I am also a member of the Artemis III science team and the principal investigator of Canada's first-ever lunar rover exploration mission.

a rocket on a launch pad at night
The SLS rocket for NASA's Artemis II mission, with the Orion spacecraft, attached to the mobile launcher at Kennedy Space Center in Florida.
(NASA)

What is the Artemis mission?

NASA's Artemis program, begun in 2017, has an ambitious goal: to return to the Moon and establish a base there in preparation for sending humans to Mars. The first mission, Artemis I, launched in late 2022. After several delays, Artemis II is scheduled to launch in March.

Hansen and his three American crewmates will be aboard.

It is a particularly exciting mission. Artemis II will mark the first launch of humans on NASA's giant SLS (Space Launch System) rocket, as well as the first crewed flight of the Orion spacecraft.

The SLS is the most powerful rocket NASA has ever built, capable of launching more than 27 metric tons of payload (equipment, instruments, scientific hardware and cargo) to the Moon. The Orion spacecraft sits on top and will carry the crew to the Moon. The name the crew chose for their capsule, Integrity, reflects, in their words, the values of trust, respect, candour and humility.

an infographic illustrating a spacecraft
NASA infographic showing the different parts of the Orion spacecraft.
(NASA)

What will the Artemis II crew do in space?

After launch, the crew will test Integrity's life-support systems: the water dispenser, the firefighting equipment and, of course, the toilet. Did you know there were no toilets on the Apollo missions? Instead, the crews used "sanitary hoses."

If all goes well, Artemis II will fire what is known as the Interim Cryogenic Propulsion Stage, a part of the SLS rocket still attached to Integrity, to raise the spacecraft's orbit. If everything continues to work, Orion and its four astronauts will spend 24 hours in a high Earth orbit, reaching as far as 70,000 kilometres from the planet.

By comparison, the International Space Station orbits just 400 kilometres above Earth.

After a series of tests and checks, the crew will carry out one of the most critical steps of the mission: trans-lunar injection (TLI). This is the moment when the spacecraft moves from an Earth orbit, from which it could easily return home, onto a trajectory toward the Moon and deep space.

The 10-day figure-eight trajectory of the Artemis II mission
The 10-day figure-eight trajectory of the Artemis II mission.
(NASA)

Once Integrity is on its way to the Moon after TLI, there is no turning back, at least not without first going all the way to the Moon. Like the early Apollo missions, Artemis II will then enter what is known as a "free-return trajectory." Even if Integrity's engines failed completely, the Moon's gravity would naturally swing the spacecraft around it and send it back toward Earth.




Read more:
Space exploration is not a luxury. It is a necessity


After a three-day journey to the Moon, the crew will begin the most exciting phase of the mission: the lunar flyby. Integrity will swing around the far side of the Moon, passing between 6,000 and 10,000 kilometres above its surface, far further from Earth than any of the Apollo missions.




Borrowing a phrase from Star Trek, at that point the crew of Artemis II will have travelled where no human has gone before. It will, quite literally, be the farthest point from Earth ever reached by a human being.

Lunar exploration, an international effort

The presence of a Canadian astronaut in the Artemis II crew reflects the collaborative, international nature of the program.

An infographic showing all the signatories of the Artemis Accords
On January 26, 2026, Oman became the 61st country to sign the Artemis Accords.
(NASA)

While NASA created the program and remains its driving force, 61 countries have signed the Artemis Accords to date.

The accords rest on the recognition that international co-operation in space serves not only to strengthen space exploration, but also to improve peaceful relations among nations. Such co-operation is especially needed today, perhaps more than at any time since the end of the Cold War.

I sincerely hope that when Integrity returns from the far side of the Moon, people around the world will take the time, if only for a few moments, to reflect together on a better future. Bill Anders, who flew on the first crewed Apollo mission to the Moon, once said:

We came all this way to explore the Moon, and the most important thing is that we discovered the Earth.

The Conversation Canada

Gordon Osinski is the founder of Interplanetary Exploration Odyssey Inc. He receives funding from the Natural Sciences and Engineering Research Council of Canada and the Canadian Space Agency.

ref. Artemis II: NASA is about to launch the first crewed mission to the Moon in 54 years, with a Canadian on board – https://theconversation.com/artemis-ii-la-nasa-lance-bientot-la-premiere-mission-habitee-vers-la-lune-depuis-54-ans-avec-un-canadien-a-bord-275299

Religious slaughter sparks controversy. But animal suffering is also present in conventional slaughterhouses

Source: The Conversation – in French – By Sarah Berger Richardson, Associate professor, L’Université d’Ottawa/University of Ottawa

Around the world, the practice of ritual slaughter is under intense scrutiny. In recent years, several European countries – including Belgium, the Netherlands, Sweden, Finland, Slovenia and Denmark – have passed laws requiring that animals be stunned before being killed, including in kosher and halal slaughter. More recently, France has seen a heated controversy over the expansion of halal options in fast-food chains.

In Canada, a $25-million federal program to support the resilience and productivity of the kosher and halal sectors prompted a petition to the House of Commons calling for its cancellation. Meanwhile, Quebec's secularism bill (Bill 9) would introduce new rules barring public institutions from offering menus that comply exclusively with religious dietary requirements.

These controversies show that ritual slaughter sits at the epicentre of social and cultural tensions. Yet much of the debate ignores the empirical evidence about day-to-day practices in Canadian slaughterhouses and about how ritual slaughter compares with conventional meat production.




Read more:
Animal welfare: what the opacity of Canadian slaughterhouses keeps us from seeing


How Canada regulates religious slaughter

The Safe Food for Canadians Regulations set out the requirements for the humane handling of food animals in federally inspected slaughterhouses. They require that all animals be rendered unconscious before being bled.

There is, however, an exception for ritual slaughter. In kosher and halal practice, animals must be conscious at the moment of the incision: a single, continuous, fluid cut across the neck intended to produce rapid and complete bleeding and render the animal unconscious within seconds. The regulations therefore permit slaughter without prior stunning when it is carried out in accordance with the requirements of Judaism or Islam.

Under ideal conditions, stunning reduces animal suffering. But those conditions are often absent in conventional slaughterhouses. As a researcher specializing in the regulation of animal agriculture, I have examined hundreds of animal-welfare violations through access-to-information requests to the Canadian Food Inspection Agency (CFIA). These reports document breaches of federal regulations, including inadequate stunning, signs of physiological distress and failures in bleeding procedures.

Inadequate stunning: an industry-wide problem

Among the 796 incidents analyzed between 2017 and 2022, the most frequent types of non-compliance, in descending order, involved:

  • The condition and treatment of animals on arrival and during unloading

  • Inadequate stunning

  • Rough or violent handling

  • Incorrect incisions

  • Persistent signs of sensibility after stunning

These reports show that the main animal-welfare problem in slaughterhouses is not so much how animals are killed, but rather the conditions in which they are transported. Many arrive in distress, injured, hypothermic or already dead.

Moreover, of the 485 incidents involving either inadequate stunning or incorrect bleeding, only 25 explicitly referred to ritual slaughter. These most often involved problems such as knife sharpening, poor incision technique or animals showing signs of sensibility for longer than is considered acceptable. In one case, an employee made several cuts instead of the single continuous stroke required. These situations are troubling – and they deserve serious attention – but they represent only a fraction of the non-compliance cases observed.

It is important to place them in the broader context of conventional stunning practices. Take CO2 stunning in the pork sector, the dominant method in industrial slaughterhouses. Once inside the gas chambers, pigs frequently show prolonged signs of distress – screaming, convulsing, attempting to escape – before losing consciousness. There are similar issues with electrical water-bath stunning of poultry, where poor contact with the electrodes can result in incomplete stunning and expose birds to a painful shock even before they enter the electrified bath.

Mechanical stunning, the method used mainly for cattle, relies on a captive-bolt gun intended to render the animal unconscious. Yet many reports describe multiple attempts, attributable to equipment failures, poor shot placement or inadequate staff training.

These practices affect millions of animals every year in Canada. Yet they rarely attract the political attention directed at the relatively small kosher and halal sector. Ritual slaughter accounts for about 5.15 per cent of reported stunning and bleeding incidents, and about 6 per cent of Canada's total meat production.
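That incident share follows directly from the counts reported earlier (25 ritual-slaughter cases out of 485 stunning-and-bleeding incidents), as a quick check shows:

```python
# Share of stunning/bleeding incidents explicitly tied to ritual slaughter,
# using the counts reported above (25 of 485).
ritual_incidents = 25
stunning_bleeding_incidents = 485
share = ritual_incidents / stunning_bleeding_incidents * 100
print(f"{share:.2f}%")  # 5.15%
```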

Taken together, these figures do not reveal a disproportionate animal-welfare risk associated with ritual slaughter. Rather, they point to a small sector facing difficulties comparable, in both nature and frequency, to those that characterize the meat industry as a whole.

Why has ritual slaughter become a political target?

If the data do not show higher rates of animal-welfare violations, what explains the attention paid to this method of slaughter?

Part of the answer lies in the symbolic weight attached to food practices that mark a minority religious identity. Historically, ritual slaughter has often served as a vehicle for xenophobia – from medieval accusations of "ritual murder" to the Nazi regime's 1933 ban on kosher meat. Today, far-right movements in Europe frequently present halal slaughter as evidence of cultural incompatibility. In Quebec, controversies over halal meat feed broader anxieties about immigration, pluralism and the visibility of religious expression.

This complex political context fosters uncomfortable alliances, in which certain forms of animal-rights activism align themselves – intentionally or not – with identity politics.

Toward a more honest debate

If the goal is genuinely to improve animal welfare at slaughter, the focus on ritual slaughter risks diverting attention from far larger, structural problems within conventional meat processing.




For example, a growing number of experts are concerned about the impact of CO2 stunning on pig welfare and recommend adopting alternative methods. These concerns have been raised by the European Food Safety Authority as well as by the Animal Welfare Committee, which advises the Department for Environment, Food and Rural Affairs and the governments of Scotland and Wales.

In Canada, regulations governing animal transport are less demanding than those in force in the European Union. Moreover, despite a promise made during the 2021 federal election campaign to ban the export of live horses for slaughter, the practice continues.

Beyond stunning and transport, much animal suffering occurs further up the production chain, on farms. There, livestock are not subject to the same welfare standards, nor to continuous oversight comparable to that in slaughterhouses. This leaves major sources of suffering outside the scope of regulatory enforcement.

As society becomes more aware of animals' capacity to feel pain and fear, it is clear that many levers must be pulled to reduce suffering in animal agriculture. That means asking hard questions about the ethics of the meat sector as a whole. Ritual slaughter raises particular issues, but they are far from the only ones.

The Conversation Canada

Sarah Berger Richardson does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than her research institution.

ref. Religious slaughter sparks controversy. But animal suffering is also present in conventional slaughterhouses – https://theconversation.com/labattage-religieux-suscite-la-controverse-mais-la-souffrance-animale-est-aussi-presente-dans-les-abattoirs-conventionnels-270610

The workplace wasn’t designed for humans – and it shows

Source: The Conversation – UK – By Christine Ipsen, Professor in Technology Implementation, Technical University of Denmark

Work designed for maximum output often treats people like expendable resources – and burnout is the predictable result. pexels/shvetsa, CC BY-SA

Input. Output. Targets met. Value created. Performance delivered. Strip work down to its essentials and for many people, this is what remains: a machine-like focus on producing, performing and optimising.

The system keeps moving – often with little concern for the human energy, attention and resilience required to keep it running. Over time, this can lead to stress, ill-health, disengagement and burnout. Almost half of employees worldwide say they’re currently burned out and nearly three-quarters of US workers report that workplace stress affects their mental health.

But exhaustion isn’t a personal failing – it’s built into the system. Indeed, this way of organising work is not accidental. It has deep roots in how modern workplaces were designed.

Much of this thinking dates back to the late 19th century and the work of Frederick Taylor, a US engineer whose ideas helped shape modern management. Taylor was widely known for his methods to improve industrial efficiency, by treating workers as parts of a machine – measured, paced and optimised.

Obviously, a lot has changed since Frederick Taylor’s time – we understand far more about mental health and people’s capacity for work. Yet, many workplaces still operate in this way – with a strict focus on performance and goals.

A new way of viewing work

These high levels of stress, ill-health and burnout made us reflect. As concern grows about exhausting natural resources in the name of profit, we began to question whether workplaces are doing the same to people – using them up for productivity, with little thought for the long-term cost.

While organisational psychology highlights motivation, engagement and well-being as drivers of performance, it often overlooks a crucial issue: what happens to people’s time, energy, skills and relationships once they are spent at work?

Many models of work assume these human resources are limitless, focusing on outputs rather than what is left behind. But without opportunities to recover and regenerate, this way of working leads to depletion, disengagement and ultimately burnout.

A man sits at a computer looking stressed, holding his head in his hands.
High performance, low battery.
pexels/diimejii, CC BY

But what if work didn’t have to use people up to get results? What if productivity and well-being weren’t in competition, but part of the same system?

Drawing on ideas from the circular economy, along with management theory and organisational psychology, we propose a different way of thinking about work. We call it circular work.

Circular work flips the usual logic. Instead of treating people’s time, energy and skills as resources to be consumed, it sees work as a cycle – where effort is matched with recovery, learning and renewal. The goal isn’t just short-term output, but work that people can sustain without burning out.

At its core, circular work connects employee well-being and organisational performance and is built around four simple ideas:

  • all human work resources are connected – energy, skills, knowledge and relationships affect each other

  • it’s possible to recover and regenerate spent work resources – rest, support, and learning help employees bounce back

  • work can build or drain resources – how work is designed determines whether people thrive or are thwarted

  • sustainable work grows from protected and renewed resources – investing in well-being and development helps to sustain people and organisations.

Humans not machines

The idea of renewing people’s energy and skills can sound radical in today’s target-driven work culture.

But renewal isn’t a luxury. It starts with a simple truth: people are not infinite or endlessly replaceable. Work can drain our energy, attention and health – sometimes in ways that take years to undo. Designing work as though this doesn’t matter comes at a real cost.

In practice, regeneration shows up in everyday management. Decisions about workload, autonomy, recovery time, recognition and support determine whether work depletes people or helps them recover and grow. Put simply, human needs and well-being have to sit at the centre of how work is organised.

Psychological safety is part of this. Regenerative workplaces are those where people can speak up, raise concerns and take reasonable risks without fear of blame.

This is where leadership really matters. Organisations need to ask hard questions about the true impact of management practices: do they drive absence, presenteeism and turnover – or do they enable learning, growth and renewal? Rewarding managers and teams who protect well-being reduces stress, retains talent and makes organisations places people want to work.

The bottom line is, as long as work is designed like a machine to maximise output, burnout will remain its most predictable outcome. But sustainable performance is possible. It just means actually designing workplaces that protect – and renew – the people working in them.


This article was commissioned as part of a partnership between
Videnskab.dk and The Conversation.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. The workplace wasn’t designed for humans – and it shows – https://theconversation.com/the-workplace-wasnt-designed-for-humans-and-it-shows-269127

How do scientists hunt for dark matter? A physicist explains why the mysterious substance is so hard to find

Source: The Conversation – USA – By David Joffe, Associate Professor of Physics, Kennesaw State University

The Coma Cluster, research into which supports the existence of dark matter. NASA, ESA, J. Mack (STScI), and J. Madrid (Australia Telescope National Facility)

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Can we generate a way to interact with dark matter with current technology? – Leonardo S., age 13, Guanajuato, Mexico


That’s a great question. It’s one of the most difficult and fascinating problems right now in both astronomy and physics, because while scientists know that the elusive substance called dark matter makes up the majority of all matter in the universe, we’ve never actually observed it directly. Dark matter is so difficult to interact with because it’s “dark,” which means it doesn’t interact directly with light in any way.

I’m a physicist, and scientists like me observe the world around us mainly by looking for signals from different wavelengths of light. So no matter what type of technology scientists use, they run into the same issue in the hunt for dark matter.

It’s not completely impossible to interact with dark matter, though, because it can interact with ordinary matter in other ways that don’t involve light. But those interactions are generally very weak. What we call dark matter is really anything that we can see only through these weaker interactions, especially gravity.

How we know dark matter exists

One way that dark matter can interact with ordinary matter is through gravity. In fact, gravity is the main reason scientists even think dark matter exists at all.

For decades, scientists have been observing how galaxies spin and move throughout the universe. Gravity acts on stars and galaxies in the same way it keeps you from floating off into space, and heavier objects have a stronger gravitational pull. At these huge scales, researchers have spotted some unexpected quirks that the gravity of visible matter alone can’t explain.

For example, almost 100 years ago, a Swiss astronomer named Fritz Zwicky studied a cluster of galaxies called the Coma Cluster. He noticed the galaxies inside it were moving very fast, so much so that they should have flown apart many millions of years ago.

The only way the cluster could have stayed together for so long is if there was much more matter holding it together with gravity than the telescope could see. This extra matter necessary to hold the galaxies together became known as dark matter.

About 40 years after Zwicky, an American astronomer named Vera Rubin looked at the individual stars moving around the centers of spiral galaxies as they rotated. She saw that the stars at the outside edges of the spiral were moving much faster than you’d expect if only the gravity from the stars you could see was keeping them from flying off into intergalactic space.

Just as with the galaxies moving around the cluster, the motion of the stars around the edges of the galaxies could be best explained if there was much more matter in the galaxies than what we could see.
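The rotation-curve argument can be put in rough numbers. For a circular orbit, the mass that must lie inside a star's orbit to hold it at speed v and radius r is M = v²r/G, so if orbital speeds stay flat farther out, the enclosed mass must keep growing. The sketch below uses illustrative round values (a speed of about 220 km/s, radii of 8 and 50 kiloparsecs), not figures from this article:

```python
# Rough estimate of the mass needed to keep a star on a circular orbit:
# M_enclosed = v^2 * r / G  (Newtonian circular-orbit balance).
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # one solar mass, kg
KPC = 3.086e19    # one kiloparsec in metres

def enclosed_mass(v_m_per_s, r_m):
    """Mass (kg) enclosed within radius r for a circular orbit at speed v."""
    return v_m_per_s**2 * r_m / G

# If all the visible mass sat inside 8 kpc, speeds at 50 kpc should fall
# roughly as 1/sqrt(r). Observed speeds stay flat instead (~220 km/s here),
# which forces the enclosed mass to grow with radius.
m_inner = enclosed_mass(220e3, 8 * KPC)
m_outer = enclosed_mass(220e3, 50 * KPC)
print(f"mass within  8 kpc: {m_inner / M_SUN:.2e} solar masses")
print(f"mass within 50 kpc: {m_outer / M_SUN:.2e} solar masses")
print(f"ratio: {m_outer / m_inner:.2f}x")  # ~6x more mass needed
```

With these inputs the inner estimate comes out near 10¹¹ solar masses, roughly the visible Milky Way, while a flat curve at 50 kpc demands about six times more: the missing portion is what astronomers call dark matter.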

A spiral-shaped galaxy with a bright spot in the center
A rotating spiral galaxy in the Coma Cluster.
NASA, ESA, and the Hubble Heritage Team (STScI/AURA); Acknowledgement: K. Cook (Lawrence Livermore National Laboratory)

More recently, scientists have combined optical telescopes that observe visible light with X-ray telescopes. Optical telescopes can take pictures of galaxies as they move and rotate. Sometimes, galaxies in these images are distorted or magnified by gravity coming from large masses in front of them. This phenomenon is called gravitational lensing, which is when the gravity around a very heavy object is so strong that it bends the light passing by it, acting like a lens.

X-ray telescopes, on the other hand, can see the clusters of hot gases that surround galaxies. By combining these two telescopes, astronomers can see galaxies as well as the gases surrounding them – all the observable matter. Then, they can compare these images with the optical results. If there’s more gravitational lensing seen than what could be caused by the gas, there must be more mass hiding somewhere and causing the lensing.

Clouds of blue and pink shown, with lots of bright spots representing galaxies shown in the background.
The picture combines optical images of the galaxies with X-ray images. The region in the pink shows the area where the X-ray telescope sees the distribution of gas around the galaxies, and the blue area shows the region where gravitational lensing can be observed. There is blue in places where there isn’t pink, so lensing is showing that there’s something else heavy there. Dark matter is again the best explanation.
NASA, ESA, CXC, M. Bradac (University of California, Santa Barbara), and S. Allen (Stanford University)

How we might be able to see dark matter

Unfortunately, all this tells astronomers only that dark matter must be there, not what it really is. The evidence for dark matter is all based on how it interacts with gravity at very large scales. It’s still “dark” to scientists in the sense that it hasn’t interacted directly with any measurement devices.

The good news is that light and gravity aren’t the only forces in the universe. A force called the weak force might be able to interact directly with dark matter and give scientists a direct signal to observe. Most of the ideas about what the dark matter might be include the possibility of it interacting through the weak force, converting energy into signals that are visible.

The weak force is not observable at normal scales of distance. But for objects the size of an atom’s nucleus or smaller, it can change one type of subatomic particle into another. The weak force can also transfer energy and momentum at very short distances – this is the main effect scientists hope to observe with dark matter. These processes might be extremely rare, but in theory they should be possible to see.

Most experiments looking to see dark matter directly are searching for signals of rare weak interactions in an underground detector, or for gamma rays that can be seen in a special gamma-ray telescope.

In either case, a signal from dark matter would likely be very faint, resulting from an interaction that can’t be explained any other way, or a signal that doesn’t seem to have any other possible source. Even if the effect is faint, it might still be possible to observe, and any such signal would be an exciting step forward in being able to see the dark matter more directly.

In the end, it may be a combination of signals from experiments deep underground, in particle colliders, and different types of telescopes that finally lets scientists see dark matter more directly. Whichever technology ends up being successful, hopefully sometime soon the matter that makes up our universe will be a little less dark.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

David Joffe receives funding from NASA through a grant from the Georgia Space Grant Consortium.

ref. How do scientists hunt for dark matter? A physicist explains why the mysterious substance is so hard to find – https://theconversation.com/how-do-scientists-hunt-for-dark-matter-a-physicist-explains-why-the-mysterious-substance-is-so-hard-to-find-269876

Infusing asphalt with plastic could help roads last longer and resist cracking under heat

Source: The Conversation – USA – By Md S Hossain, Professor of Civil Engineering, University of Texas at Arlington

A stretch of road near Rockwall, Texas, paved with plastic-infused asphalt. Md. Sahadat Hossain

Globally, more than 400 million tons of plastic are produced each year, and less than 10% is recycled. Much of the rest ends up burned, buried or drifting through waterways, a problem that’s only getting worse.

As a civil engineer, I started asking a simple question: Instead of throwing used plastic away, what if we could build something useful with it?

That question led to a technology that mixes small amounts of recycled plastic with asphalt – the black, sticky material used to make roads and parking lots. The result is a stronger road that lasts longer and keeps some used plastic out of the environment.

You can see these roads on campus at the University of Texas at Arlington, where my team has paved test sections in parking lots. Perhaps more importantly for testing this technology at scale, we have constructed a one-mile section of plastic-infused road in Rockwall, Texas, a city near Dallas. We’ve gotten interest from more cities in and outside Texas as well.

My goal is to take one problem – plastic pollution – and use it to fix another: deteriorating roads.

Where the idea came from

I grew up in a low-income neighborhood in Bangladesh, near a large dump site. As a child, I noticed that people living closest to the piles of waste were often sick, while those farther away were healthier.

At the time, I didn’t know the science behind it – I just saw neighbors having to choose between buying medicine and buying dinner. That memory left a long-lasting impact on me.

Years later, when I became an engineer, I learned that poor waste management doesn’t just harm the environment – it harms people. That realization became the foundation of my work.

How plastic roads work

Traditional asphalt is made from a mix of stones, sand and a petroleum-based binder called bitumen, which holds everything together. In my research team’s process, we replace a small part of that bitumen – about 8% to 10% – with melted plastic from everyday items, such as single-use plastic bags and plastic bottles. For our plastic road construction project near Dallas, we used 4.5 tons of plastic waste for nearly a mile of a one-lane road.
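The figures above allow a rough scaling estimate. The sketch below is a back-of-the-envelope calculation, not project engineering: it takes the reported 4.5 tons of plastic per lane-mile from the Rockwall project as its only data point, and the 10-mile, two-lane road in the example is a hypothetical.

```python
# Back-of-the-envelope estimate of recycled plastic needed for a
# plastic-infused road, using the article's reported figure of about
# 4.5 tons of plastic per lane-mile (Rockwall, Texas project).
# All other numbers are illustrative assumptions, not project data.

PLASTIC_TONS_PER_LANE_MILE = 4.5  # reported figure

def plastic_needed(lane_miles: float) -> float:
    """Estimate tons of recycled plastic for a road of the given lane-miles."""
    return lane_miles * PLASTIC_TONS_PER_LANE_MILE

# A hypothetical 10-mile, two-lane road:
tons = plastic_needed(10 * 2)
print(f"Estimated plastic needed: {tons:.1f} tons")  # 90.0 tons
```

At this rate, a modest municipal paving program could absorb tens of tons of plastic waste per project, which is the scale the author describes targeting.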

We first clean the plastic, then shred it into small flakes. Finally, we mix it into the asphalt at high temperatures. These steps ensure that it melts completely and bonds tightly, leaving no loose plastic behind.

This process is like adding rebar to concrete: The plastic adds flexibility and strength. Roads with this mix can better handle extreme temperatures and heavy traffic. In hot places, that means fewer cracks and potholes.

During an extreme heat wave in April 2024, a plastic road constructed in Dhaka, Bangladesh, showed no visible signs of distress or cracking, while many conventional roads in the country developed visible cracks during the same period.

Heating asphalt in a large piece of construction equipment.
The team used plastic-infused asphalt to pave a stretch of road.
Md Sahadat Hossain

It also reduces the demand for new petroleum-based materials: plastic-infused roads replace part of the bitumen, itself a petroleum-based ingredient, with waste plastic that already exists.

The plastic waste problem

Plastic waste has grown dramatically over the past several decades. In the U.S., plastic waste has increased every year since the 1960s, with the steepest rise between 1980 and 2000.

In 2018 alone, landfills received nearly 27 million tons of plastic, making up 18.5% of all municipal solid waste nationwide. That’s a staggering amount of material sitting unused.

Plastic-infused asphalt can also save money. Because it lasts longer and resists cracking, cities may spend less on repairs and maintenance. In Rockwall, for example, early estimates suggest these roads could extend the pavement’s life by several years.

A team using shovels and broom-like tools to smooth a patch of new pavement.
The construction team finishes up paving a stretch of road with plastic-infused pavement.
Md Sahadat Hossain

Under extreme heat, bitumen can melt. During a performance evaluation of a plastic road test section in Bangladesh, we found that adding plastic to the mix increases the road’s heat resistance. These results are especially helpful for states like Texas that deal with extreme heat over the summer. For our sites in UTA’s parking lot and in Rockwall, the pavement has so far stayed intact on days when temperatures surpassed 100 degrees Fahrenheit.

Overcoming challenges

But there are still challenges. Scaling up production requires a consistent supply of clean, sorted plastic, which not all cities have the infrastructure to provide. Some types of plastic can’t be safely melted or may release harmful fumes if not processed correctly. We’re studying these issues closely to make sure the process is safe.

There are also questions about what happens when plastic roads reach the end of their life. Could they release microplastics – tiny plastic fragments – as they wear down? Early research suggests the risk is low because the plastic is bound within the asphalt, but we’re continuing to monitor it.

A petri dish full of tiny shards of colorful plastics
Microplastics are tiny bits of plastic that show up in the environment.
Svetlozar Hristov/iStock via Getty Images Plus

My own lab studies show very minimal microplastic release, and a 2024 study found that the release of microplastics from recycled plastic-asphalt was estimated to be a thousand times less than the release of rubber particles from worn tires.

Eventually, we may need to come up with alternative materials for these roads if plastic waste begins to decline. But in the meantime, this type of waste is still readily available.

Building toward a sustainable future

Our next steps involve expanding this technology to more regions, testing different types of plastic blends and ensuring that every road built this way is durable, affordable and environmentally safe.

Right now, we are working on testing and implementing plastic roads in cities beyond Texas and even in other countries. We have also filed for a patent on the technology and plan to eventually commercialize it.

When I see plastic roads being built in Bangladesh – sometimes not far from where I grew up – I think back to the people who lived near those dump sites. This work isn’t just about roads or recycling. It’s about dignity and keeping at least some waste away from the places where people live.

The Conversation

Md S Hossain is listed under a patent filed for plastic-infused asphalt.

ref. Infusing asphalt with plastic could help roads last longer and resist cracking under heat – https://theconversation.com/infusing-asphalt-with-plastic-could-help-roads-last-longer-and-resist-cracking-under-heat-264156

No animal alive today is ‘primitive’ – why are so many still labeled that way?

Source: The Conversation – USA – By Kevin Omland, Professor of Biological Sciences, University of Maryland, Baltimore County

A platypus has evolved to fit its particular ecological niche. Joao Inacio/Moment via Getty Images

We humans have long viewed ourselves as the pinnacle of evolution. People label other species as “primitive” or “ancient” and use terms like “higher” and “lower” animals.

A drawing of a tree shape with monera and amoebae at the base of the trunk, many branches labeled with other organisms, and man at the very top
‘Man’ is at the very top looking down at all other forms of life in Ernst Haeckel’s drawing.
Ernst Haeckel/Photos.com via Getty Images Plus

This anthropocentric perspective was entrenched in 1866, when German scientist Ernst Haeckel drew one of the first trees of life. He placed “Man,” clearly labeled, at the top. This illustration helped establish the popular view that we are the ultimate goal of evolution.

Modern evolutionary biology and genomics debunk that flawed perspective, showing there is no hierarchy in evolution. All species alive today, from chimpanzees to bacteria, are cousins that each have equally long lineages, rather than ancestors or descendants.

Unfortunately, these outdated notions remain prevalent in scientific journals and science journalism. In my new book, “Understanding the Tree of Life,” I explore why it is fundamentally misleading to view any current species as primitive, ancient or simple. As an evolutionary biologist, I offer an alternative view that emphasizes evolution’s complex, nonhierarchical, interconnected history.

Not primitive, just different

Egg-laying mammals, the monotremes, are frequently labeled the most “primitive” living mammals. This category includes the platypus and four species of echidnas. Indeed, their egg-laying is an ancient characteristic shared with reptiles.

But platypuses also have many unique recent adaptations that make them well suited to their lifestyle: They have webbed feet for swimming and a bill with specialized electroreceptors that detect prey in the mud. Males have spurs with venom that they can use to defend themselves against rivals. If you take a platypus’s view, they’re the pinnacle of evolution for their specific ecological niche.

prickly looking echidna digging for food under a log
Echidnas have just what it takes to flourish in their unique niche.
Chris Beavon/Moment via Getty Images

Echidnas may seem primitive, especially because they lack a capability that humans have – giving birth to live young. Yet they possess many extraordinary traits that humans lack. Echidnas are known for their outer covering of protective spines. They also have powerful claws for digging, a sensitive beak and a long sticky tongue, all of which they use when foraging for ants and termites. In a head-to-head competition foraging for prey in a termite mound, an echidna would easily outperform any human.

Other mammals native to Australia also turn up on lists of primitive mammals, such as many species of marsupials – pouched mammals, including kangaroos, koalas and wombats. These species generally give birth to small, minimally developed young that move to the mother’s pouch where they complete development. Pouch development may seem inferior to the human way, but it does have advantages. For example, kangaroos can simultaneously nurture young at three stages of development.

Evolutionary tree appearance depends on focus

Marsupials such as opossums, or monotremes such as the platypus, are often shown at the bottom or left side of an evolutionary tree. However, that does not mean that they are older, more primitive or less evolved.

Evolutionary trees – what scientists call phylogenies – show cousin relationships. Just as your second or third cousin is no more primitive than you are, it is misleading to think of a koala or echidna as primitive because of where they are depicted on these trees.

When scientists and journalists choose which species to include in the evolutionary trees in their publications, it can influence how the public perceives these species. But species shown lower on the page are not “lower” on some evolutionary scale.

Rather, they are placed there because the focus of many of those trees is on placental mammals, such as humans, other primates, carnivores, rodents and so on. When the focus is on placental mammals, it makes sense to include one or two species of marsupials as comparisons for reference.

diagram showing family relationship of different marsupial species with animals in silhouette at the top, a human is included for comparison.
A phylogenetic tree focused on marsupials shows humans as one of the species included for comparison.
Spiekman, S., Werneburg, I. Sci Rep 7, 43197 (2017), CC BY

In contrast, in a tree focused on marsupials, one or two placental mammals could be included at the bottom of the page for comparison.

Why understanding the tree of life matters

Viewing humans as the goal of evolution leads to a misunderstanding of the entire evolutionary process. Since evolution is the conceptual foundation for all biology, this flawed perspective can hinder all biological and biomedical science.

Mastering a modern understanding of evolutionary trees is crucial to advances in fields ranging from animal behavior and physiology to conservation and biomedicine. For example, because rhesus monkeys are much more closely related to us than are capuchins, rhesus monkeys are generally better subjects for preliminary tests of human vaccines. Opossums, incorrectly considered to be primitive, are a great species for providing a broader framework for studies of neurobiology and aging because they are distantly related to us, not because they are lower or more ancestral.

Grasping the profound reality that humans are not the pinnacle of evolution, but one branch among many, is foundational for all modern biology. Understanding the tree of life is central to fully embracing the shared modern status of all animals, from platypuses to people.

The Conversation

Kevin Omland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. No animal alive today is ‘primitive’ – why are so many still labeled that way? – https://theconversation.com/no-animal-alive-today-is-primitive-why-are-so-many-still-labeled-that-way-266208

How the law can add to child sex trafficking victims’ existing trauma

Source: The Conversation – USA – By Kate Price, Associate Research Scientist at the Wellesley Centers for Women, Wellesley College

Most U.S. states retain the right to arrest and prosecute children for prostitution. Douglas Sacha/Getty Images

The January 2026 release of additional files related to the Justice Department’s investigation of convicted sex offenders Jeffrey Epstein and Ghislaine Maxwell has brought renewed attention to the late financier’s connections to the world’s rich and powerful.

However, the failure to redact identifying victim information and explicit photos has also brought unwanted attention to survivors. The lack of consideration for their welfare illustrates how legal proceedings can add to child sex trafficking victims’ existing trauma and burden instead of offering a stable path forward.

Some states have passed laws in recent years to protect child victims of sex trafficking. But at the same time, most states have passed laws that allow those same children to be arrested or prosecuted for prostitution. It’s a tug of war between advocates, law enforcement and policymakers to determine the best approach for keeping vulnerable children safe from pimps, predators and dangerous family members.

Often these intentions to “keep kids safe” end up harming the very children the laws are supposed to protect, by identifying them as criminals rather than victims.

As a sociologist and scholar who researches the commercial sexual exploitation of children, I believe Americans have to look at the many different ways states treat sexually exploited minors to fully understand this issue and the harm that is being done.

Retraumatizing victims

When approved in 2000, the federal Trafficking Victims Protection Act established that children under 18 who experience commercial sexual exploitation are sex trafficking victims.

Criminally charging a child with prostitution, as most states allow, asserts they are willfully participating in the commercial sex trade, while identifying a minor as a sex trafficking victim recognizes they are not in this situation by choice.

Some states require minors to prove a third party forced, deceived or coerced them into prostitution to be considered a child sex trafficking victim. Their innocence, despite their age, is not automatically assumed. This approach risks retraumatizing victims by labeling and stigmatizing them as criminal, as voluntary participants in the commercial sex trade.

Examining these state statutes is important because these minors are more likely to interact with local law enforcement than federal agents. That’s because in the U.S. federalist system, states have more power than the national government to set rules regarding crime.

Arresting and prosecuting minors for prostitution

As of 2025, 15 states do not arrest and prosecute children for prostitution, while seven states allow a minor to be arrested but not prosecuted for this charge, according to my unpublished research. As a result, sexually exploited minors can be criminalized in 35 states for their maltreatment because they can be charged or prosecuted for prostitution.

These laws determine how courts identify commercially sexually exploited minors, as victims or criminals.

Safe harbor laws have been adopted by 31 states as a legal strategy to divert sex trafficked minors from the criminal legal system. These measures connect them to specialized services, including trauma-informed health care and safe housing. But safe harbor statutes do not guarantee that children will be protected from arrest or prosecution for prostitution.

For example, New York’s 2008 safe harbor law requires a child charged with prostitution to admit they participated in this crime. The child also has to explain why they shouldn’t be held liable for the charge.

Another common strategy adopted by some states, including Rhode Island, requires a minor to fulfill a specific “child sex trafficking victim” definition – such as proving force, fraud or coercion by a third party – to avoid being criminalized for prostitution. Yet mandating sexually exploited minors to meet such requirements places the burden of proof on the child.

Conversely, Massachusetts’ safe harbor law does not afford any protections to minors, allowing a child to be arrested and prosecuted for prostitution. State and local police collaborate with child protective services and are trained not to arrest sexually exploited minors. But some officials argue law enforcement needs the threat of criminal charges to pressure minors they see as “noncompliant” to accept services or leave trafficking situations.

This approach blurs the line between criminal legal mechanisms and social work. It positions police as “helpers” who expect trafficked youth to accept support or risk criminal punishment.

In sum, unlike federal law, which recognizes all sexually exploited minors as victims, some state authorities present minors with a choice: comply with law enforcement or prove their innocence.

The adultification of child victims

These demands that shift legal burdens to sexually exploited minors signal that law enforcement and legislators expect them to have the capacity to make mature and rational choices. Yet, neuroscience research indicates juveniles don’t have the same decision-making capacity as adults until their early to mid-20s.

Further, sexually exploited minors with trauma may appear uncooperative in stressful situations, such as being detained or arrested for prostitution.

By blaming sex trafficked minors for “making bad choices,” the criminal legal system treats commercial sexual exploitation victims as complicit. And this may lead to prostitution charges instead of support. Furthermore, focusing on a child’s “choices” does not address the financial, familial and traumatic adversities that make victims vulnerable to sexual violence and exploitation in the first place.

Commercial sexual exploitation risk factors include complex post-traumatic stress disorder, low socioeconomic status, limited educational access and child sexual abuse prior to this exploitation. That includes exploitation from fraught family living situations where a parent, relative or caregiver sexually exploits a child.

Racial inequality in prostitution charges

Similarly, racial bias has deeply influenced trafficking legislation.

In 1910, Congress passed The Mann Act, also known as the White-Slave Traffic Act. This measure framed commercial sexual exploitation as a problem affecting only white women and girls, erasing the exploitation of people of color.

This pattern continues today. Black and brown children in the U.S. are more likely to be arrested and detained for prostitution than all other racial groups. Children who live in states with higher levels of structural economic inequality, which affects children of color at higher rates than white children, are at higher risk of being arrested and prosecuted for prostitution.

My research with Keith Bentele indicates that states with higher levels of structural economic inequality are less likely to adopt legislation protecting children from arrest and prosecution for prostitution.

Increasing compassion for victims

Without addressing these structural inequalities and the lack of a social safety net, sex trafficked children, particularly children of color and LGBTQ+ youth, are at risk of facing further marginalization and criminalization for prostitution.

One state has risen above the rest in recognizing and addressing these systemic barriers. Minnesota’s “No Wrong Door” framework utilizes a public health approach and is regarded as the gold standard of state-level commercial sexual exploitation legislation.

Protecting youth up to age 24 from prostitution charges, Minnesota offers housing and medical services to victims instead of criminal punishment. It also coordinates trauma-informed training for professionals, such as police and social workers.

An evaluation of this model indicates that it has successfully increased compassion for youth victims in the community, particularly among law enforcement.

Mallika Sunder, a student at Wellesley College and intern in its Wellesley Centers for Women, co-authored this article.

The Conversation

Kate Price does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How the law can add to child sex trafficking victims’ existing trauma – https://theconversation.com/how-the-law-can-add-to-child-sex-trafficking-victims-existing-trauma-271922

Journalism may be too slow to remain credible once events are filtered through social media

Source: The Conversation – USA – By Charles Edward Gehrke, Deputy Division Director of Wargame Design and Adjudication, US Naval War College

House Speaker Mike Johnson updates reporters about budget talks on Capitol Hill. AFP/Roberto Schmitt via Getty Images

In the first weeks after Russia’s invasion of Ukraine in 2022, a strange pattern emerged in Western media coverage. Headlines oscillated between confidence and confusion. Kyiv would fall within days, one story would claim, then another would argue that Ukraine was winning. Russian forces were described as incompetent, then as a terrifying existential threat to NATO.

Analysts spoke with certainty about strategy, morale and endgames, but often reversed themselves within weeks. To many news consumers, this felt like bias – either pro-Ukraine framing or anti-Russia narratives. Some commentators accused Western media outlets of cheerleading or propaganda.

But I’d argue that something more subtle was happening. The problem was not that journalists were biased. It was that journalism could not keep pace with the war’s informational structure. What looked like ideological bias was, more often, temporal lag.

I serve in the Navy as a war gamer. The most critical part of my job is identifying institutional failures. Trust is one of the most critical of these, and in this sense, the media is losing ground.

The gap between what people experience in real time and what journalism can responsibly publish has widened. This gap is partly where trust erodes. Social media collapses the distance between event, exposure and interpretation. Claims circulate before journalists can evaluate them.

This matters in my world because the modern battlefield is not just physical. Drone footage circulates instantly. Social media channels release claims in real time. Intelligence leaks surface before diplomats can respond.

These dynamics also matter for the public at large, which encounters fragments of reality, often through social media, long before any institution can responsibly absorb and respond to them.

Journalism, by contrast, is built for a slower world.

Slow journalism

At the core of their work, journalists observe events, filter signal from noise, and translate complexity into narrative. Their professional norms – editorial gatekeeping, standards for sourcing, verification of facts – are not bureaucratic relics. They are the mechanisms that produce coherence rather than chaos.

But these mechanisms evolved when information arrived more slowly and events unfolded sequentially. Verification could reasonably precede publication. Under those conditions, journalism excelled as a trusted intermediary between raw events and public understanding.

These conditions no longer exist.

A Ukrainian medic treats a soldier for leg injuries.
As in other conflicts, early reports out of battles in Ukraine sometimes ended up being inaccurate.
AP Photo/Leo Correa

Information now arrives continuously, often without clear provenance. Social media platforms amplify fragments of reality in real time, while verification remains necessarily slow. The key constraint is no longer access; it is tempo.

Granted, reporters often present accounts as events are occurring, whether on live broadcasts or through their own social media posts. Still, in this environment, journalism’s traditional strengths become sources of lag.

Caution delays response. Narrative coherence hardens fast. Corrections then feel like reversals rather than refinements.

Covering real-time events

The war in Ukraine has made this failure mode unusually visible. Modern warfare generates data faster than any institution can metabolize. Battlefield video and real-time casualty claims flood the system continuously.

For their part, journalists are forced to operate from an impossible position: expected to interpret events at the same speed they are livestreamed. And so journalists sometimes have to improvise.

Early coverage of the war leaned on simplified frames, including Russian incompetence, imminent victory and decisive turning points. These frames provided provisional stories meant to satisfy intense public demand for clarity.

As the war evolved, however, those stories collapsed.

A woman wearing a yellow jacket holds her phone to record ICE agents in one hand and her dog's leash in the other.
Citizen journalists can often record and upload images or video of events faster than traditional news outlets will produce a story.
SOPA Images via Getty Images

This did not mean the original reporting was malicious. It meant the narrative update cycle lagged behind the underlying reality. What analysts experienced as iterative learning, audiences experienced as contradiction.

The acceleration trap

This forces journalism into a reactive posture. Verification trails amplification, meaning accurate reports often arrive after the audience has already formed a first impression.

This inverts journalism’s historical role. Audiences encounter raw claims first and journalism second. When the two diverge, journalism appears disconnected from reality as people experienced it.

Over time, this produces a structural shift in trust. Journalism is no longer perceived as the primary interpreter of events, but as one voice among many, arriving late. Speed becomes a proxy for relevance. Interpretation without immediacy is discounted.

Although partisan bias certainly exists, it is insufficient to explain the systemic incoherence Americans are witnessing.

Can journalism adapt?

Institutions optimized for one tempo rarely adapt cleanly to another. Journalism is now confronting the risk that its interpretive cycle no longer matches the speed of the world it is trying to explain.

Its future credibility will depend less on accusations of bias or even error than the question of whether it can reconcile rigor with speed, perhaps by trading the illusion of early certainty for the transparency of real-time doubt.

If it cannot, trust will continue to drain. An institution that evolved to help society see is falling behind what society is already watching.

The opinions and views expressed are those of the author alone and do not necessarily represent those of the Department of the Navy or the U.S. Naval War College.

The Conversation

Charles Edward Gehrke does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Journalism may be too slow to remain credible once events are filtered through social media – https://theconversation.com/journalism-may-be-too-slow-to-remain-credible-once-events-are-filtered-through-social-media-273748

Sixth year of drought in Texas and Oklahoma leaves ranchers bracing for another harsh summer

Source: The Conversation – USA (2) – By Joel Lisonbee, Senior Associate Scientist, Cooperative Institute for Research in the Environmental Sciences, University of Colorado Boulder

Cattle auctions aren’t often all-night affairs. But in Texas Lake Country in June 2022, ranchers facing dwindling water supplies and dried out pastures amid a worsening drought sold off more than 4,000 animals in an auction that lasted nearly 24 hours – about 200 cows an hour.

It was the height of a drought that has gripped the Southern Plains for the past six years – a drought that is still holding on in much of the region in 2026.

The drought cost the agriculture industry across Kansas, Oklahoma and Texas an estimated US$23.6 billion in lost crops, higher feed costs and selling off cattle from 2020 through 2024 alone. As rangeland dried out, it also fueled devastating wildfires.

Historically, droughts of this magnitude happen in the Southern Plains about once a decade, but the severe droughts of this century have been lasting longer, leaving water supplies, native rangelands and farms with little time to recover before the next one hits.

Many cattle producers and rangelands were still recovering from a severe 2010-2015 drought when a flash drought hit western Texas in spring 2020, marking the beginning of the current multibillion-dollar, multiyear and multistate drought. Ample spring rainfall in 2025 and severe flooding in central Texas that year weren’t enough to end the drought, and a powerful winter storm in late January 2026 missed the driest parts of the region.

A map shows heavy precipitation across a large part of the country, but it mostly missed the areas facing the worse drought in the Southern Plains.
Precipitation from a severe winter storm in late January 2026, shown in blue and measured in inches, largely missed the areas with the worst drought conditions, indicated by red contour lines.
UC Merced, NDMC

In a recent study with colleagues at the Southern Regional Climate Center and the National Integrated Drought Information System, we assessed the causes and damage from the ongoing drought in the Southern Plains.

We found three key reasons for the enduring drought and its damage: rising temperatures and a La Niña climate pattern; water supply shortages; and lingering economic impacts from the previous drought.

Weather and climate helped drive the drought

The Southern Plains is known to be a hot spot for rapid drought development, and the ongoing drought that started in 2020 is no exception.

Documented “flash droughts” – defined as periods of rapid drought onset or intensification of existing droughts – occurred at least five times in the region from 2020 to 2025. As global temperatures rise, research warns that flash drought events will become more frequent and severe.

Maps show how the current drought progressed and moved around the region. It was at its height in 2020-2023
The U.S. Drought Monitor’s monthly updates from January 2020 through January 2026 show how drought moved around in the Southern Plains over those years but never let go. Darker colors reflect the intensity of drought in each location.
Joel Lisonbee; compiled from U.S. Drought Monitor

For the southern part of the Southern Plains, winter precipitation is closely linked to the El Niño–Southern Oscillation, a climate pattern that affects weather around the world. Five of the past six years exhibited a La Niña pattern, which typically means the region sees winters that are warmer and drier than normal.

La Niña was likely the primary driver – although not the only driver – of the drought for Texas and southwest Oklahoma, and one of the reasons drought conditions have continued into 2026.

The Southern Plains have a long history with severe droughts. The Dust Bowl of the early 1930s may be the best-known example. But a history with drought doesn’t make it any easier to manage when crops and water supplies dry up.

Deeply rooted water shortages

The heat and dryness since 2020 have left many of the region’s rivers, reservoirs and even groundwater reserves well below average.

San Antonio’s reservoirs all reached record-low levels in 2024 and 2025, as did the Edwards Aquifer, which provides water for roughly 2.5 million people. They were still low as 2026 began. Surface water and groundwater resources across central and western Texas have been depleted to the point that even a few big storms can’t replenish them.

A few major rivers flow into the Southern Plains from other drought-affected regions. Consider the Rio Grande, which begins in Colorado and winds through New Mexico and along Texas’ southern border: Not only has the Lower Rio Grande Valley in southern Texas missed out on needed precipitation this winter; so have the Rio Grande headwaters in southern Colorado.

Colorado is facing a snow drought in winter 2026, as is much of the western U.S. If it continues, there will be less snowmelt come summer to feed rivers, such as the Rio Grande, or fill reservoirs. In early February, the Elephant Butte, Amistad and Falcon reservoirs, along the Rio Grande, were only 11%, 34%, and 20% full, respectively.

Lingering economic impacts

Like water supplies, the economy doesn’t just recover when the rains return.

One of the reasons the current drought has been so costly is that parts of the region had not fully recovered from the 2010-2015 drought when the latest one began in 2020. With only a five-year break between droughts, the landscape behaved like someone with an already weakened immune system who caught a cold.

Severe droughts over time in the Southern Plains
The percentage of land in different levels of drought or wetness for each month based on the nine-month Standardized Precipitation Index leading up to the selected date. Reds indicate drier conditions; blues indicate wetter conditions.
National Integrated Drought Information System, NOAA Drought.gov

During the 2010-2015 drought, cattle producers in Texas sold off about 20% of the statewide herd as water became scarce and rangeland dried up. Rebuilding a herd after a drought is a slow process. Pasture recovery can take a year or more, and a newborn heifer will take two years to mature and produce her own first calf.

Cattle herds had still not returned to pre-2010 levels when the 2022 drought peak forced another mass sell-off. From 2020 through 2024, Texas’ herd size declined from 13.1 million to 12 million; Oklahoma’s declined from 5.3 million to 4.7 million; and Kansas’ declined from 6.5 million to 6.15 million.

Looking beyond livestock, a large percentage of the Southern Plains’ crops failed in 2022, the peak year of the drought. In Texas, 25% of the corn crop was planted but never harvested, and 45% of the soybean crop was similarly abandoned. A normal season would have yielded a $2.4 billion cotton crop in Texas, but 74% of that crop was abandoned, slashing its value to roughly $640 million.

Ending the Southern Plains drought

Is the end in sight? With La Niña fading in early 2026 and its opposite, El Niño, potentially on the horizon, there’s a chance for wetter conditions that could reduce the drought in the fall and winter months of 2026.

But the Southern Plains still have to get through spring and summer first. Ending a drought like this requires consistent precipitation over several months, and drought conditions are likely to get worse before they get better.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Sixth year of drought in Texas and Oklahoma leaves ranchers bracing for another harsh summer – https://theconversation.com/sixth-year-of-drought-in-texas-and-oklahoma-leaves-ranchers-bracing-for-another-harsh-summer-275219

Philadelphia was once a sweet spot for chocolatiers and other candymakers who made iconic treats for Valentine’s Day and other holidays

Source: The Conversation – USA (2) – By Jared Bahir Browsh, Assistant Teaching Professor of Critical Sports Studies, University of Colorado Boulder

S.F. Whitman & Sons introduced the Whitman’s Sampler, an assortment of its popular chocolates, in 1912. HUM Images/Universal Images Group via Getty Images

Many of America’s iconic holiday candies have Philadelphia or Pennsylvania roots – like Peeps on Easter, Reese’s peanut butter cups on Halloween, and a good, old-fashioned Whitman’s Sampler box of chocolates on Valentine’s Day.

As a Philadelphian and a cultural historian who teaches students about the history of American corporations, I find that the role of the city in the nation’s food history often comes up in my class.

Philadelphia was one of the largest port cities in the U.S. through the early 20th century. Sugar and other candy ingredients were readily available from Delaware River docks. Improvements to sugar refining made the product significantly cheaper during the first half of the 19th century, while the Second Industrial Revolution, in the late 1800s and early 1900s, expanded transportation and trade.

This led to a dramatic increase in candymakers and confectioners in Philadelphia. Many, like Whitman’s Chocolates, one of the oldest still in existence, were concentrated in the Old City neighborhood.

Old City was also home to the oldest candy distributor in the country. Casani Candy Company was founded in 1865. While the company now operates across the Delaware River in Pennsauken, New Jersey, it continues to distribute hundreds of products, including Asher’s candy, which was founded in Philadelphia in 1892.

A box of chocolates

The company that would become Wilbur’s Chocolate was founded in 1865 by Henry Oscar Wilbur and Samuel Croft. After several moves, and a split between the founders in 1884, Wilbur opened a new facility in 1887 at the corner of Third and New streets, where the Chocolate Works condominiums are located today.

There, the company began production of its famous Buds, made by pouring hot liquefied chocolate into molds that resembled flower buds.

Phillip Wunderle, maker of gumdrops and other candies, set up shop in Northern Liberties, just north of Old City, in 1871. An employee of Wunderle Candy Company named George Renninger is often credited with the invention of candy corn, the iconic Halloween staple. However, it would be a decade before this sugary treat, also called “chicken feed,” became popular.

Wunderle also employed a salesman who would go on to become a candy legend: Milton Hershey.

In 1900, Hershey revolutionized the chocolate industry by introducing the Hershey Bar, the first mass-produced milk chocolate in the United States. Seven years later, the Hershey Company introduced a bite-sized, teardrop-shaped chocolate similar to Wilbur’s Buds. Legend has it that the name, Hershey’s Kisses, originated from the sound of the machine that manufactured the candies, but several other candies with the name predated Hershey’s.

As Hershey grew more successful, Whitman’s looked for a way to maintain its market share. Whitman’s advertised heavily after the Civil War, and by the end of the 19th century, promoted its products with suggestive ads that linked chocolate with romance.

In 1912, Whitman’s introduced its Sampler box. It became a Valentine’s Day staple, especially after it became available in a heart-shaped box – a marketing stunt that English chocolate brand Cadbury reportedly started in 1868.

Buckets of Dubble Bubble along the bench in the Cincinnati Reds dugout before a baseball game against the Oakland Athletics.
Jason Mowry/Getty Images

Chewing gum and movie snacks

The candy market in Philly – and nearby Hershey, Pennsylvania – continued to grow during the Roaring ‘20s.

A former dairy manager at Hershey named H.B. Reese built his own candy factory in Hershey in 1926, and two years later, he introduced his famous Peanut Butter Cups. Reese’s merged with Hershey’s in 1963 and later introduced their popular candy in different holiday shapes, like Easter eggs, Christmas trees and Valentine’s Day hearts.

Another brand now owned by Hershey is the York Peppermint Pattie, a chocolate-covered soft mint candy introduced in 1940 in a town 40 miles south of Hershey.

Back in Philadelphia, Frank H. Fleer, an inventor of Chiclets, the peppermint-flavored candy-coated gum, founded his confectionery company in 1885 in the Fairmount neighborhood. Fleer sold the invention to the Trenton, New Jersey-based American Chicle Company in 1914. In 1923, Fleer Corporation first included sports cards with its candy, and in 1928, company accountant Walter Diemer helped perfect the formula for Dubble Bubble, the first bubble gum.

Goldenberg’s Peanut Chews were originally produced for troops to snack on during World War I.
AP Photo/Matt Rourke

Meanwhile, Goldenberg Candy Company, which was founded in Philadelphia in 1890, introduced its Peanut Chews in 1917 as an energy source for troops during World War I. Goldenberg’s Peanut Chews are now owned by Just Born, which makes the popular Easter candy Peeps and has its headquarters in Bethlehem, Pennsylvania. However, Peanut Chews are produced out of a facility in the Holmesburg section of Northeast Philadelphia.

Further establishing southeast Pennsylvania as the chocolate capital of America was the emergence of Philadelphia-based Blumenthal Brothers Chocolate Company in 1909. In the mid-1920s, it began producing candy for movie concessions after being approached by Philadelphia concessions entrepreneur Jacob Beresin, when some theaters banned popcorn for being too messy. Blumenthal’s Goobers, Raisinets and Sno-Caps are still popular movie snacks, and a sweet complement to date night.

The post-World War II era brought a number of business and market changes that led many of these candy companies to move out of Philadelphia.

Yet vestiges of Philadelphia’s candy dominance can still be found around the city. For unique handmade candy this Valentine’s Day, Philly residents can visit Shane’s Confectionery, which is arguably the oldest continuously operated candy shop in America. (There’s some debate about that claim because the space has been a candy shop since 1863, but Shane’s didn’t open until 1910.) And stop back in March to pick up some “Irish Potatoes” – coconut cream rolled in cinnamon – for St. Patrick’s Day.

A faded sign on the side of the former Wilbur Chocolate Co. complex in the Old City neighborhood of Philadelphia on May 7, 2013.
AP Photo/Matt Rourke


The Conversation

Jared Bahir Browsh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Philadelphia was once a sweet spot for chocolatiers and other candymakers who made iconic treats for Valentine’s Day and other holidays – https://theconversation.com/philadelphia-was-once-a-sweet-spot-for-chocolatiers-and-other-candymakers-who-made-iconic-treats-for-valentines-day-and-other-holidays-274714