How a billion-dollar drug was discovered in the soil of Easter Island (and why scientists and the pharmaceutical industry owe a debt to Indigenous peoples)

Source: The Conversation – in French – By Ted Powers, Professor of Molecular and Cellular Biology, University of California, Davis

The Rapa Nui people are virtually absent from the history of the discovery of rapamycin as it is usually told. Posnov/Moment/Getty

The discovery of rapamycin, a new antibiotic, on Easter Island in 1964 marked the beginning of a multibillion-dollar pharmaceutical success story. Yet that story has completely obscured the people and the political dynamics that made the identification of this “wonder drug” possible.

Named for the island’s Indigenous name, Rapa Nui, rapamycin was first used as an immunosuppressant to prevent the rejection of transplanted organs and to improve the success rate of stent implantation; stents are small metal mesh tubes used to prop open arteries in the treatment of coronary artery disease, a condition in which the arteries that feed the heart progressively narrow.

Its use has since expanded to the treatment of several types of cancer, and researchers are now exploring its potential for managing diabetes and neurodegenerative diseases, and even for countering the effects of aging. Studies reporting rapamycin’s ability to extend lifespan or fight age-related diseases seem to appear almost daily. A search of PubMed, the biomedical literature search engine, turns up more than 59,000 articles mentioning rapamycin, making it one of the most talked-about drugs in medicine.

Yet although rapamycin is ubiquitous in science and medicine, how it was discovered remains largely unknown to the public. As a scientist who has spent his career studying its effects on cells, I felt the need to better understand its history.

In that respect, the work of historian Jacalyn Duffin on the Medical Expedition to Easter Island (METEI), a scientific expedition organized in the 1960s, has completely changed the way many of my colleagues and I now think about our field of research.

Uncovering rapamycin’s complicated legacy raises important questions about the systemic biases that exist in biomedical research, as well as about the debt pharmaceutical companies owe to the Indigenous lands from which they extract their flagship molecules.

Why so much interest in rapamycin?

Rapamycin works by inhibiting a protein called target of rapamycin kinase, or TOR, one of the main regulators of cell growth and metabolism. Together with partner proteins, TOR controls how cells respond to nutrients, stress and environmental signals, influencing major processes such as protein synthesis and immune function.

Given its central role in these fundamental cellular activities, it is hardly surprising that TOR dysfunction can lead to cancer, metabolic disorders or age-related diseases.

Chemical structure of rapamycin
Chemical structure of rapamycin.
Fvasconcellos/Wikimedia

Many specialists in the field know that the molecule was isolated in the mid-1970s by scientists at Ayerst Research Laboratories, from a soil sample containing the bacterium Streptomyces hygroscopicus. What is less well known is that the sample was collected during a Canadian mission called the Medical Expedition to Easter Island, or METEI, carried out on Rapa Nui, also known as Easter Island, in 1964.

The story of METEI

The idea for the Medical Expedition to Easter Island (METEI) came from a team of Canadian scientists that included the surgeon Stanley Skoryna and the bacteriologist Georges Nogrady. Their goal was to understand how an isolated population adapts to environmental stress. They believed that the planned construction of an international airport on Easter Island offered a unique opportunity to study that question: by increasing the islanders’ contact with the outside world, they reasoned, the airport was likely to bring changes in their health and well-being.

Funded by the World Health Organization and supported logistically by the Royal Canadian Navy, METEI arrived on Rapa Nui in December 1964. Over three months, the team put nearly all of the island’s 1,000 inhabitants through a full battery of medical examinations, collecting biological samples and conducting a systematic inventory of the island’s flora and fauna.

As part of this work, Georges Nogrady gathered more than 200 soil samples, one of which turned out to contain the rapamycin-producing strain of Streptomyces bacteria.

Poster with the word METEI written vertically between the backs of two moai heads, with the inscription “1964-1965 RAPA NUI INA KA HOA (Don’t give up the ship)”
The METEI logo.
Georges Nogrady, CC BY-NC-ND

It is important to understand that the expedition’s primary goal was to study the people of Rapa Nui, in a setting viewed as an open-air laboratory. To encourage the islanders to participate, the researchers resorted to bribery, offering gifts, food and various supplies. They also used coercion, enlisting a Franciscan priest who had long been stationed on the island to help with recruitment. However honorable their intentions may have been, this was an example of scientific colonialism, in which a team of white investigators chose to study a largely nonwhite group without its involvement, creating a power imbalance. A bias was thus built into METEI from its very conception.

Several of the expedition’s starting assumptions were also mistaken. For one, the researchers assumed that the people of Rapa Nui had been relatively isolated from the rest of the world, when in fact there was a long history of interaction with outsiders, documented in accounts dating back to the early 18th century and published into the late 19th century.

For another, METEI’s organizers assumed that the Rapa Nui population was genetically homogeneous, overlooking the island’s complex history of migration, slavery and disease: some inhabitants were descended from survivors of the slave trade who had been returned to the island, bringing diseases such as smallpox with them. The modern population of Rapa Nui is in fact of mixed ancestry, with Polynesian, South American and even African roots.

This misjudgment undermined one of METEI’s key objectives: assessing how genetics influences disease risk. Although the team published a number of studies describing the fauna of Rapa Nui, its failure to establish a baseline is probably one of the reasons no follow-up study was conducted after the Easter Island airport was completed in 1967.

Giving credit where credit is due

The omissions in the usual accounts of rapamycin’s origins reflect ethical blind spots that are common in how scientific discoveries are remembered.

Georges Nogrady brought soil samples back from Rapa Nui, one of which made its way to Ayerst Research Laboratories. There, Surendra Sehgal and his team isolated what came to be called rapamycin, which they eventually brought to market in the late 1990s as an immunosuppressant under the name Rapamune. Sehgal’s persistence is well known, and it was decisive in carrying the project through the upheavals then shaking the pharmaceutical company he worked for (he went so far as to hide a bacterial culture at home). Yet neither Nogrady nor METEI was ever credited in the major scientific papers he published.

Although rapamycin has generated billions of dollars in revenue, the people of Rapa Nui have to date received no financial benefit from it. That raises questions about the rights of Indigenous peoples and about biopiracy, which can be defined as the appropriation, often through intellectual property claims, of natural resources (and sometimes associated cultural resources) at another’s expense. In this context, it means the commercialization of Indigenous resources without compensation.

Agreements such as the 1992 United Nations Convention on Biological Diversity and the 2007 Declaration on the Rights of Indigenous Peoples aim to protect Indigenous claims over biological resources by urging all countries to obtain the consent and participation of the populations concerned, and to provide redress for potential harms, before undertaking such projects.

Those principles, however, were not in place at the time of METEI.

Close-up of faces in a row, wearing flower crowns, in a dimly lit room
The people of Rapa Nui have received little or no recognition for their role in the discovery of rapamycin.
Esteban Felix/AP Photo

Some argue that because the rapamycin-producing bacterium has been found in places other than Easter Island’s soil, that soil was neither unique nor essential to the drug’s discovery. Others contend that since the islanders did not use rapamycin and did not know it existed on their island, the molecule was not a resource that could be “stolen.”

Yet the discovery of rapamycin on Rapa Nui laid the foundation for all the research and commercialization that followed. And it was possible only because the island’s population was the subject of the study mounted by the Canadian team. Formally recognizing the essential role the people of Rapa Nui played in the discovery of rapamycin, and raising public awareness of that role, are essential steps toward compensating them in proportion to their contribution.

In recent years, the pharmaceutical industry has begun to acknowledge the importance of fairly compensating Indigenous contributions. Some companies have pledged to reinvest in the communities from which the valuable natural products they exploit originate.

So far, however, the companies that have profited directly from rapamycin have made no such gesture toward the Rapa Nui.

While the discovery of rapamycin has unquestionably transformed medicine, weighing what the METEI expedition meant for the people of Rapa Nui is more complicated. In the end, this is a story of both scientific triumph and social ambiguity.

I am convinced that the questions it raises (about biomedical consent, scientific colonialism and the erasure of certain contributions) should push us to examine the legacies of major scientific discoveries far more critically than we have so far.

The Conversation

Ted Powers does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

ref. How a billion-dollar drug was discovered in the soil of Easter Island (and why scientists and the pharmaceutical industry owe a debt to Indigenous peoples) – https://theconversation.com/comment-un-medicament-valant-un-milliard-de-dollars-a-ete-decouvert-dans-le-sol-de-lile-de-paques-et-pourquoi-scientifiques-et-industrie-pharmaceutique-ont-une-dette-envers-les-peuples-autochtones-266381

In 2030, NASA will bid farewell to the International Space Station and enter the era of commercial stations

Source: The Conversation – in French – By John M. Horack, Professor of Mechanical and Aerospace Engineering, The Ohio State University

The International Space Station will be deorbited in 2030. NASA via AP

The International Space Station is living out its final years: in 2030, it will be deorbited. Twenty-five years of continuous occupation will then give way to a new era, that of commercial space stations.

Since November 2000, NASA and its international partners have maintained an uninterrupted human presence in low Earth orbit, always with at least one American on board. That streak will soon reach 25 years.

In the history of space exploration, the International Space Station stands as one of humanity’s greatest achievements, a shining example of cooperation in space among the United States, Europe, Canada, Japan and Russia. But even the greatest adventures come to an end.

An emblem featuring a photo of the International Space Station, surrounded by a ring bearing the flags of the partner countries
The emblem of the International Space Station displays the flags of the original signatory states.
CSA/ESA/JAXA/NASA/ROSCOSMOS

In 2030, the International Space Station will be deorbited and steered toward a remote area of the Pacific Ocean.

I am an aerospace engineer, and I have helped design many pieces of equipment and experiments for the ISS. Having been part of the space community for more than 30 years, including 17 at NASA, I will find it hard to watch this adventure come to an end.

Since the launch of its first modules in 1998, the International Space Station has been the scene of major scientific advances in fields such as materials science, biotechnology, astronomy and astrophysics, Earth science, combustion and many others.

Research carried out by astronauts aboard the station, along with experiments mounted on its exterior structure, has led to numerous publications in peer-reviewed journals. Some of this work has improved our understanding of thunderstorms, refined the crystallization of key cancer drugs, clarified how to grow artificial retinas in weightlessness, explored the production of ultrapure optical fibers and shown how to sequence DNA in orbit.

Overhead view of a scientist in a lab coat and gloves handling a pipette at a workstation aboard the ISS
The ISS’s microgravity environment makes it an ideal setting for a wide variety of scientific research projects.
NASA, CC BY

In all, more than 4,000 experiments have been conducted aboard the ISS, producing more than 4,400 scientific publications aimed at improving life on Earth and charting the course for future space exploration.

The station has demonstrated the value of research conducted in the unique environment of spaceflight (very low gravity, vacuum, extreme temperature cycles and radiation) for advancing our understanding of a wide range of physical, chemical and biological processes.

Maintaining a presence in orbit

With the station’s retirement on the horizon, NASA and its international partners are not abandoning their outpost in low Earth orbit. Instead, they are looking for alternatives that will allow them to keep tapping the potential of this unique research laboratory and to extend the uninterrupted human presence maintained for 25 years some 400 kilometers above Earth.

In December 2021, NASA announced three contracts to support the development of private, commercial space stations in low Earth orbit. For several years, the agency has already entrusted ISS resupply to private partners. More recently, it adopted a similar arrangement with SpaceX and Boeing for transporting astronauts aboard the Dragon capsule and the Starliner spacecraft, respectively.

A white cone-shaped spacecraft with two rectangular solar panels in orbit, with Earth in the background
SpaceX’s Dragon capsule docks with the ISS.
NASA TV via AP

Building on these successes, NASA has invested more than $400 million to spur the development of commercial space stations, in the hope of seeing them operational before the ISS is retired.

The dawn of commercial space stations

In September 2025, NASA released a draft call for proposals for phase 2 of its commercial space station partnerships. The companies selected will receive funding to complete critical design reviews and to demonstrate stations capable of hosting four people in orbit for at least 30 days.

NASA will then conduct formal validation and certification to ensure these stations meet its most stringent safety standards. That will allow the agency to purchase missions and services aboard them on a commercial basis, much as it already does for cargo and crew transportation to the ISS. Which companies will pull this off, and on what timeline, remains to be seen.

While these stations take shape, Chinese astronauts will continue to live and work aboard their own station, Tiangong, an orbital complex permanently crewed by three people about 400 kilometers above Earth. If the ISS’s run of continuous habitation comes to an end, China’s Tiangong would take over as the longest continuously inhabited space station in operation. Tiangong has been occupied for about four years.

Photos and videos from the ISS offer views of Earth from orbit.

In the meantime, look up

It will be several more years before the new commercial space stations circle the Earth at roughly 28,000 kilometers per hour, and before the ISS is deorbited in 2030.

Until then, all it takes is looking up to enjoy the show. On most nights when it passes overhead, the ISS appears as a brilliant blue-white point of light, often the brightest object in the sky, silently tracing a graceful arc across the starry vault. Our ancestors could hardly have imagined that one day, one of the brightest objects in the night sky would be conceived by human minds and assembled by human hands.

The Conversation

John M. Horack has received external research funding from NASA, Voyager Technologies and other space-related organizations in connection with his work as a professor at The Ohio State University.

ref. In 2030, NASA will bid farewell to the International Space Station and enter the era of commercial stations – https://theconversation.com/en-2030-la-nasa-dira-adieu-a-la-station-spatiale-internationale-et-entrera-dans-lere-des-stations-commerciales-266506

Meet Irene Curie, the Nobel-winning atomic physicist who changed the course of modern cancer treatment

Source: The Conversation – USA – By Artemis Spyrou, Professor of Nuclear Physics, Michigan State University

Irene and Frederic Joliot-Curie shared the Nobel Prize in 1935. Bettmann/Contributor via Getty Images

The adage goes “like mother like daughter,” and in the case of Irene Joliot-Curie, truer words were never spoken. She was the daughter of two Nobel Prize laureates, Marie Curie and Pierre Curie, and was herself awarded the Nobel Prize in chemistry in 1935 together with her husband, Frederic Joliot.

While her parents received the prize for the discovery of natural radioactivity, Irene’s prize was for the synthesis of artificial radioactivity. This discovery changed many fields of science and many aspects of our everyday lives. Artificial radioactivity is used today in medicine, agriculture, energy production, food sterilization, industrial quality control and more.

Two portraits, one on the left of a man with dark hair wearing a suit, Frederic Joliot, and on the right, of Irene Joliot-Curie, who has ear-length hair.
Frederic Joliot and Irene Joliot-Curie.
Wellcome Collection, CC BY

We are two nuclear physicists who perform experiments at different accelerator facilities around the world. Irene’s discovery laid the foundation for our experimental studies, which use artificial radioactivity to understand questions related to astrophysics, energy, medicine and more.

Early years and battlefield training

Irene Curie was born in Paris, France, in 1897. In an unusual schooling setup, Irene was one of a group of children taught by their academic parents, including her own by then famous mother, Marie Curie.

Marie Curie sits at a table with scientific equipment on it. Irene Curie stands next to her, fiddling with the equipment.
Marie Curie and her daughter Irene were both scientists studying radioactivity.
Wellcome Collection, CC BY

World War I started in 1914, when Irene was only 17, and she interrupted her studies to help her mother find fragments of bombs in wounded soldiers using portable X-ray machines. She soon became an expert in these wartime radiology techniques, and on top of performing the measurements herself, she also spent time training nurses to use the X-ray machines.

After the war, Irene went back to her studies in her mother’s lab at the Radium Institute. This is where she met fellow researcher Frederic Joliot, whom she later married. The two worked together on many projects, which led them to their major breakthrough in 1934.

A radioactive discovery

Isotopes are variations of a particular element that have the same number of protons – positively charged particles – and different numbers of neutrons, which are particles with no charge. While some isotopes are stable, the majority are radioactive and called radioisotopes. These radioisotopes spontaneously transform into different elements and release radiation – energetic particles or light – in a process called radioactive decay.

At the time of Irene and Frederic’s discovery, the only known radioactive isotopes came from natural ores, through a costly and extremely time-consuming process. Marie and Pierre Curie had spent years studying the natural radioactivity in tons of uranium ores.

In Irene and Frederic’s experiments, they bombarded aluminum samples with alpha particles, which consist of two protons and two neutrons bound together – they are atomic nuclei of the isotope helium-4.

In previous studies, they had observed the different types of radiation their samples emitted while being bombarded. The radiation would cease when they took away the source of alpha particles. In the aluminum experiment, however, they noticed that even after they removed the alpha source, they could still detect radiation.

The amount of radiation decreased by half every three minutes, and they concluded that the radiation came from the decay of a radioisotope of the element phosphorus. Phosphorus has two additional protons compared to aluminum and was formed when the alpha particles fused with the aluminum nuclei. This was the first identification of an artificially made radioisotope, phosphorus-30. Because phosphorus-30 was created after bombarding aluminum with alpha particles – rather than occurring in its natural state – Irene and Frederic induced the radioactivity. So, it is called artificial radioactivity.
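
For readers who want the quantitative picture, the halving behavior Irene and Frederic observed follows the standard radioactive decay law. The sketch below is a general illustration, with the roughly three-minute halving time they measured plugged in as the half-life; it is not a reconstruction of their original analysis.

N(t) = N_0 \left(\frac{1}{2}\right)^{t / T_{1/2}} = N_0\, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{T_{1/2}} \approx \frac{0.693}{3\ \text{min}} \approx 0.23\ \text{min}^{-1}

Here N_0 is the number of radioactive nuclei present at the start, N(t) is the number remaining after a time t, and T_{1/2} is the half-life. Measuring a half-life, together with the type of radiation emitted, is one of the main ways physicists pin an observed activity on a specific radioisotope.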

A diagram showing an atom of 27-aluminum next to an alpha which is made of two neutrons and two protons. Next to it is an arrow to a lone neutron and an atom of 30-phosphorus with an arrow labeled 'positron' coming off it.
In Irene and Frederic’s experiments, an isotope of aluminum was hit with an alpha particle (two neutrons and two protons bound together). The collision resulted in two protons and a neutron from the alpha particle binding to the aluminum, making it an isotope of phosphorus, which decayed, releasing a particle called a positron.
Artemis Spyrou

After her major discovery, Irene stayed active not only in research but in activism and politics as well. In 1936, almost a decade before women gained the right to vote in France, she was appointed under secretary of state for scientific research. In this position, she laid the foundations for what would become the National Centre for Scientific Research, which is the French equivalent of the U.S. National Science Foundation or National Institutes of Health.

She co-created the French Atomic Energy Commission in 1945 and held a six-year term, promoting nuclear research and development of the first French nuclear reactor. She later became director of the Curie Laboratory at the Radium Institute and a professor at the Faculty of Science in Paris.

Medical uses of artificial radioactivity

The Joliot-Curie discovery opened the road to the extensive use of radioisotopes in medical applications. Today, radioactive iodine is used regularly to treat thyroid diseases. Radioisotopes that emit positrons – the positive equivalent of the electron – are used in medical PET scans to image and diagnose cancer, and others are used for cancer therapy.

To diagnose cancer, practitioners can inject a small amount of the radioisotope into the body, where it accumulates at specific organs. Specialized devices such as a PET scanner can then detect the radioactivity from the outside. This way, doctors can visualize how these organs are working without the need for surgery.

To then treat cancer, practitioners use large amounts of radiation to kill the cancer cells. They try to localize the application of the radioisotope to just where the cancer is so that they’re only minimally affecting healthy tissue.

An enduring legacy

In the 90 years since the Joliot-Curie discovery of the first artificial radioisotope, the field of nuclear science has expanded its reach to roughly 3,000 artificial radioisotopes, from hydrogen to the heaviest known element, oganesson. However, nuclear theories predict that up to 7,000 artificial radioisotopes are possible.

As physicists, we work with data from a new facility at Michigan State University, the Facility for Rare Isotope Beams, which is expected to discover up to 1,000 new radioisotopes.

A graph showing protons on the Y axis and neutrons on the X axis, with an upwards trend line labeled 'stable isotopes' and a cloud of data points surrounding it labeled 'radioisotopes produced in experiments' and 'radioisotopes predicted to exist'
Scientists graph the known isotopes in the chart of nuclei. They have discovered roughly 3,000 radioisotopes (shown with cyan boxes) and predict the existence of another 4,000 radioisotopes (shown with gray boxes).
Facility for Rare Isotope Beams

While the Joliot-Curies were bombarding their samples with alpha particles at relatively low speeds, the Michigan State facility can accelerate stable isotopes up to half the speed of light and smash them on a target to produce new radioisotopes. Scientists using the facility have already discovered five new radioisotopes since it began operating in 2022, and the search continues.

Each of the thousands of available radioisotopes has a different set of properties. They live for different amounts of time and emit different types of radiation and amounts of energy. This variability allows scientists to choose the right isotope for the right application.

Iodine, for example, has more than 40 known radioisotopes. A main characteristic of radioisotopes is their half-life, meaning the amount of time it takes for half of the isotopes in the sample to transform into a new element. Iodine radioisotopes have half-lives that span from a tenth of a second to 16 million years. But not all of them are useful, practical or safe for thyroid treatment.

A diagram showing an atom of 131-Iodine, with an arrow to an atom of 131-Xenon, representing decay. Coming off the Xenon is an arrow denoting an electron, and a wavy arrow denoting radiation.
The iodine radioisotope used in cancer therapy has a half-life of eight days. Eight days is long enough to kill cancer cells in the body, but not so long that the radioactivity poses a long-term threat to the patient and those around them.
Artemis Spyrou

Radioisotopes that live for a few seconds don’t exist long enough to perform medical procedures, and radioisotopes that live for years would harm the patient and their family. Because it lives for a few days, iodine-131 is the preferred medical radioisotope.
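
A quick back-of-the-envelope application of the half-life formula above (an illustration, not a clinical dosing rule) shows why eight days strikes that balance:

\frac{N(80\ \text{days})}{N_0} = \left(\frac{1}{2}\right)^{80/8} = \left(\frac{1}{2}\right)^{10} = \frac{1}{1024} \approx 0.1\%

In other words, iodine-131 delivers most of its radiation within the first week or two of treatment, and less than a tenth of a percent of the original amount remains after about 80 days, or 10 half-lives.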

Artificial radioactivity can also help scientists study the universe’s mysteries. For example, stars are fueled by nuclear reactions and radioactive decay in their cores. Violent stellar events, such as when a star explodes at the end of its life, produce thousands of different radioisotopes that can drive the explosion. For this reason, scientists, including the two of us, produce and study in the lab the radioisotopes found in stars.

With the advent of the Facility for Rare Isotope Beams and other accelerator facilities, the search for new radioisotopes will continue opening doors to a world of possibilities.

The Conversation

Artemis Spyrou receives funding from the National Science Foundation.

Andrea Richard receives funding from the Department of Energy, National Nuclear Security Administration.

ref. Meet Irene Curie, the Nobel-winning atomic physicist who changed the course of modern cancer treatment – https://theconversation.com/meet-irene-curie-the-nobel-winning-atomic-physicist-who-changed-the-course-of-modern-cancer-treatment-264840

How VR and AI could help the next generation grow kinder and more connected

Source: The Conversation – USA – By Ekaterina Muravevskaia, Assistant Professor of Human-Centered Computing, Indiana University

Technology can be isolating, but it can also help kids learn emotional connection. Dusan Stankovic/E+ via Getty Images

Empathy is not just a “nice-to-have” soft skill – it is a foundation of how children and adults regulate emotions, build friendships and learn from one another.

Between the ages of 6 and 9, children begin shifting from being self-centered to noticing the emotions and perspectives of others. This makes early childhood one of the most important periods for developing empathy and other social-emotional skills.

Traditionally, pretend play has been a natural way to practice empathy. Many adults can remember acting out scenes as doctor and patient, or using sticks and leaves as imaginary currency. Those playful moments were not just entertainment – they were early lessons in empathy and taking someone else’s perspective.

But as children spend more time with technology and less in pretend play, these opportunities are shrinking. Some educators worry that technology is hindering social-emotional learning. Yet research in affective computing – digital systems that recognize emotions, simulate them or both – suggests that technology can also become part of the solution.

Virtual reality, in particular, can create immersive environments where children interact with characters who display emotions as vividly as real humans. I’m a human-computer interaction scientist who studies social-emotional learning in the context of how people use technology. Used thoughtfully, the combination of VR and artificial intelligence could help reshape social-emotional learning practices and serve as a new kind of “empathy classroom” or “emotional regulation simulator.”

Game of emotions

As part of my doctoral studies at the University of Florida, in 2017 I began developing a VR Empathy Game framework that combines insights from developmental psychology, affective computing and participatory design with children. At the Human-Computer Interaction Lab at the University of Maryland, I worked with their KidsTeam program, where children ages 7 to 11 served as design partners, helping us imagine what an empathy-focused VR game should feel like.

In 2018, 15 master’s students at the Florida Interactive Entertainment Academy at the University of Central Florida and I created the first game prototype, Why Did Baba Yaga Take My Brother? This game is based on a Russian folktale and introduces four characters, each representing a core emotion: Baba Yaga embodies anger, Goose represents fear, the Older Sister shows happiness and the Younger Sister expresses sadness.

The VR game Why Did Baba Yaga Take My Brother? is designed to help kids develop empathy.

Unlike most games, it does not reward players with points or badges. Instead, children can progress in the game only by getting to know the characters, listening to their stories and practicing empathic actions. For example, they can look at the game’s world through a character’s glasses, revisit their memories or even hug Baba Yaga to comfort her. This design choice reflects a core idea of social-emotional learning: Empathy is not about external rewards but about pausing, reflecting and responding to the needs of others.

My colleagues and I have been refining the game since then and using it to study children and empathy.

Different paths to empathy

We tested the game with elementary school children individually. After asking general questions and giving an empathy survey, we invited children to play the game. We observed their behavior while they were playing and discussed their experience afterward.

Our most important discovery was that children interacted with the VR characters following the main empathic patterns humans usually follow while interacting with each other. Some children displayed cognitive empathy, meaning they had an understanding of the characters’ emotional states. They listened thoughtfully to characters, tapped their shoulders to get their attention, and attempted to help them. At the same time, they were not completely absorbed in the VR characters’ feelings.

Cartoon image of a woman with horns, smiling and holding her arms out to her sides
Characters in the researchers’ VR game express a range of emotions.
Ekaterina Muravevskaia

Others expressed emotional contagion, directly mirroring characters’ emotions, sometimes becoming so distressed by fear or sadness that it made them stop the game. In addition, a few other children did not connect with the characters at all, focusing mainly on exploring the virtual environment. All three behaviors can happen in real life as well when children interact with their peers.

These findings highlight both the promise and the challenge. VR can indeed evoke powerful empathic responses, but it also raises questions about how to design experiences that support children with different temperaments – some need more stimulation, and others need gentler pacing.

AI eye on emotions

The current big question for us is how to effectively incorporate this type of empathy game into everyday life. In classrooms, VR will not replace real conversations or traditional role-play, but it can enrich them. A teacher might use a short VR scenario to spark discussion, encouraging students to reflect on what they felt and how it connects to their real friendships. In this way, VR becomes a springboard for dialogue, not a stand-alone tool.

We are also exploring adaptive VR systems that respond to a child’s emotional state in real time. A headset might detect if a child is anxious or scared – through facial expressions, heart rate or gaze – and adjust the experience by scaling down the characters’ expressiveness or offering supportive prompts. Such a responsive “empathy classroom” could give children safe opportunities to gradually strengthen their emotional regulation skills.

This is where AI becomes essential. AI systems can make sense of the data collected by VR headsets such as eye gaze, facial expressions, heart rate or body movement and use it to adjust the experience in real time. For example, if a child looks anxious or avoids eye contact with a sad character, the AI could gently slow down the story, provide encouraging prompts or reduce the emotional intensity of the scene. On the other hand, if the child appears calm and engaged, the AI might introduce a more complex scenario to deepen their learning.
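
As a concrete sketch of how such an adaptive loop could work, the short Python example below maps a few headset readings to a scene-intensity adjustment. Everything in it is hypothetical: the signal names, thresholds and rules are illustrative placeholders standing in for a trained AI model, not the interface of our actual system.

from dataclasses import dataclass

@dataclass
class EmotionSignals:
    """Hypothetical per-interval readings from a VR headset."""
    heart_rate: float          # beats per minute
    gaze_on_character: float   # fraction of time spent looking at the character (0-1)
    facial_distress: float     # distress score from facial-expression analysis (0-1)

def estimate_state(s: EmotionSignals) -> str:
    """Rough rule-based stand-in for an AI emotion classifier."""
    if s.facial_distress > 0.7 or s.heart_rate > 110:
        return "overwhelmed"
    if s.gaze_on_character < 0.2:
        return "disengaged"
    return "engaged"

def adjust_intensity(state: str, intensity: float) -> float:
    """Scale how expressive the characters are (0 = muted, 1 = full intensity)."""
    if state == "overwhelmed":
        return max(0.2, intensity - 0.2)  # soften the scene and give the child room
    if state == "disengaged":
        return min(1.0, intensity + 0.1)  # raise the emotional stakes slightly
    return intensity                      # engaged: keep the current pacing

# Example run: three consecutive readings from one play session
intensity = 0.8
readings = [
    EmotionSignals(heart_rate=95, gaze_on_character=0.6, facial_distress=0.3),
    EmotionSignals(heart_rate=118, gaze_on_character=0.5, facial_distress=0.8),
    EmotionSignals(heart_rate=100, gaze_on_character=0.1, facial_distress=0.2),
]
for r in readings:
    state = estimate_state(r)
    intensity = adjust_intensity(state, intensity)
    print(state, round(intensity, 2))

In a real system, the hand-written rules would be replaced by a model trained on children’s physiological and behavioral data, and the adjustments would feed back into the game engine rather than printing to a console.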

In our current research, we are investigating how AI can measure empathy itself – tracking moment-to-moment emotional responses during gameplay to provide educators with better insight into how empathy develops.

Future work and collaboration

As promising as I believe this work is, it raises big questions. Should VR characters express emotions at full intensity, or should we tone them down for sensitive children? If children treat VR characters as real, how do we make sure those lessons carry to the playground or dinner table? And with headsets still costly, how do we ensure empathy technology doesn’t widen digital divides?

These are not just research puzzles but ethical responsibilities. This vision requires collaboration among educators, researchers, designers, parents and children themselves. Computer scientists design the technology, psychologists ensure the experiences are emotionally healthy, teachers adapt them for curriculum, and children co-create the games to make them engaging and meaningful.

Together, we can shape technologies that not only entertain but also nurture empathy, emotional regulation and deeper connection in the next generation.

The Conversation

Ekaterina Muravevskaia does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How VR and AI could help the next generation grow kinder and more connected – https://theconversation.com/how-vr-and-ai-could-help-the-next-generation-grow-kinder-and-more-connected-263181

Shutdowns are as American as apple pie − in the UK and elsewhere, they just aren’t baked into the process

Source: The Conversation – USA – By Garret Martin, Hurst Senior Professorial Lecturer, Co-Director Transatlantic Policy Center, American University School of International Service

The obligatory showing of the red briefcase containing budget details is as exciting as it gets in the U.K. Justin Tallis – WPA Pool/Getty Images

When it comes to shutdowns, the U.S. is very much an exception rather than the rule.

On Oct. 1, 2025, hundreds of thousands of federal employees were furloughed as the business of government ground to a halt. With negotiations in Congress seemingly deadlocked over a funding deal, many political watchers are predicting a lengthy period of government closure.

According to the nonpartisan nonprofit Committee for a Responsible Federal Budget, the latest shutdown represents the 20th such funding gap since 1976.

But it doesn’t have to be like this – and in most countries, it isn’t. Other Western democracies experience polarization and political turmoil, too, yet do not experience this problem. Take for example the U.K., traditionally one of Washington’s closest allies and home to the “mother of parliaments.”

In the British system, government shutdowns just don’t happen – in fact, there has never been one and likely never will be.

A sign reads 'The US Capitol Visitors Center is closed due to a lapse in appropriations.'
The U.S. Capitol Visitors Center is closed to visitors during the federal government shutdown on Oct. 1, 2025.
Chip Somodevilla/Getty Images

So why do they occur in Washington but not London? Essentially, it comes down to four factors: the relative power of the legislature; how easy it is to pass a budget; the political stakes at play; and distinctive appropriation rules.

1. Legislative power

There are significant differences in how the legislatures of the U.K. and U.S. shape the budgetary process.

In the U.K., only the executive branch – the party or coalition in power – has the authority to propose spending plans. Parliament, which consists of members from all political parties, maintains an oversight and approval role, but it has very limited power over the budgetary timeline or to amend spending plans. This is a stark contrast with the U.S., where Congress – which may be split or controlled by a party different to the executive – plays a far more consequential role.

The U.S. president starts the budget process by laying out the administration’s funding priorities. Yet, the Constitution grants Congress the power of the purse – that is, the power to tax and spend.

Moreover, past legislation has bolstered congressional control. The 1974 Congressional Budget Act helped curtail presidential involvement in the budgeting process, giving Congress more authority over the timeline. That gave Congress more power but also offered it more opportunities to bicker and derail the budgetary process.

2. Thresholds to pass a budget

Congress and the U.K. Parliament also differ when it comes to their voting rules. Passing the U.S. budget is inherently more complicated, as it requires the support of both the Senate and the House of Representatives.

In Parliament, however, the two houses – the elected House of Commons and unelected House of Lords – are not equally involved. The two Parliament Acts of 1911 and 1949 limited the power of the House of Lords, preventing it from amending or blocking laws relating to budgeting.

Additionally, approving the budget in Westminster requires only an absolute majority of votes in the House of Commons. That tends to be quite a straightforward hurdle to overcome in the U.K. The party in power will typically also command a majority of votes in the chamber or be able to muster one up with the support of smaller parties. It is not, however, so easy in Congress. While a simple majority suffices in the House of Representatives, the Senate still has a 60-vote requirement to close debates before proceeding with a majority vote to pass a bill.

3. Political stakes

U.S. and U.K. politicians do not face the same high stakes over budget approval. Members of Congress may eventually pay a political price for how they vote on the budget, but there is no immediate threat to their jobs. That is not so in the U.K.

Indeed, the party or coalition in power in the U.K. must maintain the “confidence” of the House of Commons to stay in office. In other words, they need to command the support of the majority for key votes. U.K. governments can actually fall – be forced to resign or call for new elections – if they lose formal votes of confidence. Since confidence is also implied in other major votes, such as over the annual budget proposals, this raises the stakes for members of Parliament. They have tended to think twice before voting against a budget, for fear of triggering a dissolution of Parliament and new elections.

4. Distinctive appropriation rules

Finally, rules about appropriation also set the U.S. apart. For many decades, federal agencies could still operate despite funding bills not being passed. That, however, changed with a ruling by then-Attorney General Benjamin Civiletti in 1980. He determined that it would be illegal for governments to spend money without congressional approval.

That decision has had the effect of making shutdowns more severe. But it is not a problem that the U.K. experiences because of its distinct rules on appropriation. So-called “votes on account” allow the U.K. government “to obtain an advance on the money they need for the next financial year.”

This is an updated version of an article that was first published by The Conversation U.S. on Sept. 28, 2023.

The Conversation

Garret Martin receives funding from the European Union for the Transatlantic Policy Center, which he co-directs.

ref. Shutdowns are as American as apple pie − in the UK and elsewhere, they just aren’t baked into the process – https://theconversation.com/shutdowns-are-as-american-as-apple-pie-in-the-uk-and-elsewhere-they-just-arent-baked-into-the-process-266553

Where George Washington would disagree with Pete Hegseth about fitness for command and what makes a warrior

Source: The Conversation – USA – By Maurizio Valsania, Professor of American History, Università di Torino

On Dec. 4, 1783, after six years fighting against the British as head of the Continental Army, George Washington said farewell to his officers and returned to civilian life. Engraving by T. Phillibrown from a painting by Alonzo Chappell

As he paced across a stage at a military base in Quantico, Virginia, on Sept. 30, 2025, Secretary of Defense Pete Hegseth told the hundreds of U.S. generals and admirals he had summoned from around the world that he aimed to reshape the military’s culture.

Ten new directives, he said, would strip away what he called “woke garbage” and restore what he termed a “warrior ethos.”

The phrase “warrior ethos” – a mix of combativeness, toughness and dominance – has become central to Hegseth’s political identity. In his 2024 book “The War on Warriors,” he insisted that the inclusion of women in combat roles had drained that ethos, leaving the U.S. military less lethal.

In his address, Hegseth outlined what he sees as the qualities and virtues the American soldier – and especially senior officers – should embody.

On physical fitness and appearance, he was blunt: “It’s completely unacceptable to see fat generals and admirals in the halls of the Pentagon and leading commands around the country and the world.”

He then turned from body shape to grooming: “No more beardos,” Hegseth declared. “The era of rampant and ridiculous shaving profiles is done.”

As a historian of George Washington, I can say that the commander in chief of the Continental Army, the nation’s first military leader, would have agreed with some of Secretary Hegseth’s directives – but only some.

Washington’s overall vision of a military leader could not be further from Hegseth’s vision of the tough warrior.

A man in front of a US flag, looking like he is shouting and holding out his fists.
U.S. Secretary of Defense Pete Hegseth speaks to senior military leaders at Marine Corps Base Quantico on Sept. 30, 2025.
Andrew Harnik/Getty Images

280 pounds – and trusted

For starters, Washington would have found the concern with “fat generals” irrelevant. Some of the most capable officers in the Continental Army were famously overweight.

His trusted chief of artillery, Gen. Henry Knox, weighed around 280 pounds. The French officer Marquis de Chastellux described Knox as “a man of thirty-five, very fat, but very active, and of a gay and amiable character.”

Others were not far behind. Chastellux also described Gen. William Heath as having “a noble and open countenance.” His bald head and “corpulence,” he added, gave him “a striking resemblance to Lord Granby,” the celebrated British hero of the Seven Years’ War. Granby was admired for his courage, generosity and devotion to his men.

Washington never saw girth as disqualifying. He repeatedly entrusted Knox with the most demanding assignments: designing fortifications, commanding artillery and orchestrating the legendary “noble train of artillery” that brought cannon from Fort Ticonderoga to Boston.

When he became president, after the Revolution, Washington appointed Knox the first secretary of war – a sign of enduring confidence in his judgment and integrity.

Beards: Outward appearance reflects inner discipline

As for beards, Washington would have shared Hegseth’s concern – though for very different reasons.

He disliked facial hair on himself and on others, including his soldiers. To Washington, a beard made a man look unkempt and slovenly, masking the higher emotions that civility required.

Beards were not signs of virility but of disorder. In his words, they made a man “unsoldierlike.” Every soldier, he insisted, must appear in public “as decent as his circumstances will permit.” Each was required to have “his beard shaved – hair combed – face washed – and cloaths put on in the best manner in his power.”

For Washington, this was no trivial matter. Outward appearance reflected inner discipline. He believed that a well-ordered body produced a well-ordered mind.

To him, neatness was the visible expression of self-command, the foundation of every other virtue a soldier and leader should possess.

That is why he equated beards and other forms of unkemptness with “indecency.” His lifelong battle was against indecency in all its forms. “Indecency,” he once wrote, was “utterly inconsistent with that delicacy of character, which an officer ought under every circumstance to preserve.”

More statesman than warrior

By “delicacy,” Washington meant modesty, tact and self-awareness – the poise that set genuine leaders apart from individuals governed by passions.

For him, a soldier’s first victory was always over himself.

“A man attentive to his duty,” he wrote, “feels something within him that tells him the first measure is dictated by that prudence which ought to govern all men who commits a trust to another.”

In other words, Washington became a soldier not because he was hotheaded or drawn to the thrill of combat, but because he saw soldiering as the highest exercise of discipline, patience and composure. His “warrior ethos” was moral before it was martial.

Washington’s ideal military leader was more statesman than warrior. He believed that military power must be exercised under moral constraint, within the bounds of public accountability, and always with an eye to preserving liberty rather than winning personal glory.

In his mind, the army was not a caste apart but an instrument of the republic – an arena in which self-command and civic virtue were tested. Later generations would call him the model of the “republican general”: a commander whose authority rested not on bluster or bravado but on composure, prudence and restraint.

That vision was the opposite of the one Pete Hegseth performed at Quantico.

A man on a white horse and in a uniform saluting a long line of soldiers in front of him.
Washington formally taking command of the Continental Army on July 3, 1775, in Cambridge, Mass.
Currier and Ives image, photo by Heritage Art/Heritage Images via Getty Images

Discipline and steadiness, not fury and bravado

The “warrior ethos” Hegseth celebrates – loud, performative – was precisely what Washington believed a soldier must overcome.

In March 1778, after Marquis de Lafayette abandoned an impossible winter expedition to Canada, Washington praised caution over juvenile bravado.

“Every one will applaud your prudence in renouncing a project in which you would vainly have attempted physical impossibilities,” he wrote from the snows of Valley Forge.

For Washington, valor was never the same as recklessness. Success, he believed, depended on foresight, not fury, and certainly not bravado.

The first commander in chief cared little for waistlines or whiskers, in the end; what concerned him was discipline of the mind. What counted was not the cut of a man’s figure but the steadiness of his judgment.

Washington’s own “warrior ethos” was grounded in decency, temperance and the capacity to act with courage without surrendering to rage. That ideal built an army – and in time, a republic.

The Conversation

Maurizio Valsania does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Where George Washington would disagree with Pete Hegseth about fitness for command and what makes a warrior – https://theconversation.com/where-george-washington-would-disagree-with-pete-hegseth-about-fitness-for-command-and-what-makes-a-warrior-266530

Moral panics intensify social divisions and can lead to political violence

Source: The Conversation – USA – By Ron Barrett, Professor of Anthropology, Macalester College

The day before Charlie Kirk was assassinated, I was teaching a college class on science, religion and magic. Our class was comparing the Salem witch trials of the 1690s with the McCarthy hearings of the early 1950s, when U.S. democratic processes were eclipsed by the Red Scare of purported communist infiltration.

The aim of the class was to better understand the concept of moral panics, which are societal epidemics of disproportionate fear of real or perceived threats. Such outsized fear can often lead to violence or repression against certain socially marginalized groups. Moral panics are recurring themes in my research on the anthropology of fear and discrimination.

Our next class meeting would apply the moral panic concept to a recent example of political violence. Tragically, there were many of these examples to choose from.

Minnesota State Representative Melissa Hortman and her husband were assassinated on June 14, 2025, which happened to be the eighth anniversary of the congressional baseball shooting in which U.S. House Majority Whip Steve Scalise and three other Republicans were wounded. These shootings were among at least 15 high-profile instances of political violence since Rep. Gabby Giffords was severely wounded in a 2011 shooting that killed six people and wounded 12 others.

Seven of these violent incidents occurred within the past 12 months. Kirk’s killing became the eighth.

In most of these cases, we may never fully know the perpetrator’s motives. But the larger pattern of political violence tracks with the increasing polarization of American society. While researching this polarization, I have found recurring themes of segregation and both the dehumanization and disproportionate fear of people with opposing views among liberals and conservatives alike.

Segregation and self-censorship

The first ingredient of a moral panic is the segregation of a society into at least two groups with limited contact between them and an unwillingness to learn from one another.

In 17th century Salem, Massachusetts, the social divisions were long-standing. They were largely based on land disputes between family factions and economic tensions between agriculturally-based village communities and commercially-based town communities.

Within these larger groups, a growing number of widowed women had become socially marginalized for becoming economically independent after their husbands died in colonial wars between New England and New France. And rumors of continuing violence led residents in towns and villages to avoid Native Americans and new settlers in surrounding frontier areas. Salem was divided in many ways.

A black-and-white copy of a painting depicts a trial in Salem, Massachusetts, in 1692.
The painting ‘Trial of George Jacobs of Salem for Witchcraft’ by Tompkins Harrison Matteson. Jacobs was one of the few men accused of witchcraft.
Tompkins Harrison Matteson/Library of Congress via AP

Fast forward to the end of World War II. That’s when returning American veterans used their benefits to settle into suburban neighborhoods that would soon be separated by race and class through zoning policies and discriminatory lending practices. This set the stage for what has come to be called The Big Sort, the self-segregation of people into neighborhoods where residents shared the same political and religious ideologies.

It was during the early stages of these sorting processes that the Red Scare and McCarthy hearings emerged.

The Big Sort turned digital in the early 2000s with the rise of online information and social media platforms with algorithms that conform to the particular desires and biases of their user communities.

Consequently, it is now easier than ever for conservatives and liberals to live in separate worlds of their own choosing. Under these conditions, Democrats and Republicans tend to exaggerate the characteristics of the other party based on common stereotypes.

Dehumanization and discrimination

Dehumanization is perhaps the most crucial ingredient of a moral panic. This involves labeling people according to categories that deprive them of positive human qualities. This labeling process is often conducted by “moral entrepreneurs” – people invested by their societies with the authority to make such claims in an official, unquestionable and seemingly objective way.

In 1690s Massachusetts, the moral entrepreneurs were religious authorities who labeled people as satanic witches and killed many of them. In 1950s Washington, the moral entrepreneurs were members of Congress and expert witnesses who labeled people Soviet collaborators and ruined many of their lives.

In the 21st century, the moral entrepreneurs include media personalities and social influencers, as well as the nonhuman bots and algorithms whose authority derives from constructing the illusion of broad consensus.

Under these conditions, many U.S. liberals and conservatives regard their counterparts as savage, immature, corrupt or malicious. Not surprisingly, surveys reveal that animosity between conservatives and liberals has been higher over the past five years than at any other time since the measurements began in 1978.

Adding to the animosity, dehumanization can also justify discrimination against a rival group. This is shown in social psychology experiments in which conservatives and liberals discriminate against one another more strongly than they do on the basis of race when deciding on scholarships and job opportunities. Such discrimination, in turn, fuels further animosity.

Exaggerating fear

There is a fine line between animosity and disproportional fear. The latter can lead to extreme policies and violent actions during a moral panic.

Such fear often takes the form of perceived threats. Rachel Kleinfeld, a scholar who studies polarization and political violence, says that one of the best ways to rally a political base is to make them think they are under attack by the other side. She says that “is why ‘They are out to take your x’ is such a time-honored fundraising and get-out-the-vote message.”

In the past few years, the “x” that could be taken has escalated to core freedoms and personal safety, threats which could easily trigger widespread fear on both sides of the political divide.

But the question remains whether exaggerated fears are sufficient to trigger political violence. Are assassins like Kirk’s killer simply pathological outliers among agitated but otherwise self-restrained populations? Or are they sensitive indicators of a looming social catastrophe?

The House Committee on Un-American Activities investigates movie producer Jack Warner, right, in Washington on Oct. 20, 1947.
AP Photo

Countering the panic

We do not yet have answers to those questions. But in the interim, there are efforts in higher education to reduce animosity and encourage constructive interactions and discussion between people with different perspectives.

A nonpartisan coalition of faculty, students and staff – known as the Heterodox Academy – is promoting viewpoint diversity and constructive debates on over 1,800 campuses. The college where I teach has participated in the Congress to Campus program, promoting bipartisan dialogue by having former legislators from different parties engage in constructive debates with one another about timely political issues. These debates serve as models for constructive dialogue.

It was in the spirit of constructive dialogue that my class debated whether the Kirk assassination could be explained as the product of a moral panic. Many agreed that it could, and most agreed it was probably an assault on free speech, even though many of them strongly objected to Kirk’s views. The debate was passionate, but everyone was respectful and listened to one another. No witches were to be found in the class that day.

The Conversation

Ron Barrett does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Moral panics intensify social divisions and can lead to political violence – https://theconversation.com/moral-panics-intensify-social-divisions-and-can-lead-to-political-violence-265238

Trump scraps the nation’s most comprehensive food insecurity report − making it harder to know how many Americans struggle to get enough food

Source: The Conversation – USA (2) – By Tracy Roof, Associate Professor of Political Science, University of Richmond

Nearly 1 in 7 Americans had trouble consistently getting enough to eat in 2023. Patrick Strattner/fStop via Getty Images

The Trump administration announced on Sept. 20, 2025, that it plans to stop releasing food insecurity data. The federal government has tracked and analyzed this data for the past three decades, but it plans to stop after publishing the statistics for 2024. The Conversation U.S. asked Tracy Roof, a political scientist who has researched the history of government nutrition programs, to explain the significance of the U.S. Household Food Security Survey and what might happen if the government discontinues it.

What’s food insecurity?

The U.S. Department of Agriculture defines food security as “access by all people at all times to enough food for an active, healthy life.”

People who are food insecure are unsure they can get enough food or unable to get enough food to meet these basic needs because they can’t afford it.

How does the government measure it?

The USDA has collected data on food insecurity since the mid-1990s. The data includes the share of the population that is food insecure, as well as the subset of this group considered to have very low food security.

People who are food insecure may not significantly reduce how much they eat, but they are likely to eat less balanced meals or lower-quality food. People with very low food security report eating less altogether, such as by skipping meals or eating smaller meals.

These statistics are based on answers to questions the USDA adds to the Current Population Survey, which the Census Bureau administers every December. There are 10 questions in the survey. Households with children are asked eight more.

The questions inquire about access to food, such as whether someone has worried in the past year that their food would run out before they had enough money to buy more, or how frequently they have skipped meals, could not afford balanced meals, or felt hunger.

The U.S. food insecurity rate stood at 13.5% in 2023, the most recent year for which data is currently available. The final annual food security report, expected in October, will be issued for 2024 – based on data collected during the Biden administration’s last year.

Why did the government start measuring it?

Calls for creating the food stamp program in the 1960s led to an intense debate in Washington about the extent of malnutrition in the U.S. Until then, the government did not consistently collect reliable national statistics on the prevalence of malnutrition.

Those concerns reached critical mass when the Citizens’ Board of Inquiry into Hunger and Malnutrition, launched by a group of anti-hunger activists, issued a report in 1968, Hunger USA. It estimated that 10 million Americans were malnourished.

That report highlighted widespread incidence of anemia and protein deficiency in children. That same year, a CBS documentary, “Hunger in America,” shocked Americans with disturbing images of malnourished children. The attention to hunger resulted in a significant expansion of the food stamp program, but it did not lead to better government data collection.

The expansion of government food assistance all but eliminated the problem of malnutrition. In 1977, the Field Foundation sent teams of doctors into poverty-stricken areas to assess the nutritional status of residents. Although there were still many people facing economic hardship, the doctors found little evidence of the nutritional deficiencies they had seen a decade earlier.

Policymakers struggled to reach a consensus on the definition of hunger. But the debate gradually shifted from how to measure malnutrition to how to estimate how many Americans lacked sufficient access to food.

Calls for what would later be known as food insecurity data grew after the Reagan administration scaled back the food stamps program in the early 1980s. Despite the unemployment rate soaring to nearly 11% in 1982 and a steep increase in the poverty rate, the number of people on food stamps had remained relatively flat.

Although the Reagan administration denied that there was a serious hunger problem, news reports were filled with stories of families struggling to afford food.

Many were families of unemployed breadwinners who had never needed the government’s help before. During this period, the number of food banks grew substantially, and they reported soaring demand for free food.

Because there was still no government data available to resolve the dispute, the Reagan administration responded to political pressure by creating a task force on hunger in 1983. It called for improved measures of the nutritional status of Americans.

The task force also pointed to the difference between “hunger as medically defined” and “hunger as commonly defined.” That is, someone can experience hunger – not getting enough to eat – without displaying the physical signs of malnutrition. In other words, it would make more sense to measure access to food as opposed to the effects of malnutrition.

In 1990 Congress passed the National Nutrition Monitoring and Related Research Act, which President George H.W. Bush signed into law. It required the secretaries of Agriculture and Health and Human Services to develop a 10-year plan to assess the dietary and nutritional status of Americans. This plan, in turn, recommended developing a standardized measurement of food insecurity.

The Food Security Survey, developed in consultation with a team of experts, was first administered in 1995. Rather than focusing on nutritional status, it was designed to pick up on behaviors that suggested people were not getting enough to eat.

Did tracking food insecurity help policymakers?

Tracking food insecurity allowed the USDA, Congress, researchers and anti-hunger groups to know how nutritional assistance programs were performing and what types of households continued to experience need. Researchers also used the data to look at the causes and consequences of food insecurity.

Food banks relied on the data to understand who was most likely to need their help.

The data allowed policymakers to see the big jump in need during the Great Recession starting in 2008. It also showed a slight decline in food insecurity with the rise in government assistance early in the COVID-19 pandemic, followed by another big jump with steeply rising food prices in 2022.

The big budget bill Congress passed in July will cut spending on the Supplemental Nutrition Assistance Program by an estimated US$186 billion through 2034, an almost 20% reduction.

Supporters of SNAP, the new name for the food stamp program adopted in 2008, worry the loss of the annual reports will hide the full impact of these cuts.

Why is the administration doing this?

In the brief press release it issued on Sept. 20 announcing the termination of the annual food insecurity reports, the USDA indicated that the Trump administration considers the food security survey to be “redundant, costly, politicized, and extraneous,” and says it does “nothing more than fear monger.”

While I disagree with that characterization, it is true that anti-hunger advocates have pointed to increases in food insecurity to call for more government help.

Is comparable data available from other sources?

Although the USDA noted there are “more timely and accurate data sets” available, it was not clear which datasets it was referring to. Democrats have called on the Trump administration to identify the data.

Feeding America, the largest national network of food banks, releases an annual food insecurity report called Map the Meal Gap. But like the other nonprofits and academic researchers that track these trends, it relies on the government’s food insecurity data.

There is other government data on food purchases and nutritional status, and a host of other surveys that use USDA questions. However, there is no other survey that comprehensively measures the number of Americans who struggle to get enough to eat.

As in the 1980s, policymakers and the public may have to turn to food banks’ reports of increased demand to get a sense of whether the need for help is rising or falling. But those reports can’t replace the USDA’s Food Security Survey.

The Conversation

Tracy Roof does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Trump scraps the nation’s most comprehensive food insecurity report − making it harder to know how many Americans struggle to get enough food – https://theconversation.com/trump-scraps-the-nations-most-comprehensive-food-insecurity-report-making-it-harder-to-know-how-many-americans-struggle-to-get-enough-food-266006

Why Major League Baseball keeps coming back to Japan

Source: The Conversation – USA (2) – By Jared Bahir Browsh, Assistant Teaching Professor of Critical Sports Studies, University of Colorado Boulder

When Shohei Ohtani stepped onto the field at the Tokyo Dome in March 2025, he wasn’t just playing a game – he was carrying forward more than 100 years of baseball ties between the U.S. and Japan.

That history was front and center when the Los Angeles Dodgers and Chicago Cubs opened their 2025 regular season facing off in the Tokyo Series on March 18 and 19. The two games featured several players from Japan, capping a slate of events that included four exhibition games against Japanese professional teams.

It was a massive financial success. Marking MLB’s first return to Tokyo since 2019, the series generated over US$35 million in ticket sales and sponsorship revenue and $40 million in merchandise sales.

The first game of the Tokyo Series broke viewership records in Japan.

For MLB, which has seen significant viewership growth this season, it was proof that its investment in Japan and international baseball over the past three decades has been paying off.

Baseball’s early journey to Japan

Baseball, which is by far the most popular sport in Japan, was introduced to the nation during the Meiji Restoration in the late 19th century.

American baseball promoters were quick to see the potential of the Japanese market, touring the country as early as 1908. The most famous such tour took place in 1934 and featured a number of American League All-Stars, including Babe Ruth and catcher Moe Berg, who was later revealed to be a U.S. spy.

That trip had a long legacy. The U.S. All-Stars faced a team called The Greater Japan Tokyo Baseball Club, which, a year later, barnstormed in the United States. When they played the San Francisco Seals, the Seals’ manager, Lefty O’Doul – who later trained baseball players in Japan – suggested a name change to better promote the team for an American audience.

Commenting that Tokyo is the New York of Japan, O’Doul suggested the club take on the name of one of New York’s teams. And since “Yankee” is a uniquely American term, The Greater Japan Tokyo Baseball Club was reborn as the Tokyo (Yomiuri) Giants.

When the Giants returned to Japan, the Japanese Baseball League was formed, which was reorganized into Nippon Professional Baseball in 1950. The Giants have gone on to dominate the NPB, winning 22 Japan Series and producing Sadaharu Oh, who hit 868 home runs during his illustrious career.

Breaking into MLB

The first Japanese-born MLB player, Masanori Murakami, debuted for the San Francisco Giants in September 1964. But his arrival wound up sparking a contractual tug-of-war between the NPB and MLB. To prevent future disputes, the two leagues signed an agreement in 1967 that essentially blocked MLB teams from signing Japanese players.

By the 1990s, this agreement became untenable, as some Japanese players in NPB became frustrated by their lack of negotiating power. After the Kintetsu Buffaloes refused to give Hideo Nomo a multiyear contract after the 1994 season, his agent found a loophole in the “voluntary retirement clause” that would allow him to sign with an MLB franchise. He signed with the Los Angeles Dodgers in February 1995.

Nomo’s impact was immeasurable. His “tornado” windup and early success made him one of the most popular players in the major leagues, which were recovering from the cancellation of the World Series the previous year. In Japan, “Nomo fever” took hold, with large crowds gathering around television screens in public to watch him play, even though his games aired in the morning. Nomo helped drive Japanese sponsorship and television rights deals, and he capped his first season by winning the National League Rookie of the Year award.

Within a few years, however, further contract disputes showed the need for new rules. This ultimately led to the establishment of posting rules for NPB players looking to transition to the major leagues.

The rules have shifted somewhat since they were first set out in late 1998, but the basic framework is this: if a player declares their intention to leave NPB, MLB teams have a 45-day window to negotiate with them. If the player is under 25 or has fewer than nine years of professional experience, they are subject to MLB’s limited signing pool for international players. Otherwise, they are declared a free agent.
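To make the mechanics concrete, here is a minimal sketch, in Python, of the posting framework exactly as described in the paragraph above. It is purely illustrative: the function name, thresholds and labels simply restate this article’s summary and are not drawn from any official MLB or NPB rulebook.

# Illustrative sketch of the posting rules as summarized above.
# Not an official MLB/NPB implementation; the thresholds simply
# restate the article's description.

NEGOTIATION_WINDOW_DAYS = 45  # MLB teams get a 45-day window to negotiate

def posting_status(age: int, pro_seasons: int) -> str:
    """Classify a posted NPB player under the rules described above."""
    if age < 25 or pro_seasons < 9:
        # Younger or less experienced players fall under the limited
        # international signing pool.
        return "limited international signing pool"
    # Otherwise the player negotiates as a free agent.
    return "free agent"

print(posting_status(age=23, pro_seasons=5))   # limited international signing pool
print(posting_status(age=30, pro_seasons=10))  # free agent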

A wave of stars

The new rules led many more Japanese players to join major league baseball from Nippon Professional Baseball: Of the 81 Japanese players who’ve played in the majors, all but four played in NPB before their debut. Ichiro Suzuki, who became the first Japanese player inducted into the National Baseball Hall of Fame, was also the first Japanese position player to make the leap.

Other players, like Hideki Matsui, the only Japanese player to be named World Series MVP, continued the success. And then came Ohtani, a two-way superstar who both hits and pitches, drawing comparisons to Babe Ruth.

For MLB, Japanese players haven’t just boosted performance on the field – they’ve expanded its global fan base. The Dodgers brought in over $120 million in increased revenue in Ohtani’s first year alone, easily covering his salary even with Ohtani signing the richest contract in baseball history. The franchise has also seen its value increase by at least 23% to nearly $8 billion. MLB has also seen a significant increase in viewership over the past two seasons, partially driven by the growing interest from Japan.

As American sports leagues deal with an increasingly distracted, fragmented domestic audience, it’s not surprising that they’re looking abroad for growth. And as MLB teams prepare to court another wave of Japanese stars this offseason, it’s clear that its decades-long investment in Japan is paying off.

The Conversation

Jared Bahir Browsh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why Major League Baseball keeps coming back to Japan – https://theconversation.com/why-major-league-baseball-keeps-coming-back-to-japan-264668

Breastfeeding is ideal for child and parent health but challenging for most families – a pediatrician explains how to find support

Source: The Conversation – USA (3) – By Ann Kellams, Professor of Pediatrics, University of Virginia

Many new parents start out breastfeeding but switch to formula within a few days. JGI/Jamie Grill via Tetra Images

As a pediatrician, I thought my medical background and pediatric training meant I would be well prepared to breastfeed my newborn. I knew all about the research on how an infant’s diet can affect both their short- and long-term health. Compared to formula, breastfeeding is linked to a lower risk of sudden infant death syndrome, lower rates of infections and hospitalizations and a lower risk of developing diabetes later in life. Breastfeeding can also provide health benefits to the parent.

But I struggled to breastfeed my own firstborn. I was exhausted and in pain. My nipples were bleeding and my breasts swollen. I worried about whether my baby was getting enough to eat. And I was leaking breast milk all over the place. I found myself asking questions familiar to many new parents: What in the world is going on with breastfeeding? Can I keep this up when I go back to work? How does a breast pump even work? Why doesn’t anyone know how to help me? And why are some families able to start breastfeeding and never look back?

The American Academy of Pediatrics recommends caregivers breastfeed their child for up to two years. However, many new parents are unable to reach these breastfeeding goals and find it very difficult to get breastfeeding going. When that struggle is combined with inadequate support, some blame themselves or feel like less than a good parent.

While over 80% of families start out breastfeeding their baby, roughly 19% of newborns have already received infant formula two days after birth. Around half of families are able to breastfeed their babies six months after birth and only 36% at 12 months.

Breastfeeding can be painful – especially without support.
Yoss Sabalet/Moment via Getty Images

Inspired by my own and my patients’ experiences with breastfeeding, I sought extra training in the field of breastfeeding and lactation medicine. Now, as a board-certified physician in breastfeeding and lactation medicine, I wanted to understand how pregnant and breastfeeding parents – and those who care for them – perceive breastfeeding. How do they define breastfeeding success? What would make breastfeeding easier, especially for underserved communities with some of the lowest breastfeeding rates in the U.S.?

Listening to new parents

In partnership with the Academy of Breastfeeding Medicine and Reaching Our Sisters Everywhere, a nonprofit focused on supporting breastfeeding among Black families, my team and I started a research project to identify the key components of a successful breastfeeding journey as defined by parents. We also wanted to determine what would enable families to achieve their breastfeeding goals.

To do this, we asked a range of parents and experts in the field of breastfeeding and lactation medicine about what would make breastfeeding easier for families. We recruited participants through social media, listservs and at the Academy of Breastfeeding Medicine’s annual international meeting, inviting them to provide feedback through virtual listening sessions, online surveys and in-person gatherings.

What we found is fascinating. From the perspective of the parents we talked to, breastfeeding success had less to do with how long or to what extent they exclusively breastfed. Rather, success had a lot more to do with their experience of breastfeeding and whether they had the support they needed to make it possible.

Support included someone who could listen and help them with breastfeeding; communities that welcomed breastfeeding in public; and supportive loved ones, friends and workplaces. Having their questions about breastfeeding answered in accessible and practical ways through resources such as breastfeeding and lactation professionals in their area, peer support and websites with reliable, trustworthy information was also important to helping them feel successful in breastfeeding.

Figuring out how to make time and room for breastfeeding can be taxing.
FatCamera/iStock via Getty Images Plus

Important questions about breastfeeding also arose from these conversations. How can hospitals, clinics and health care workers make sure that breastfeeding support is available to everyone and is equitable? What education do health care professionals need about breastfeeding, and what are barriers to them getting that education? How should those in health care prepare families to breastfeed before the baby is born? And how can the care team ensure that families know when and how to get help for breastfeeding problems?

The good news is that most of the problems raised within our study are solvable. But it will take an investment in resources and support for breastfeeding, including training health care workers on troubleshooting common problems such as nipple pain, ineffective latch and concerns about breast milk production.

Corporate influences on feeding babies

Commercial infant formula is a US$55 billion industry. And yet, most formula use would not be necessary if barriers to breastfeeding were reduced.

Research shows that the marketing practices of commercial infant formula companies are predatory, pervasive and misleading. They target not only families but also health care workers. During my medical training, commercial infant formula companies would give us lectures, free lunches, and books and calculators, and my fellow residents and I knew the representatives by name. As a medical director of a newborn unit, I saw these companies stocking our hospital shelves with commercial infant formula and building relationships with our nursing staff. These companies profit when breastfeeding goes wrong.

The World Health Organization has advocated against aggressive commercial infant formula marketing.

This is not to say that commercial infant formula is a bad thing. When breastfeeding isn’t possible, it can be lifesaving. But in some cases, because the U.S. doesn’t provide universal paid maternity leave and not all workplaces are supportive of breastfeeding, parents may find themselves relying on commercial infant formula.

Thinking about breast milk and commercial infant formula less as a question of lifestyle or brand choices and more as an important health care decision can help families make more informed choices. And health care providers can consider thinking about infant formula as a medicine for when it is necessary to ensure adequate nutrition, putting more focus on helping families learn about and successfully breastfeed.

Breastfeeding is a team sport

As the saying goes, it takes a village to raise a child, and breastfeeding is no exception – it is a team sport that calls upon everyone to help new parents achieve this personal and public health goal.

What can you do differently to support breastfeeding in your family, neighborhood, workplace and community?

When I am educating new or expectant families about breastfeeding, I emphasize skin-to-skin contact whenever the parent is awake and able to monitor and respond to the baby. I recommend offering the breast with every feeding cue, until the baby seems content and satisfied after each feeding.

Manually expressing drops of milk into the baby’s mouth after each feeding can boost their intake and also ensure the parent’s body is getting signaled to make more milk.

If your family has concerns about whether the baby is getting enough milk, before reaching for formula, ask a lactation consultant or medical professional who specializes in breastfeeding how to tell whether everything is going as expected. Introducing formula can lead to decreased milk production, a preference for artificial nipples over the breast and an earlier-than-planned end to breastfeeding.

Some parents are truly unable to continue breastfeeding for various reasons, and they should not feel ashamed or stigmatized by it.

Finally, give yourself time for breastfeeding to feel routine – both you and baby are learning.

The Conversation

Ann L. Kellams receives funding from NICHD for her research and Pediatric UptoDate as an author. She is the immediate past-president of the Academy of Breastfeeding Medicine.

ref. Breastfeeding is ideal for child and parent health but challenging for most families – a pediatrician explains how to find support – https://theconversation.com/breastfeeding-is-ideal-for-child-and-parent-health-but-challenging-for-most-families-a-pediatrician-explains-how-to-find-support-240396