The Trump administration is decreasing the attention federal regulators pay to pipeline leaks. But leaks from natural gas pipelines don’t just waste energy and warm the planet – they can also make the air more dangerous to breathe. That air pollution threat extends not only to the communities where the leaks happen but also to neighboring states, as our analysis of gas leaks and air pollution levels across the U.S. has found.
For instance, in September 2018 the Merrimack Valley pipeline explosion in Massachusetts, which released roughly 2,800 metric tons of methane, damaged or destroyed about 40 homes and killed one person. We found that the event caused fine-particle air pollution concentrations in downwind areas of New Hampshire and Vermont to spike within four weeks, pushing those areas’ 2018 annual average up by 0.3 micrograms per cubic meter. That’s an increase of about 3% of the U.S. EPA’s annual health standard for PM2.5. Elevated air pollution then showed up in New York and Connecticut through the rest of 2018 and into 2019.
In our study, we examined pipeline leak data from the U.S. Pipeline and Hazardous Materials Safety Administration from 2009 to 2019 and data on each state’s levels of fine particulate matter in the air from Columbia University’s Center for International Earth Science Information Network. We also incorporated, for each state, data on environmental regulations, per-capita energy consumption, urbanization rate and economic productivity per capita.
In simple terms, we found that in years when a state – or its neighboring states – experienced more methane leak incidents, that state’s annual average fine-particle air pollution was measurably higher than in years with fewer leaks.
A 2018 natural gas leak and explosion in Massachusetts destroyed and damaged homes, killed one person and increased air pollution over a wide area. John Tlumacki/The Boston Globe via Getty Images
Methane’s role in fine‑particle formation
Natural gas is primarily made of methane, a powerful greenhouse gas. But methane also helps set off chemical reactions in the air that lead to the formation of tiny particles known as PM2.5 because they are smaller than 2.5 micrometers (one ten-thousandth of an inch). They can travel deep into the lungs and cause health problems, such as increasing a person’s risk of heart disease and asthma.
So, when natural gas leaks, energy is wasted, the planet warms and air quality drops. These leaks can be massive, like the 2015 Aliso Canyon disaster in California, which sent around 100,000 metric tons of methane into the atmosphere.
But smaller leaks are also common, and they add up, too: Because the federal database systematically undercounts minor releases, we estimate that undocumented small leaks in the U.S. may total on the order of 15,000 metric tons of methane per year – enough to raise background PM2.5 by roughly 0.1 micrograms per cubic meter in downwind areas. Even this modest increase can contribute to health risks: There is no safe threshold for PM2.5 exposure, with each rise of 1 microgram per cubic meter linked to heightened mortality from cardiovascular and respiratory diseases.
The most direct way to reduce this problem is to reduce the number and quantity of methane leaks from pipelines. This could include constructing them in ways or with materials or processes that are less likely to leak. Regulations could create incentives to do so or require companies to invest in technology to detect methane leaks quickly, as well as encourage rapid responses when a leak is identified, even if it appears relatively small at first.
Reducing pipeline leaks would not just conserve the energy that is contained in the methane and reduce the global warming that results from increasing amounts of methane in the atmosphere. Doing so would also improve air quality in communities that are home to pipelines and in surrounding areas and states.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Camille Banger, Assistant Professor in Business Information Technology, University of Wisconsin-Stout
Students pick up on AI-infused apps quickly, but generative AI appears to require more reflection on how to use technology. Hill Street Studios via Getty Images
The tech world says generative artificial intelligence is essential for the future of work and learning. But as an educator, I still wonder: Is it really worth bringing it into the classroom? Will these tools truly help students learn, or create new challenges we haven’t yet faced?
Like many other people in higher education, I was skeptical but knew I couldn’t ignore it. So, instead of waiting for all the answers, I decided to dive in and discover what preparing students for an AI-powered world really means beyond the hype. Last semester, I developed a business technology class where the latest generative AI tools were woven into the curriculum.
What I found is that AI productivity products have a learning curve, much like other applications that students, and ultimately white-collar workers, use in knowledge work. But I needed to adjust how I taught the class to emphasize critical thinking, reflection on how these tools are being used and checks against the errors they produce.
The project
It’s no secret that generative AI is changing how people work, learn and teach. According to the 2025 McKinsey Global Survey on AI, 78% of respondents said their organizations use AI in at least one business function, and many are actively reskilling their workforces to meet the demands of this shift.
As program director of the Business Information Technology bachelor’s degree program at the University of Wisconsin-Stout, Wisconsin’s polytechnic university, I spend a lot of time thinking about how to prepare students for the workplace. I’m also an AI enthusiast, but a skeptical one. I believe in the power of these tools, but I also know they raise questions about ethics, responsibility and readiness.
So, I asked myself: How can I make sure our students are ready to use AI and understand it?
In spring 2025, University of Wisconsin-Stout launched a pilot for a small group of faculty and staff to explore Microsoft 365 Copilot for business. Since it works alongside tools such as Word, Excel, Outlook, PowerPoint, OneDrive and Teams, which are products our students already use, I saw an opportunity to bring these latest AI features to them as well.
To do that, I built an exploratory project into our senior capstone course. Students were asked to use Copilot for Business throughout the semester, keep a journal reflecting on their experience and develop practical use cases for how AI could support them both as students and future professionals. I didn’t assign specific tasks. Instead, I encouraged them to explore freely.
My goal wasn’t to turn them into AI experts overnight. I wanted them to build comfort, fluency and critical awareness about how and when to use AI tools in real-world contexts.
What my students and I learned
What stood out to me the most was how quickly students moved from curiosity to confidence.
Many of them had already experimented with tools such as ChatGPT and Google Gemini, but Copilot for Business was a little different. It worked with their own documents, emails, meeting notes and class materials, which made the experience more personal and immediately relevant.
In their journals, students described how they used Copilot to summarize Teams video meetings, draft PowerPoint slides and write more polished emails. One student said it saved them time by generating summaries they could review after a meeting instead of taking notes during the call or rewatching a recording. Another used it to check their assignment against the rubric – a scoring tool that outlines the criteria and performance levels for assessing student work – to help them feel more confident before submitting their work.
College students will likely be asked to use AI features in business productivity applications once they enter the workforce. What’s the best way to teach them how to effectively use them? Denise Jans on Unsplash
Several students admitted they struggled at first to write effective prompts – the typed requests that guide the AI to generate content – and had to experiment to get the results they wanted. A few reflected on instances where Copilot, like other generative AI tools, produced inaccurate or made-up information, or hallucinations, and said they learned to double-check its responses. This helped them understand the importance of verifying AI-generated content, especially in academic and professional settings.
Some students also said they had to remind themselves to use Copilot instead of falling back on other tools they were more familiar with. In some cases, they simply forgot Copilot was available. That feedback showed me how important it is to give students time and space to build new habits around emerging technologies.
What’s next
While Copilot for Business worked well for this project, its higher cost compared with previous desktop productivity apps may limit its use in future classes and raises ethical questions about access.
That said, I plan to continue expanding the use of generative AI tools across my courses. Instead of treating AI as a one-off topic, I want it to become part of the flow of everyday academic work. My goal is to help students build AI literacy and use these tools responsibly and thoughtfully, as a support for their learning, not a replacement for it.
Historically, software programs enabled people to produce content, such as text documents, slides or the like, whereas generative AI tools produce the “work” based on user prompts. This shift requires a higher level of awareness about what students are learning and how they’re engaging with the materials and the AI tool.
This pilot project reminded me that integrating AI into the classroom isn’t just about giving students access to new tools. It’s about creating space to explore, experiment, reflect and think critically about how these tools fit into their personal and professional lives and, most importantly, how they work.
As an educator, I’m also thinking about the deeper questions this technology raises. How do we ensure that students continue developing original thoughts and critical thinking when AI can easily generate ideas or content? How can we preserve meaningful learning while still taking advantage of the efficiency these tools offer? And what kinds of assignments can help students use AI effectively while still demonstrating their own thinking?
These questions aren’t easy, but they are important. Higher education has an important role to play in helping students use AI and understand its impact and their responsibility in shaping how it’s used.
Striking the right balance between fostering original thought and critical thinking with AI can be tricky. One way I’ve approached this is encouraging students to first create their content on their own, then use AI for review. This way, they maintain ownership of their work and see AI as a helpful tool rather than a shortcut. It’s all about knowing when to leverage AI to refine or enhance their ideas.
One piece of advice I received that really stuck with me was this: Start small, be transparent and talk openly with your students. That’s what I did, and it’s what I’ll continue doing as I enter this next chapter of teaching and learning in the age of AI.
Camille Banger does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Jennifer Duyne Barenstein, Senior Lecturer of Social Anthropology, Swiss Federal Institute of Technology Zurich
Located in the Peñarol neighborhood of Montevideo, COVIMT 1 was the city’s first mutual aid housing cooperative. It was founded by textile workers, who completed construction of the complex in 1972. Bé Estudio, CC BY-SA
More than 1.8 billion people lack access to adequate and affordable housing. Yet too few countries have taken meaningful steps to ensure dignified housing for their most vulnerable citizens.
We research how cooperative housing can serve as one solution to the affordable housing crisis. There are a variety of cooperative housing models. But they generally involve residents collectively owning and managing their apartment complexes, sharing responsibilities, costs and decision-making through a democratic process.
Some countries, such as El Salvador and Colombia, have struggled to integrate housing cooperatives into their preexisting housing policies. In fact, although Latin America has a long-standing tradition of community-driven and mutual aid housing, housing cooperatives haven’t taken root in many places, largely due to weak political and institutional backing.
Uruguay is an exception.
With a population of just 3.4 million, the small Latin American nation has a robust network of housing cooperatives, which give citizens at a range of income levels access to permanent, affordable housing.
An experiment becomes law
Housing cooperatives in Uruguay emerged in the 1960s during a time of deep economic turmoil.
The first few pilot projects delivered outstanding results. Financed through a mix of government funds, loans from the Inter-American Development Bank and member contributions, they were more cost-effective, faster to build and higher in quality than conventional housing.
These early successes played a key role in the passage of Uruguay’s National Housing Law in 1968. This law formally recognized housing cooperatives and introduced a legal framework that supported different models. The most common models to emerge roughly translate to “savings cooperatives” and “mutual aid cooperatives.”
In the savings model, members pool their savings to contribute around 15% of the capital investment. This gives them access to a government-subsidized mortgage to finance the construction. The cooperative then determines how repayment responsibilities are distributed among its members. Typically, members purchase “social shares” in the cooperative, equivalent to the cost of the assigned housing unit. If a member decides to leave the cooperative, their social shares are reimbursed. These shares are also inheritable, allowing them to be passed on to heirs.
In contrast, the mutual aid model enables households without savings to participate by contributing 21 hours per week toward construction efforts. Tasks are assigned to individuals according to their abilities. They can range from manual labor to administrative tasks, such as the ordering of construction materials.
Despite their differences, both models share a fundamental principle: The land and housing units are held collectively and are permanently removed from the private market.
Typically, once cooperatives are established, each household must contribute a monthly fee that covers the repayment of the state’s loan and maintenance costs. In exchange, members have an unlimited and inheritable contract of “use and enjoyment” of a quality apartment. If a member decides to leave, they are partially reimbursed for the contributions they’ve made over time, typically with a 10% deduction that the cooperative keeps.
This ensures that cooperative housing provides long-term security and remains affordable, especially for those at the lowest rungs of the income ladder.
The growth of this network has been possible thanks to state support, federations of cooperatives and nonprofit groups.
The state recognized that the success of housing cooperatives depended on sustained public support. The National Housing Law defined the rights and responsibilities of cooperatives. It also outlined the state’s obligations: overseeing operations, setting criteria for financial assistance and providing access to land.
Beyond organizing and advocating for the right to housing – and human rights more broadly – the Uruguayan Federation of Mutual Aid Housing Cooperatives, known as FUCVAM, offers its member cooperatives a wide range of support services, including training to strengthen cooperative management, legal counseling and conflict mediation.
Finally, the Technical Assistance Institutes, which were also recognized by the National Housing Law, are a vital pillar of this model. These are independent, nonprofit organizations that advise cooperatives.
Their role is crucial: The construction of large-scale housing projects is complicated. The vast majority of citizens have no prior experience in construction or project management. The success of Uruguay’s cooperative model would be unthinkable without their support.
From the outskirts to the city center
Uruguay’s housing cooperatives have not only expanded, but have also evolved in response to changing needs and challenges.
In their early years, most cooperatives built low-density housing on the outskirts of cities. This approach was largely influenced by the ideals of the Garden City movement, a planning philosophy of the late 19th century that prioritized low-density housing and a balance between development and green spaces. In Uruguay, there was also a cultural preference for single-family homes. And land was more expensive in city centers.
These early cooperatives, however, contributed to urban sprawl, which has a number of drawbacks. Infrastructure has to be built out. It’s harder to reach jobs and schools. There’s more traffic. And single-family homes aren’t an efficient use of land.
Meanwhile, in the 1970s Montevideo’s historic city center started experiencing abandonment and decay. During this period, the country’s shifting socioeconomic landscape created a set of new challenges. More people relied on irregular incomes from informal work, while more single women became heads of households.
In response, housing cooperatives have shown a remarkable ability to adapt.
For women, by women
As urban sprawl pushed development outward, Montevideo’s historic center, Ciudad Vieja, was hemorrhaging residents. Its historic buildings were falling apart.
Seeking to revitalize the area without displacing its remaining low-income residents, the city saw housing cooperatives as a solution.
This spurred the creation of 13 mutual aid cooperatives in Ciudad Vieja, which now account for approximately 6% of all housing units in the area.
One of the pioneers of this effort was Mujeres Jefas de Familia, which translates to Women Heads of Household. Known by the acronym MUJEFA, it was founded in 1995 by low-income, single mothers. MUJEFA introduced a new approach to cooperative housing: homes designed, built and governed with the unique needs of women in mind.
Architect Charna Furman spearheaded the initiative. She wanted to overcome the structural inequalities that prevent women from finding secure housing: financial dependence on men, being primary caregivers, and the absence of housing policies that account for single women’s limited access to economic resources.
Remaining in Ciudad Vieja was important to members of MUJEFA. Its central location allowed them to be close to their jobs, their kids’ schools, health clinics and a close-knit community of friends and family.
However, the project faced major hurdles. The crumbling structure the group acquired in 1991 – an abandoned, heritage-listed building – needed to be transformed into 12 safe, functional apartments.
The cooperative model had to adapt. Municipal authorities temporarily relaxed certain regulations to allow older buildings to be rehabbed as cooperatives. There was also the challenge of organizing vulnerable people – often long-time residents at risk of eviction, who were employed as domestic workers or street vendors – into groups that could actively participate in the renovation process. And they had to be taught how to retrofit an older building.
Today, 12 women with their children live in the MUJEFA cooperative. It’s a compelling example of how cooperative housing can go beyond simply putting a roof over families’ heads. Instead, it can be a vehicle for social transformation. Women traditionally excluded from urban planning were able to design and construct their own homes, creating a secure future for themselves and their children.
Building up, not out
COVIVEMA 5, completed in 2015, was the first high-rise, mutual aid cooperative in a central Montevideo neighborhood. Home to around 300 residents, it’s made up of 55 units distributed across two buildings.
Members participated in the building process with guidance from the Centro Cooperativista Uruguayo, one of the oldest and most respected Technical Assistance Institutes. Architects had to adapt their designs to make it easier for regular people with little experience in construction to complete a high-rise building. Cooperative members received specialized training in vertical construction and safety protocols. While members contributed to the construction, skilled labor would be brought in when necessary.
Members of the cooperative also designed and built Plaza Luisa Cuesta, a public square that created open space in an otherwise dense neighborhood for residents to gather and socialize.
Housing cooperatives are neither public nor private. They might be thought of as an efficient and effective “third way” to provide housing, one that gives residents a stake in their homes and provides long-term security. But their success depends upon institutional, technical and financial support.
Jennifer Duyne Barenstein receives funding from the Swiss National Science Foundation. She is affiliated with the Centre for Research on Architecture, Society and the Built Environment, Department of Architecture, ETH Zurich.
Daniela Sanjinés receives funding from the Swiss National Science Foundation. She is affiliated with the Centre for Research on Architecture, Society and the Built Environment, Department of Architecture, ETH Zurich.
The company Colossal Biosciences has announced plans to bring back the moa, a giant bird related to today’s ostriches. John Megahan/Wikipédia, CC BY
Having already set out to revive the woolly mammoth and the dire wolf, the U.S. company Colossal Biosciences is now launching a new “de-extinction” project: a giant bird, the moa, which died out roughly 600 years ago. These projects raise a great many questions.
This convergence highlights fundamental questions about the ethics of innovation and its viability in an economic environment made deliberately unpredictable.
The “resurrection” of the dire wolf in an unstable economy
Colossal Biosciences recently drew attention by producing a creature with a genetic makeup similar to that of the dire wolf (Canis dirus), a giant predator that went extinct 13,000 years ago. The process involves extracting and reconstructing ancient DNA, then genomically editing embryos of modern wolves, building on the momentum created by the sequencing of the first extinct species’ genome in 2008.
Moreover, current practices of trade renegotiation and tariff threats revive a vision that modern economic theory has considered obsolete since Ricardo, penalizing biotechnology companies in particular, which depend heavily on long-term investment and stable financing.
In an environment where projects can be abruptly halted for political or economic reasons, what becomes of the moral responsibility toward beings created solely as stepping stones to a hypothetical goal? This drift instrumentalizes living creatures in service of a technological promise whose fulfillment remains uncertain, raising ethical questions that go well beyond the framework of traditional scientific research.
In this context, the European Union is emerging as a solid alternative, with a single market, a stable regulatory framework and integrated funding policies built around scientific excellence and societal impact (see Horizon Europe). Its political commitment to biodiversity is an asset, as are its active use of the precautionary principle and its public support for “responsible research and innovation (RRI).” This allows social and environmental impacts to be anticipated and the stakeholders concerned to be involved, which is particularly crucial for controversial technologies.
European regulation, recently strengthened on biosecurity, charts a cautious path between American laissez-faire and outright bans, and relies on a dialogue among science, politics and civil society that is conducive to developing shared ethical standards. The funding ecosystem, combining public funds, venture capital and crowdfunding, reduces vulnerability to economic cycles and favors the long-term viability of innovations.
Against this backdrop, the recent passage of the Duplomb law in France seems to send an incomprehensible message and constitutes an economic mistake, recreating institutional instability where it had disappeared. By reintroducing pesticides banned since 2018 and loosening environmental safeguards, this legislation generates regulatory volatility that is harmful to biotechnology. Innovative sectors perceive such reversals as signals of institutional unpredictability that undermine the attractiveness of the territories concerned.
This regulatory inconsistency threatens France’s image as a stable home for technological innovation, particularly in precision agriculture, where regulatory coherence conditions R&D investment. The controversial adoption of this law, against all scientific advice, illustrates how political inconsistency could turn a European competitive advantage into an economic handicap.
The unsettling coexistence of a “resurrected” dire wolf and an American economy destabilized by its own choices illustrates many contemporary dilemmas. In this context, biotechnological innovation stands out because it requires not only technical prowess but also a stable environment and a rigorous ethical framework.
De-extinction technologies, fascinating but problematic, raise essential questions about our relationship with nature, intergenerational responsibility and the direction of progress.
As the United States appears to be moving away from responsible innovation, Europe could establish itself as the new cradle of these advances, provided its member states do not themselves sink into regulatory incoherence. But wherever innovation develops, the ethical debate raised by de-extinction calls for collective, transnational reflection that transcends market logic and geopolitical borders.
Caroline Gans Combe has received funding from the European Union through the DEFORM and ProRes projects.
Les astrocytes, ici en vert, au milieu des neurones en rouge, sont un type de cellules présentes dans le cerveau.Dchordpdx/Wikipedia, CC BY
Bien que leur rôle soit moins connu que celui des neurones, les astrocytes sont des cellules essentielles au fonctionnement du cerveau. Une nouvelle étude, chez la souris, parue dans la revue Nature communications révèle le rôle des astrocytes du striatum, une structure du circuit de la récompense, dans le contexte de l’obésité induite par une alimentation enrichie en graisses et en sucres. Ces cellules pourraient représenter une cible intéressante pour le traitement des maladies métaboliques.
Le cerveau est constitué de milliards de neurones. Ce sont des cellules excitables, c’est-à-dire qu’elles peuvent générer des potentiels d’actions et transmettre des informations aux autres neurones sous forme de courant électrique. Cependant, les neurones ne constituent que la moitié des cellules du cerveau, l’autre moitié étant constitué de cellules gliales, parmi lesquelles on trouve les astrocytes. Ces derniers sont impliqués dans de nombreuses pathologies du cerveau telles que les maladies neurodégénératives (la maladie d’Alzheimer ou de Parkinson), les troubles psychiatriques ou l’épilepsie.
Contrairement aux neurones, les astrocytes ne peuvent pas générer de courants électriques, mais présentent des variations de leur concentration en calcium intracellulaire. Le calcium intracellulaire est impliqué dans de nombreux processus liés au fonctionnement des cellules et aurait un rôle indispensable pour la physiologie des astrocytes. Le fait que les astrocytes soient silencieux pour les méthodes classiques d’enregistrement de l’activité cérébrale telles que l’électroencéphalogramme (ECG) a rendu leur étude beaucoup plus lente et difficile. Par conséquent, leur rôle a été largement sous-estimé et nous sommes encore loin d’avoir élucidé la manière dont ils communiquent avec les neurones.
C’est avec le développement d’outils d’imagerie ciblant des acteurs cellulaires spécifiques que leur rôle dans les processus cérébraux peut enfin être élucidé. Les résultats que nous avons obtenus permettent de mettre en évidence plusieurs caractéristiques de l’activité astrocytaire dans le contexte de l’obésité induite par l’alimentation enrichie en graisse et en sucre.
Quand le corps n’équilibre plus la balance énergétique
L’obésité est un problème majeur de santé publique, affectant 17 % de la population française et accroissant le risque relatif d’un ensemble de pathologies : par exemple, les maladies cardiaques, l’hypertension, le diabète de type 2, des maladies du foie et certaines formes de cancer. Cette pathologie est complexe et implique différents facteurs dont la contribution au développement de l’obésité varie considérablement d’un individu à l’autre : ces facteurs sont génétiques, environnementaux (comme le stress ou la qualité du sommeil) ou liés aux habitudes alimentaires. Une alimentation enrichie en graisses et en sucres est définitivement une coupable identifiée.
Notre corps maintient un état d’équilibre appelé homéostasie, grâce à un mécanisme de régulation précis qui équilibre les apports nutritionnels et les dépenses énergétiques. Cet équilibre de la balance énergétique est réalisé grâce à des circuits cérébraux bien identifiés, impliquant notamment l’hypothalamus. Toutefois, un autre moteur puissant de l’alimentation est l’aspect hédonique de la nourriture, c’est-à-dire le plaisir que nous trouvons à consommer des aliments appétissants, au-delà des besoins énergétiques du corps. Cette motivation à manger pour le plaisir repose notamment sur la libération de dopamine au niveau d’une région cérébrale appelée striatum.
Il a été démontré que l’obésité induite par l’alimentation était associée à des altérations de la transmission de la dopamine, à des dérèglements alimentaires de type addictif/compulsif ainsi qu’à une altération de la flexibilité cognitive, c’est-à-dire la capacité à s’adapter facilement à de nouvelles situations.
Astrocytes, cells that protect neurons
While the role of neurons (which release or respond to dopamine) has been studied extensively in these physiological and pathophysiological processes, the role of astrocytes has long been neglected.
An excess of nutrients promotes inflammatory mechanisms in the brain, accompanied by the release of substances that can alter the functioning of both neurons and astrocytes. Astrocytes occupy a strategic position in the brain, at the interface between blood vessels and neurons; these pivotal cells may control both neuronal information and energy supply. Under conditions of nutritional excess in the bloodstream, they could form a first line of defence, protecting neurons from the damage induced by substances circulating in the blood.
Sections of mouse brain showing astrocytes (in green) in the striatum. Obesity changes the shape of astrocytes, which become reactive, a sign of brain inflammation. Montalban et al./Nature Communications, provided by the author
In our work, carried out in mice, we first show that high-fat diets affect the structure and function of astrocytes in the striatum.
We had already characterised such changes in the hypothalamus, the region involved in initiating food intake and in close contact with the blood compartment, but they had been very little characterised in the striatum. We show, on the one hand, astrocyte reactivity, expressed as morphological changes, and, on the other, changes in the dynamics of calcium fluxes, which can alter the astrocytes' communication with the neurons of this structure.
An impact on cognitive flexibility and energy metabolism
Having made this observation, we decided to manipulate astrocytes directly, using an approach that forces a signalling cascade in the cells: a synthetic receptor acting as a switch is inserted specifically into astrocytes. In particular, this approach can trigger a calcium wave (calcium being a key intracellular second messenger) so that its consequences can be observed.
What happens if the amount of calcium is artificially increased and the astrocytes are “activated”? Does this affect neuronal activity and the behaviour of the mice?
Activating this molecular switch, with the resulting coherent influx of calcium into the targeted astrocyte population, did indeed change the kinetics and response of neighbouring neurons, demonstrating for the first time that manipulating astrocytes can interfere with neuronal networks.
We applied this technique to compare mice fed a standard diet with mice made obese by a diet enriched in fat and sugar. The mice on the enriched diet display cognitive deficits, expressed as difficulty adapting to a new situation. In our case, the mice had to learn that a reward was located in the left arm of a maze; we then examined how they adapted when we switched the rewarded arm.
In this setting, the mice fed the enriched diet struggled to adapt, but forced activation of astrocytes in the dorsal striatum allowed the animals to relearn the task easily, and without any weight loss. Manipulating striatal astrocytes thus corrected the cognitive impairment induced by the rich diet.
While the striatum is well known for its role in cognitive and motivational processes, this brain structure is not traditionally associated with regulating the body's metabolism. Our study adds a further piece of evidence in this direction. Indeed, we show that manipulating astrocytes in the striatum in vivo exerts control over the animal's energy metabolism, in particular by affecting the choice of metabolic substrates (lipids or sugars) that the mouse uses to fuel its metabolism. After astrocyte activation, the mice use more lipids.
This work reveals a new role for astrocytes in the reward circuits. They take part in controlling cognitive functions, and our results illustrate for the first time their capacity to restore a cognitive function in an obesogenic context. The work also establishes a direct link between striatal astrocytes and the control of the animal's overall energy metabolism.
A promising approach would be to develop therapeutic strategies that specifically target astrocytes, rather than neurons, within the reward system to treat obesity and, more broadly, metabolic diseases.
Serge Luquet has received funding from the Agence Nationale de la Recherche (ANR) (ANR-19-CE37-0020-02, ANR-20-CE14-0020 and ANR-20-CE14-0025-01) and from the Fondation pour la Recherche Médicale (FRM), project #EQU202003010155.
Claire Martin and Enrica Montalban do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no affiliations beyond their academic appointments.
After South Africa’s first democratic elections in 1994, there was significant optimism about police reform in the country. Impressive steps were taken to bring the South African Police Service under civilian control and to create a service responsive to calls for assistance from the public.
During the apartheid period, South Africa’s police worked to preserve the political order and pursue political opponents. It did not focus on dealing with crime. This is why the achievements of the 1990s are so important. For the first time, black South Africans could call upon officers to respond to personal emergencies. This period also saw a drop in crime levels.
However, this promising early transformation was interrupted. The appointment of Jackie Selebi as national police commissioner in 2000 heralded a new era. Selebi was an African National Congress (ANC) insider. The ANC originated as a liberation movement and has governed the country since 1994.
Selebi had served as the head of the ANC’s Youth League in the 1980s, when it was banned. In 1987 he was appointed to the organisation’s national executive committee, its highest decision-making organ.
His appointment as police commissioner was the start of significant change in the purpose of policing. It marked the end of the focus on civilian control of the police force and prosecuting authorities. As an ANC insider, Selebi led efforts to establish party control over the police.
Other telling developments ensued. The Scorpions, an elite investigative unit whose job was to pursue high-profile cases against senior ANC politicians (among others), were disbanded in 2009 by acting president Kgalema Motlanthe.
The police became increasingly entangled in the ANC’s internal political conflicts. At the same time the office of the national police commissioner experienced high turnover due to intense political manoeuvring. Between 2009 and 2022, there were seven national commissioners.
Recent developments have once again brought the intermingling of police work and power battles in the ANC to the fore. In early July 2025, Lieutenant General Nhlanhla Mkhwanazi, the commissioner of police in the province of KwaZulu-Natal, made some startling claims. He called a press conference and, wearing camouflage uniform, he implicated the minister of police, Senzo Mchunu, together with the deputy national commissioner for crime detection, in a scheme to close down investigations into political assassinations in the province.
President Cyril Ramaphosa rushed back from a meeting of the Brics countries in Brazil to attend to the matter. He announced that the police minister had been placed on leave with immediate effect. He also announced a judicial inquiry into the allegations.
I have conducted research into South Africa’s security apparatus over the last decade. Based on this work, and new research forthcoming in the Journal of Southern African Studies done with Jelena Vidojevic, co-founder of the New South Institute, it is clear that elite contestation in the ANC is intensifying.
In other words, the ability of internal party structures to manage gatekeeping is declining. Many of the people involved are indifferent or even hostile to South Africa’s democratic and constitutional order.
As the ability of some political elites to access state resources through the party declines, some are linked with organised criminal networks. Organised crime has been on the edges of South African politics. It now risks taking a more central role.
In this environment, the police service will often be the thin (blue) line between multiparty contestation according to constitutional rules and the criminalisation of politics in South Africa.
The shift
Large organisational changes within the police vividly illustrate this shift away from its core function.
The Visible Policing programme was meant to deter crime through patrols, checkpoints and roadblocks. Instead, its resource allocation steadily declined, and employee numbers dropped between 2015 and 2021.
Detective services and crime intelligence also experienced such declines.
Conversely, employee numbers in the Protection and Security Services programme, responsible for providing bodyguards to politicians, increased sharply between 2014 and 2016.
Evidence heard by the commission of inquiry into state capture suggested that some officers and budgets in the service were even used to supply President Jacob Zuma and other politicians with what amounted to a private militia.
This reorientation of resources coincided with a rise in crime across the country, a decline in arrests by 24.5%, and a drop in the police’s efficacy in solving crimes.
Furthermore, a politicised police leadership effectively stopped policing various categories of crime. This was particularly true of offences like fraud, corruption, and certain types of theft, and particularly when politically connected persons were involved.
The state capture commission heard extensive evidence about the failure of the police to pursue politically sensitive investigations. Investigations into senior officials were frequently frustrated or impeded, and cases at state-owned enterprises were abandoned.
This shows how police resources were actively redirected as weapons of elite competition, pursuing political enemies and protecting allies within the ruling party.
Mkhwanazi’s claims, if substantiated, suggest that this political policing remains entrenched.
What now?
Ramaphosa has announced the appointment of Firoz Cachalia as the acting minister of police. Cachalia, a well regarded legal academic, served as ANC minister for community safety. Between 2019 and 2022 he was part of the ANC’s national executive committee.
His appointment raises serious questions.
If the core problem with the police is that it has become embroiled in ANC internal politics, having an ANC insider head the ministry of police (even if only on an acting basis) threatens only to compound the problem.
Moreover, South Africans have already witnessed a long and expensive judicial inquiry into state capture. And despite extensive evidence of police failure to pursue politically sensitive investigations, nothing concrete has come of it.
How likely is it that this new initiative will be any different, especially if those investigating it and presiding over key institutions are themselves ANC insiders?
To depoliticise the police service and redirect its attention and activities towards crime and emergencies, a crucial first step is to reconsider the appointment processes for the national police commissioner and other top managers.
Under the current system the president has sole discretion. This bakes party-political considerations into the decision-making process.
Without structural changes, genuine democratic policing will remain an elusive ideal.
In 2024/25 the murder rate in South Africa stood at 42 per 100,000, among the highest in the world and close to levels not seen since the early 2000s.
At the very least, the minister of police must not be an ANC insider. Democratic renewal in South Africa requires bringing the police firmly under parliamentary control.
Ivor Chipkin teaches public policy at the Gordon Institute of Business Science (GIBS) at the University of Pretoria. He is the director of the New South Institute.
Most browsers offer a “private browsing” mode, often seen as a way to surf anonymously. Yet this mode does not guarantee online anonymity, and many internet users overestimate what it does.
Private browsing prevents someone else with access to your computer from seeing your online activity after the fact. It is useful, for example, on a public or shared computer, to avoid leaving saved credentials or a compromising history behind.
However, it is important to understand that this confidentiality is above all local (on your device). Private mode does not mean browsing anonymously on the internet itself. It is not an “invisibility shield” against the websites you visit, your internet service provider (ISP) or your employer.
Studies confirm the technical limits of private mode. Traces persist after a session is closed, contradicting what browser documentation claims. One analysis on Android revealed that RAM retains sensitive data (keywords, credentials, cookies) that can be recovered even after a restart.
Private mode does not block advertising cookies; it simply deletes them at the end of the session. When you return to a site in a new private session, it does not “remember” your previous choices, so you often have to set your preferences again (accepting or refusing cookies). Cookie consent banners, familiar to European internet users since the General Data Protection Regulation (GDPR) and the ePrivacy directive came into force, therefore reappear every time. Consent fatigue leads many users to accept everything without reading.
What are the real alternatives for protecting yourself?
Private mode is not enough to guarantee online anonymity. To better protect your privacy, you need to combine several tools.
A VPN (virtual private network) creates a secure tunnel between your device and the internet, allowing more confidential browsing by encrypting your data and masking your IP address. In 2024, 19% of French VPN users said their main goal was to hide their activity, and 15% to protect their communications.
A browser such as Tor goes further: it bounces your requests through several relays to mask your identity entirely. It is the tool of choice for journalists and activists, but its slowness can discourage everyday use. Alternatives such as Brave or Firefox Focus offer hardened anti-tracking modes, while extensions such as uBlock Origin or Privacy Badger block ads and trackers effectively. These extensions are compatible with the main browsers: Chrome, Firefox, Edge, Opera and Brave.
Good digital hygiene is also essential: manage your cookies, limit permissions, prefer search engines such as DuckDuckGo (which does not store your searches, does not profile you and automatically blocks many trackers), and avoid centralising your data in a single account. Online, genuine privacy rests on a global, proactive and informed approach.
Sabrine Mallek does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations beyond her research organisation.
When former world chess champion Vladimir Kramnik implied that Hikaru Nakamura, currently one of the best players in the world, was cheating on the online platform Chess.com, a statistician was called in to investigate.
That is exactly what happened last summer. Erik Allebest, CEO of the world's largest online chess site, Chess.com, asked me to investigate former world chess champion Vladimir Kramnik's allegations about the long winning streaks of one of the world's best players, the American Hikaru Nakamura.
Kramnik stated that these streaks had a very low probability of occurring and were therefore highly suspicious. He did not formally accuse Hikaru of cheating, but the implication was clear. Online, tempers flared quickly: Kramnik's supporters posted virulent comments (often in Russian) about the alleged cheating, while many Chess.com players and Hikaru supporters rejected the accusations.
Who is right? Who is wrong? Is it even possible to settle the question?
Erik Allebest asked me to carry out an independent, impartial statistical analysis to determine just how improbable these winning streaks really were.
Calculating the probabilities
To tackle this problem, I first had to calculate the probability of each player winning or drawing each game. Players can differ greatly in strength, and the best obviously have a better chance of beating less experienced opponents. But by how much?
Chess.com assigns each player a rating that changes after every game, and these ratings were provided to me. My analysis suggested that a mathematical model could give an accurate estimate of the probabilities of a win, loss or draw for each game.
Moreover, deviations from these probabilities in the outcomes of successive games were approximately independent, so the influence of one game on the next could safely be ignored. This gave me a clear probability of each player winning (or losing) each game.
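The article does not reproduce the study's exact model, but the standard way to turn a rating gap into a win probability is an Elo-style logistic curve. The sketch below uses the conventional 400-point Elo scale purely as an illustrative assumption, not as the study's actual calibration:

```python
def win_probability(rating_a: float, rating_b: float) -> float:
    """Expected score for player A against player B under the classic
    Elo model: a 400-point rating gap corresponds to 10-to-1 odds."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Equal ratings give a 50% expected score; a 400-point favourite
# is expected to score about 10/11, roughly 91%.
even = win_probability(2800, 2800)
favourite = win_probability(3000, 2600)
```

A model like this assigns every game its own probability, which is all the streak analysis below needs as input.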
I could then analyse the winning streaks that had sparked so much heated debate. It turned out that Hikaru Nakamura, unlike most other top players, had played many games against much weaker players, giving him a very high probability of winning each game. But even so, is it normal to see such long winning streaks, sometimes more than 100 games in a row?
Testing for randomness
To check, I ran what are known as Monte Carlo simulations, which repeat an experiment while incorporating random variation.
I wrote computer programs to randomly assign wins, losses and draws to each of Hikaru Nakamura's games, according to the probabilities in my model. Each time, I had the computer measure the most surprising (least probable) winning streaks. This let me compare Hikaru's actual streaks against the predictions.
I found that in many simulations, the simulated results included streaks just as “improbable” as the real ones.
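A minimal sketch of this kind of simulation is below. The 0.95 win probability and the game counts are purely illustrative stand-ins, not the study's actual data; the point is how one counts, across many simulated "careers", how often a streak as long as the observed one arises by chance alone:

```python
import random

def longest_win_streak(win_probs, rng):
    """Simulate one sequence of games; return the longest run of wins."""
    longest = current = 0
    for p in win_probs:
        if rng.random() < p:  # win this game with probability p
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Illustrative numbers only: 5,000 games, each won with probability 0.95,
# roughly the situation of a top player facing much weaker opposition.
rng = random.Random(42)
streaks = [longest_win_streak([0.95] * 5000, rng) for _ in range(1000)]

# Fraction of simulated careers containing a 100+ game winning streak.
frac_100 = sum(s >= 100 for s in streaks) / len(streaks)
```

With numbers in this range, long streaks turn out to be routine: over 5,000 games at a 95% win rate, the expected number of 100-game winning runs is well above one, which is the intuition behind the finding described above.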
This shows that Hikaru's chess results were roughly what one would expect. His probability of winning each game was so high, and he had played so many games on Chess.com, that winning streaks this long were likely to emerge under the laws of probability.
Responses to my findings
I wrote a short report on my research and sent it to Chess.com.
The site published an article about it, which attracted many comments, most of them favourable.
Nakamura then posted his own video commentary, also supporting my analysis. Meanwhile, Kramnik released a 29-minute video criticising my research.
Since Kramnik had raised some substantive points, I wrote an addendum to my report addressing his concerns and showing that they did not affect the conclusion. I also turned my report into a scientific paper, which I submitted to a research journal.
I then returned to my teaching duties and set the chess controversy aside until last December, when I received a response of more than six pages: three referee reports and editors' comments from the journal to which I had submitted my paper.
I also discovered that Kramnik had posted a second, 59-minute video criticising my addendum and raising further points.
I took the additional points raised by Kramnik and by the referees into account while revising my paper for publication. It was eventually published in the Harvard Data Science Review.
I was pleased to see my results published in a prestigious statistics journal, giving them an official seal of approval. And perhaps, at last, to settle this chess controversy at the highest level.
Jeffrey S. Rosenthal receives research funding from Canada's NSERC, but received no compensation from Chess.com or anyone else for this work.
Australia has joined 28 international partners in calling for an immediate end to the war in Gaza and a lifting of all restrictions on food and medical supplies.
Foreign Minister Penny Wong, along with counterparts from countries including the United Kingdom, France and Canada, has signed a joint statement demanding that Israel comply with its obligations under international humanitarian law.
The statement condemns Israel for what it calls “the drip feeding of aid and the inhumane killing of civilians” seeking “their most basic need” of water and food, saying:
The suffering of civilians in Gaza has reached new depths. The Israeli government’s aid delivery model is dangerous, fuels instability and deprives Gazans of human dignity […] It is horrifying that over 800 Palestinians have been killed while seeking aid.
Weapon of war
Gazans, including malnourished mothers denied baby formula, face impossible choices as Israel intensifies its use of starvation as a weapon of war.
In Gaza, survival requires negotiating what the United Nations calls aid “death traps”.
According to the UN, 875 Gazans have been killed – many of them shot – while seeking food since the US-backed Gaza Humanitarian Foundation began operating in late May. Another 4,000 have been injured.
Gaza has been described as the “hungriest place on Earth”, with aid trucks being held at the border and the United States destroying around 500 tonnes of emergency food because it was just out of date.
More than two million people are at critical risk of famine. The World Food Programme estimates 90,000 women and children require urgent treatment for malnutrition.
Nineteen Palestinians have starved to death in recent days, according to local health authorities.
We can’t say we didn’t know
After the breakdown of the January ceasefire, Israel implemented a humanitarian blockade on the Gaza Strip. Following mounting international pressure, limited aid was permitted and the controversial Gaza Humanitarian Foundation began operations.
As anticipated, only a fraction of the aid has been distributed.
About 1,600 trucks entered Gaza between May 19 and July 14, well below the 630 trucks needed every day to feed the population.
Israeli ministers have publicly called for food and fuel reserves to be bombed to starve the Palestinian people – a clear war crime – to pressure Hamas to release Israeli hostages.
Famine expert Alex De Waal says Israel’s starvation strategy constitutes a dangerous weakening of international law. It also disrupts norms aimed at preventing hunger being used as a weapon of war:
operations like the Gaza Humanitarian Foundation are a big crack in these principles [that is] not going to save Gaza from mass starvation.
Palestinian organisations were the first to raise the alarm over Israel’s plans to impose controls over aid distribution.
UN Relief Chief Tom Fletcher briefed the UN Security Council in May, warning of the world’s collective failure to call out the scale of violations of international law as they were being committed:
Israel is deliberately and unashamedly imposing inhumane conditions on civilians in the occupied Palestinian territory.
Tom Fletcher briefing the United Nations on the ‘atrocity’ being committed in Gaza.
Since then, clear and unequivocal warnings of the compounding risks of genocide, war crimes, crimes against humanity and ethnic cleansing have intensified from the UN, member states and international law experts.
Weaponising aid
The Gaza Humanitarian Foundation claims it has handed out millions of meals since it began operating in the strip in May. But the UN has called the distribution model “inherently unsafe”.
Near-daily shootings have occurred since the militarised aid hubs began operating. Malnourished Palestinians risking death to feed their families are trekking long distances to reach the small number of distribution sites.
While the foundation denies people are being shot, the UN has called the aid delivery mechanism a “deliberate attempt to weaponise aid” that fails to comply with humanitarian principles and risks further war crimes.
Jewish Physicians for Human Rights has rejected the aid’s “humanitarian” characterisation, stating it “is what systematic harm to human beings looks like”.
Human rights and legal organisations are calling for all involved to be held accountable for complicity in war crimes that “exposes all those who enable or profit from it to real risk of prosecution”.
Mounting world action
Today’s joint statement follows growing anger and frustration in Western countries over the lack of political pressure on Israel to end the suffering in Gaza.
Polling in May showed more than 80% of Australians regarded Israel's denial of aid as unjustifiable and wanted to see Australia doing more to support civilians in Gaza.
Last week’s meeting of the Hague Group of nations shows more collective concrete action is being taken to exert pressure and uphold international law.
The 12 member states agreed to a range of diplomatic, legal and economic measures, including a ban on ships transporting arms to Israel.
The time for humanity is now
States will continue to face increased international and domestic pressure to take stronger action to influence Israel’s conduct as more Gazans are killed, injured and stripped of their dignity in an engineered famine.
This moment in Gaza is unprecedented in terms of our knowledge of the scale and gravity of violations being perpetrated and what failing to act means for Palestinians and our shared humanity.
Now is the time to exert diplomatic, legal and economic pressure on Israel to change course.
History tells us we need to act now – international law and our collective moral conscience requires it.
Amra Lee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Spurred on by hashtags and usernames indicating these feats involve steroids, soon Mark is online, ordering his first “steroid cycle”. No script, no warnings, just vials in the mail and the promise of “gains”.
A few weeks later, he’s posting progress shots and getting tagged as #MegaMark. He’s pleased. But what if I told you Mark was unknowingly injecting toxic chemicals?
In our new research we tested products sold in Australia’s underground steroid market and found many were mislabelled or missing the expected steroid entirely.
Even more concerning, several contained heavy metals such as lead, arsenic and cadmium. These substances are known to cause cancer, heart disease and organ failure.
What are anabolic steroids, and who is using them?
Anabolic steroids are synthetic drugs designed to mimic the effects of testosterone. Medical professionals sometimes prescribe them for specific health conditions (for example, hypogonadism, where the body isn’t making enough sex hormones). But they are more commonly taken by people looking to increase muscle size, improve athletic performance, or elevate feelings of wellbeing.
In Australia, it’s illegal to possess steroids without a prescription. This offence can attract large fines and prison terms (up to 25 years in Queensland).
Despite this, they’re widely available online and from your local “gym bro”. So it’s not surprising we’re seeing escalating use, particularly among young men and women.
People usually take steroids as pills and capsules or injectable oil- or water-based products. But while many people assume these products are safe if used correctly, they’re made outside regulated settings, with no official quality checks.
For this new study, we analysed 28 steroid products obtained from people across Australia, which they had purchased either online or from peers at the gym. These included 16 injectable oils, ten varieties of oral tablets, and two “raw” powders.
An independent forensic lab tested the samples for active ingredients, contaminants and heavy metals. We then compared the results against what people thought they were taking.
More than half of the samples were mislabelled or contained the wrong drug. For example, one product labelled as testosterone enanthate (200mg/mL) contained 159mg/mL of trenbolone (a potent type of steroid) and no detectable testosterone. Oxandrolone tablets (also known as “Anavar”, another type of steroid) were sold at a claimed strength of 10mg but actually contained 6.8mg, about a third less than labelled.
Just four products matched their expected compound and purity within a 5% margin.
But the biggest concern was that all steroids we analysed were contaminated with some level of heavy metals, including lead, arsenic and cadmium.
While all of the concentrations we detected were within daily exposure limits regarded as safe by health authorities, more frequent and heavier use of these drugs would quickly see people who use steroids exceed safe thresholds. And we know this happens.
If consumed above safe limits, research suggests lead can damage the brain and heart. Arsenic is a proven carcinogen, having been linked to the development of skin, liver and lung cancers.
People who use steroids often dose for weeks or months, and sometimes stack multiple drugs, so these metals would build up. This means long‑term steroid use could be quietly fuelling cognitive decline, organ failure, and even cancer.
What needs to happen next?
Heavy metals such as lead, arsenic and cadmium often contaminate anabolic steroid products because raw powders sourced from some manufacturers, particularly those in China, may be produced with poor quality control and impure starting materials. These metals can enter the supply chain during synthesis, handling, or from contaminated equipment and solvents, leading to their presence in the final products.
Steroid use isn’t going away, so we need to address the potential health harms from these contaminants.
While pill testing is now common at festivals for drugs such as ecstasy, testing anabolic steroids requires more complex chemical analysis that cannot be conducted on-site. Current steroid testing relies on advanced laboratory techniques, which limits availability mostly to specialised research programs such as those in Australia and Switzerland.
We need to invest properly in a national steroid surveillance and testing network, which will give us data‑driven insights to inform targeted interventions.
We also need to see peer‑led support through trusted programs to educate people who use steroids around the risks. The programs should be based in real evidence, and developed by people with lived experience of steroid use, in partnership with researchers and clinicians.
Timothy Piatkowski receives funding from Queensland Mental Health Commission. He is affiliated with Queensland Injectors Voice for Advocacy and Action as the Vice President. He is affiliated with The Loop Australia as the research lead (Queensland).