Toxic pollution builds up in snake scales: what we learnt from black mambas

Source: The Conversation – Africa – By Cormac Price, Post-doctoral fellow, HerpHealth Lab, Unit for Environmental Sciences and Management, North-West University; University of KwaZulu-Natal

Black mambas (Dendroaspis polylepis) are Africa’s longest, most famous venomous snakes. Despite their fearsome reputation, these misunderstood snakes are vital players in their ecosystems. They keep rodent populations in check and, in turn, help to protect crops and limit disease spread. The species ranges widely across sub-Saharan Africa, from Senegal to Somalia and south into South Africa. They can adapt to many environments.

Zoologist Cormac Price, in new research with professors Marc Humphries and Graham Alexander and reptile conservationist Nick Evans, found that black mambas can be indicators of heavy metal pollution. We asked him about it.

How do black mambas indicate toxic pollution?

It’s about bio-accumulation. Bioaccumulation happens when chemicals, like pesticides or heavy metals, build up in an organism’s body. These toxins come from polluted environments, from waste products of human activities like manufacturing. They pollute water or soil and gradually accumulate in plants and animals.

If toxins are present in the environment, they may first be taken in by plants, and then by animals that eat the plants, and animals that eat those animals. Black mambas are quite high up the food chain, so a lot of the toxins would accumulate in their bodies. These poisonous substances can reach dangerous levels, causing health problems for whatever eats them.

We tested the presence of four types of heavy metals (arsenic, cadmium, lead and mercury) in the bodies of black mambas.

All our samples were from the eThekwini Municipality (greater Durban area) in South Africa. Durban is a busy shipping container port and has a large industrial sector that includes chemicals, petrochemicals and automotive manufacturing. Alongside all this industry the municipality also has a network of conservancies and green spaces, known as the Durban Metropolitan Open Space System.

We chose to test for these metals because they are widely used in different industries and can cause serious harm in the body. Mercury primarily damages the nervous system, arsenic can cause cancer and skin lesions, cadmium harms kidneys and bones, and lead mainly affects brain development and blood functions. Because these metals accumulate over time and are difficult to break down, even low-level exposure can lead to chronic poisoning and long-term health problems.

Black mambas appear to be doing well in Durban and taking advantage of the abundance of rodents, which they eat. Wherever there is human settlement there will be waste and discarded food which rodents take full advantage of. Black mambas can also be quite site-specific when not disturbed, living in the same refuge for many years, giving a clearer indication of pollution levels at that specific site. This makes the snakes potentially good bioindicator species.

A bioindicator species is one that helps us understand the health of an environment. Because such species are sensitive to changes like pollution or habitat damage, their presence, absence or condition can reveal whether an ecosystem is healthy or is experiencing increasing pollution or degradation.

The pollutants can be detected and quantified from a non-invasive, harmless scale clipping. Snake scales are composed mostly of keratin, the same sort of protein that forms human hair and nails. Clipping a very thin slice of snake scale is as harmless as trimming a human fingernail.

We collected 31 mambas that had already been killed by vehicles, people or dogs, and tested muscle and liver samples from them for toxins. We also took scale clippings from 61 live snakes.

This was the first time in Africa that a species of snake was tested to see if it could be used as an indicator species of heavy metal pollution.

What did you find?

We found that the heavy metal concentrations in scales correlated with those found in the muscle and liver samples. For three of the four metals, scales were as accurate for testing as muscle and liver samples. So the harmless testing method is as good as the more invasive one.

For arsenic, cadmium and lead, the snakes accumulated significantly lower concentrations of these toxins in the open, natural sites of the Durban Metropolitan Open Space System than in more industrial and commercial areas. The differences for mercury were less pronounced because mercury is more volatile and travels more readily through the environment.

What made you test mamba scales in the first place?

In 2020, I attended a conference on amphibians and reptiles, where a friend of mine presented his work on heavy metal pollutants in tiger snakes in the city of Perth, Australia.

I’ve also been working with Nick Evans of KZN Amphibian & Reptile Conservation for some years on urban reptile ecology. Nick began collecting scale clippings, and while looking through the literature I realised how novel this was on a continental scale: snakes had never before been tested in Africa as a potential bioindicator species of heavy metal pollution.

Marc Humphries is a professor of environmental chemistry, and I was aware of his work on lead exposure in Nile crocodiles at St Lucia, a wetland in South Africa. When he expressed interest in examining the scale clippings, we were thrilled. Graham Alexander’s expertise in snake behaviour in general and specifically snakes in Durban was also instrumental in the success of this research.

How can this help fight pollution?

The fight against pollution is in the hands of the municipality and city managers. What the snakes are doing is warning us of the increasing danger these pollutants pose to environmental health and ultimately human health. They are also showing us how important open spaces are to the overall environmental and human health of the city of Durban. The snakes are telling us a story; what people in authority decide to do with this story rests with them.

Nick Evans of KZN Amphibian & Reptile Conservation made valuable contributions to the research and was a co-author on the article.

The Conversation

Cormac Price does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Toxic pollution builds up in snake scales: what we learnt from black mambas – https://theconversation.com/toxic-pollution-builds-up-in-snake-scales-what-we-learnt-from-black-mambas-265802

Science costs money – research is guided by who funds it and why

Source: The Conversation – USA – By Ryan Summers, Associate Professor of Science Education, University of North Dakota

NSF is one federal agency that funds a wide range of basic science research. Nicole Fuller/National Science Foundation, CC BY

Scientists have always needed someone to help foot the bill for their work.

In the 19th century, for example, Charles Darwin made an expensive voyage to the southernmost tip of the Americas, visiting many other places en route, including his famous trek through the Galapagos Islands. The fossil evidence Darwin collected over his five-year journey eventually helped him think through the seemingly infinite variety of species, both past and present.

The HMS Beagle and its crew traversed these places while testing clocks and drawing maps for the Royal Navy, and the voyage was funded by the British government. Darwin’s position as a naturalist aboard the ship was unpaid, but, fortunately, his family’s private assets were enough to cover his living expenses while he focused on his scientific work.

Today, government and private funding both remain important for scientific discoveries and translating knowledge into practical applications.

As a professor of science education, one of my goals while preparing future teachers is to introduce them to the characteristics of scientific knowledge and how it is developed. For decades, there has been a strong consensus in my field that educated citizens also need to know about the nature of the scientific enterprise. This includes understanding who pays for science, which can differ depending on the type of research, and why it matters.

Funding for science is about more than just the amount of money. To a large extent, the organizations that fund research set the agenda, and different funders have different priorities. The downstream benefits of scientific research can be hard to see, but they typically outweigh the upfront costs.

Basic research leads to new knowledge

Basic research, also called fundamental research, involves systematic study aimed at acquiring new knowledge. Scientists often pursue research that falls into this category without specific applications or commercial objectives in mind.

Of course, it costs money to follow where curiosity leads; scientists need funding to pursue questions about the natural and material world.

About 40% of basic research in the U.S. has been federally funded in recent years. The government makes this investment because basic research is the foundation of long-term innovation, economic growth and societal well-being.

Funding for basic research is distributed by the federal government through several agencies and institutes. For more than a century, the U.S. National Institutes of Health have sponsored a breadth of scientific and health research and education programs. Since 1950, the National Science Foundation has advanced basic research and education programs, including the training of the next generation of scientists.

Other federal agencies have complementary missions, such as the Defense Advanced Research Projects Agency, created in response to the Soviet Union’s launch of Sputnik in 1957. DARPA focuses on technological innovations for national security, many of which have become fixtures of civilian life.

Through a competitive review process at these agencies, subject experts vet research proposals and make funding recommendations. The amount of funding available from the NIH, NSF and DARPA varies annually, depending on congressional appropriations. Most of the awarded funds go to universities, research institutions and other health and science organizations that conduct research. The sum of research dollars awarded differs among states.

Applying research

Scientists undertake basic research to generate new knowledge with no specific end goal in mind. Applied research is different in that it aims to find solutions to real-world problems.

Research that investigates specific, practical objectives or improvements with commercial potential is more likely to attract private investors. Companies directly invest in research and development to gain a competitive edge and turn a profit. Private industry is more likely to sink dollars into applied rather than basic research because the potential payoff in the form of a new product or advance is more visible.

From discovery to real-world implementation

As applied research addresses problems, promising findings are moved toward clinical application or mainstream use. This research and development process can lead to tangible benefits for individuals and society.

Federal agencies such as the NIH make substantial investments in the basic and applied science underlying new drugs. Pharmaceutical and biotechnology companies invest heavily in the development of drug candidates. Reports show that industry has been responsible for 50% or more of the dollars invested in health and biomedical research in recent years. This expenditure includes significant spending to advance clinical trials – the studies that test new medical treatments before they are approved for use.

The NIH funded basic research that contributed to every single drug approved by the U.S. Food and Drug Administration between 2010 and 2016. This includes key work that led to COVID-19 vaccines. The COVID-19 vaccination campaign likely saved the U.S. more than $1 trillion in health care expenses that would have otherwise been incurred and also saved lives.

Initial NSF investments in research were instrumental in capturing images of black holes and exploring deep oceans. Basic research funded by NSF paved the way for everyday conveniences such as smartphones, the Google search engine and artificial intelligence. Other funded projects led to quality-of-life improvements such as research on American Sign Language and kidney matching for transplants. Educational programming such as “Bill Nye the Science Guy” and “The Magic School Bus” was NSF-backed, too.

It matters who pays: Funding shapes science

Funders and financial systems shape the trajectory of research across fields. Institutions advertise funding opportunities based on their current priorities. Changes in the amount of funding available ultimately direct the attention of researchers. Any interruptions to basic research, such as changes to financial supports or institutions, may threaten future discoveries and potential payoffs for years to come.

According to numbers reported by a coalition of research institutions, every dollar that NIH spends on research leads to $2.56 of new economic activity. For the 2024 fiscal year, this means that of the $47.35 billion Congress appropriated for NIH, the $36.94 billion awarded to U.S. researchers fueled roughly $94 billion in activity through employment and the purchase of research-related goods and services.
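As a quick check, that figure follows from multiplying the awarded amount by the reported multiplier (my own arithmetic, using only the numbers cited above):

\[
\$36.94\ \text{billion} \times 2.56 \approx \$94.6\ \text{billion},
\]

which rounds to the roughly $94 billion in new economic activity reported.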

Economist Pierre Azoulay and colleagues recently imagined an alternative history where NIH was 40% smaller and dispersed less money – a budget akin to current federal proposals. They argued that more than half of the drugs FDA approved since 2000 are tied to NIH-funded research that would have been cut under this scenario. This thought experiment underscores how valuable those basic research dollars are.

‘Last Week Tonight with John Oliver’ points out some seemingly outlandish basic research that has yielded surprising real-world applications.

Even seemingly out-of-touch or abstract studies may precede discoveries with major impact. Basic research into bee nectar foraging and movement around the colony, recently mentioned on “Last Week Tonight with John Oliver,” led to the development of an algorithm that distributes internet traffic between computer servers, which now powers the multibillion-dollar web-hosting industry. Learning about applications of research with visible societal impacts can help people understand and appreciate the role of funding in the scientific enterprise.

The Conversation

Ryan Summers receives funding from the National Science Foundation (NSF) and the National Institutes of Health (NIH). He is affiliated with the Association for Science Teacher Education (ASTE), NARST, which is a global organization for improving science education through research, and the National Science Teaching Association (NSTA).

ref. Science costs money – research is guided by who funds it and why – https://theconversation.com/science-costs-money-research-is-guided-by-who-funds-it-and-why-262587

Children can be systematic problem-solvers at younger ages than psychologists had thought – new research

Source: The Conversation – USA – By Celeste Kidd, Professor of Psychology, University of California, Berkeley

How do kids figure out how to sort things by order? Celeste Kidd

I’m in a coffee shop when a young child dumps out his mother’s bag in search of fruit snacks. The contents spill onto the table, bench and floor. It’s a chaotic – but functional – solution to the problem.

Children have a penchant for unconventional thinking that, at first glance, can look disordered. This kind of apparently chaotic behavior served as the inspiration for developmental psychologist Jean Piaget’s best-known theory: that children construct their knowledge through experience and must pass through four sequential stages, in the first two of which they lack the ability to use structured logic.

Piaget remains the GOAT of developmental psychology. He fundamentally and forever changed the world’s view of children by showing that kids do not enter the world with the same conceptual building blocks as adults, but must construct them through experience. No one before or since has amassed such a catalog of quirky child behaviors that researchers even today can replicate within individual children.

While Piaget was certainly correct in observing that children engage in a host of unusual behaviors, my lab recently uncovered evidence that upends some long-standing assumptions about the limits of children’s logical capabilities that originated with his work. Our new paper in the journal Nature Human Behaviour describes how young children are capable of finding systematic solutions to complex problems without any instruction.

Jean Piaget describes how children of different ages tackle a sorting task, with varying success.

Putting things in order

Throughout the 1960s, Piaget observed that young children rely on clunky trial-and-error methods rather than systematic strategies when attempting to order objects according to some continuous quantitative dimension, like length. For instance, a 4-year-old child asked to organize sticks from shortest to longest will move them around randomly and usually not achieve the desired final order.

Psychologists have interpreted young children’s inefficient behavior in this kind of ordering task – what we call a seriation task – as an indicator that kids can’t use systematic strategies in problem-solving until at least age 7.

Somewhat counterintuitively, my colleagues and I found that increasing the difficulty and cognitive demands of the seriation task actually prompted young children to discover and use algorithmic solutions to solve it.

Piaget’s classic study asked children to put visible items like wooden sticks in order by height. Huiwen Alex Yang, a psychology Ph.D. candidate who works on computational models of learning in my lab, cranked up the difficulty for our version of the task. With advice from our collaborator Bill Thompson, Yang designed a computer game that required children to use feedback clues to infer the height order of items hidden behind a wall.

The game asked children to order bunnylike creatures from shortest to tallest by clicking on their sneakers to swap their places. The creatures only changed places if they were in the wrong order; otherwise they stayed put. Because they could only see the bunnies’ shoes and not their heights, children had to rely on logical inference rather than direct observation to solve the task. Yang tested 123 children between the ages of 4 and 10.

Researcher Huiwen Alex Yang tests 8-year-old Miro on the bunny sorting task. The bunnies are hidden behind a wall with only their sneakers visible. Miro’s selections exemplify use of selection sort, a classic efficient sorting algorithm from computer science. Kidd Lab at UC Berkeley.

Figuring out a strategy

We found that children independently discovered and applied at least two well-known sorting algorithms. These strategies – called selection sort and shaker sort – are typically studied in computer science.
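The study’s click-by-click data isn’t reproduced here, but a minimal sketch in Python can make the idea concrete. It assumes the feedback rule described above – a clicked pair trades places only if it is out of order – and the function names are my own illustration, not the lab’s actual code:

import random

def attempt_swap(heights, order, i, j):
    # Feedback rule from the game: a clicked pair trades places only if
    # it is currently out of order; otherwise the creatures stay put.
    left, right = order[i], order[j]
    if heights[left] > heights[right]:
        order[i], order[j] = right, left
        return True   # the swap happened
    return False      # the pair stayed put

def sort_via_feedback(heights):
    # Selection-sort-style strategy: for each slot from left to right,
    # try swapping it with every slot to its right. A swap succeeds only
    # when the pair is out of order, so the shortest remaining creature
    # always settles into the leftmost unsorted slot -- and the player
    # never observes a single height directly.
    order = list(range(len(heights)))          # the on-screen arrangement
    for i in range(len(order)):
        for j in range(i + 1, len(order)):
            attempt_swap(heights, order, i, j)
    return order

hidden_heights = [random.random() for _ in range(6)]   # hidden behind the wall
print(sort_via_feedback(hidden_heights))               # indices, shortest to tallest

Shaker sort, the other strategy children discovered, instead sweeps back and forth through adjacent pairs; both approaches reach the sorted order using nothing but the swap-or-stay feedback.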

More than half the children we tested demonstrated evidence of structured algorithmic thinking, and at ages as young as 4 years old. While older kids were more likely to use algorithmic strategies, our finding contrasts with Piaget’s belief that children were incapable of this kind of systematic strategizing before 7 years of age. He thought kids needed to reach what he called the concrete operational stage of development first.

Our results suggest that children are actually capable of spontaneous logical strategy discovery much earlier when circumstances require it. In our task, a trial-and-error strategy could not work because the objects to be ordered were not directly observable; children could not rely on perceptual feedback.

Explaining our results requires a more nuanced interpretation of Piaget’s original data. While children may still favor apparently less logical solutions to problems during the first two Piagetian stages, it’s not because they are incapable of doing otherwise if the situation requires it.

A systematic approach to life

Algorithmic thinking is crucial not only in high-level math classes, but also in everyday life. Imagine that you need to bake two dozen cookies, but your go-to recipe yields only one dozen. You could go through all the steps of the recipe twice, washing the bowl in between, but you’d never do that because you know it would be inefficient. Instead, you’d double the ingredients and perform each step only once. Algorithmic thinking allows you to identify a systematic way of approaching the need for twice as many cookies, improving the efficiency of your baking.

Algorithmic thinking is an important capacity that’s useful to children as they learn to move and operate in the world – and we now know they have access to these abilities far earlier than psychologists had believed.

That children can engage with algorithmic thinking before formal instruction has important implications for STEM – science, technology, engineering and math – education. Caregivers and educators now need to reconsider when and how they give children the opportunity to tackle more abstract problems and concepts. Knowing that children’s minds are ready for structured problems as early as preschool means we can nurture these abilities earlier in support of stronger math and computational skills.

And have some patience next time you encounter children interacting with the world in ways that are perhaps not super convenient. As you pick up your belongings from a café floor, remember that it’s all part of how children construct their knowledge. Those seemingly chaotic kids are on their way to more obviously logical behavior soon.

The Conversation

Celeste Kidd receives funding from the National Science Foundation, the John Templeton Foundation, the Jacobs Foundation, and the Advanced Research and Invention Agency.

ref. Children can be systematic problem-solvers at younger ages than psychologists had thought – new research – https://theconversation.com/children-can-be-systematic-problem-solvers-at-younger-ages-than-psychologists-had-thought-new-research-266438

Virtual particles: How physicists’ clever bookkeeping trick could underlie reality

Source: The Conversation – USA – By Dipangkar Dutta, Professor of Nuclear Physics, Mississippi State University

Scientists imagine virtual particles popping in and out of existence to explain how forces transfer between particles. koto_feja/iStock via Getty Images

A clever mathematical tool known as virtual particles unlocks the strange and mysterious inner workings of subatomic particles. What happens to these particles within atoms would stay unexplained without this tool. The calculations using virtual particles predict the bizarre behavior of subatomic particles with such uncanny accuracy that some scientists think “they must really exist.”

Virtual particles are not real – it says so right in their name – but if you want to understand how real particles interact with each other, they are unavoidable. They are essential tools to describe three of the forces found in nature: electromagnetism, and the strong and weak nuclear forces.

Real particles are lumps of energy that can be “seen” or detected by appropriate instruments; this feature is what makes them observable, or real. Virtual particles, on the other hand, are a sophisticated mathematical tool and cannot be seen. Physicist Richard Feynman invented them to describe the interactions between real particles.

But many physicists are not convinced by this cut-and-dried distinction. Although researchers can’t detect these virtual particles, as tools of calculation they predict many subtle effects that ultrasensitive experiments have confirmed to a mind-boggling 12 decimal places. That precision is like measuring the distance between the North and South poles to better than the width of a single hair.
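A rough check of that comparison, using round numbers of my own: the pole-to-pole distance along the Earth’s surface is about 20,000 kilometers, so precision at the 12th decimal place corresponds to

\[
10^{-12} \times 2 \times 10^{7}\ \text{m} = 2 \times 10^{-5}\ \text{m} = 20\ \mu\text{m},
\]

which is indeed finer than the width of a typical human hair (roughly 70 micrometers).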

This level of agreement between measurements and calculations makes virtual particles the most thoroughly vetted idea in science. It forces some physicists to ask: Can a mathematical tool become real?

Virtual particles help scientists follow the interactions between particles.

A bookkeeping tool

Virtual particles are the tool that physicists use to calculate how forces work in the microscopic subatomic world. The forces are real because they can be measured.

But instead of trying to calculate the forces directly, physicists use a bookkeeping system where short-lived virtual particles carry the force. Not only do virtual particles make the calculations more manageable, they also resolve a long-standing problem in physics: How does a force act across empty space?

Virtual particles exploit the natural fuzziness of the subatomic world: if these ephemeral particles live briefly enough, they can borrow their energy from empty space. The haziness of the energy balance hides this brief imbalance, which allows the virtual particles to influence the real world.
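The “fuzziness” invoked here is the energy-time uncertainty relation of quantum mechanics, usually written as

\[
\Delta E \,\Delta t \gtrsim \frac{\hbar}{2}.
\]

Loosely speaking, the larger the energy a virtual particle borrows, the shorter the time it can exist before the books must balance again.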

One big advantage of this tool is that the mathematical operations describing the forces between particles can be visualized as diagrams. They tend to look like stick-figure cartoons of particle pingpong played with virtual particles. The diagrams – dubbed Feynman diagrams – offer an excellent intuitive framework, but they also give virtual particles an aura of reality that is deceiving.

Feynman diagrams help physicists calculate particle interactions.

Amazingly, this virtual particle-based method for calculation produces some of the most precise predictions in all of science.

Reality check

All matter is made of basic building blocks called atoms. Atoms, in turn, have a core of small positively charged particles called protons (along with uncharged neutrons), surrounded by even smaller negatively charged particles called electrons.

As a professor of physics and astronomy at Mississippi State University, I perform experiments that often rely on the idea that the electrons and protons seen in our instruments interact by swapping virtual particles. My colleagues and I have recently measured the size of the proton very precisely, by bombarding hydrogen atoms with a beam of electrons. This measurement assumes that the electrons can “feel” the proton at the center of the hydrogen atom by exchanging virtual photons: particles of electromagnetic energy.

Physicists use virtual particles to calculate how two electrons repel each other, with exquisite precision. The forces involved are represented as the accumulated effect of the two electrons trading virtual photons.

When two metal plates are placed extremely close together in a vacuum, they attract each other: This is known as the Casimir effect. Physicists can accurately calculate the force that pulls the plates together using virtual particle mathematics. Whether the virtual particles are really there or not, the math predicts exactly what researchers observe in the real world.
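For the idealized case of two perfectly conducting parallel plates a distance d apart, the textbook result of this virtual-particle calculation is an attractive pressure of

\[
\frac{F}{A} = -\frac{\pi^{2} \hbar c}{240\, d^{4}},
\]

where ħ is the reduced Planck constant and c is the speed of light. The d⁴ in the denominator is why the attraction becomes measurable only when the plates are extremely close together.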

Virtual particles can help explain how black holes act. SXS, CC BY-ND

Yet another mysterious prediction made using the virtual particle tool kit is so-called Hawking radiation. When virtual particle pairs pop into existence at the edge of a black hole, the black hole’s gravity sometimes grabs one partner while the other escapes. This loss causes the black hole to slowly evaporate. Hawking radiation has not yet been directly observed, although researchers have recently found indirect evidence for it.

Useful fiction

Let’s circle back to the question: Can a mathematical tool become real? If you can perfectly predict everything about a force by imagining it is carried by virtual particles, do these particles qualify as real? Does their fictional status matter?

Physicists remain divided on these questions. Some prefer to “just shut up and calculate” – a famous quip often attributed to Feynman. For now, virtual particles are our best way to describe how particles behave. But researchers are developing alternative methods that do not need them at all.

If successful, these approaches could make virtual particles vanish for good. Successful or not, the fact that alternatives exist at all suggests virtual particles might be useful fiction rather than physical truth. It also fits the pattern of previous revolutions in science – the example of ether comes to mind. Physicists invented ether as a medium through which light waves traveled. Experiments matched well with calculations using this tool, yet they could not actually detect it. Eventually, Einstein’s theory of relativity showed it was unnecessary.

Virtual particles are a striking paradox of modern physics. They shouldn’t exist, yet they are indispensable for calculating everything from the strength of magnets to the behavior of black holes. They represent a profound dilemma: Sometimes the best insights into reality come through carefully constructed illusion. In the end, confusion around virtual particles may be just the price of understanding fundamental forces.

The Conversation

Dipangkar Dutta receives funding from US Dept. of Energy and NSF.

ref. Virtual particles: How physicists’ clever bookkeeping trick could underlie reality – https://theconversation.com/virtual-particles-how-physicists-clever-bookkeeping-trick-could-underlie-reality-264739

History is repeating itself at the FBI as agents resist a director’s political agenda

Source: The Conversation – USA – By Douglas M. Charles, Professor of History, Penn State

FBI Director Kash Patel is sworn in to testify before the Senate Judiciary Committee on Sept. 16, 2025, in Washington, D.C. Chip Somodevilla/Getty Images

Three converging events in the 1970s – the Watergate scandal, the chaotic U.S. withdrawal from the Vietnam War and revelations that FBI Director J. Edgar Hoover had abused his power to persecute people and organizations he viewed as political enemies – destroyed what formerly had been near-automatic trust in the presidency and the FBI.

In response, Congress enacted reforms designed to ensure that legal actions by the Department of Justice and the FBI, the department’s main investigative arm, would be insulated from politics. These included stronger congressional oversight, a 10-year term limit for FBI directors and investigative guidelines issued by the attorney general.

Some of these measures, however, were tenuous. For example, Justice Department leaders could alter FBI investigative guidelines at any time.

Donald Trump’s first presidential term seriously tested DOJ and FBI independence – notably, when Trump fired FBI Director James Comey in May 2017. Trump claimed Comey mishandled a 2016 probe into Democratic presidential nominee Hillary Clinton’s private email server, but Comey also refused to pledge loyalty to the president.

Now, in Trump’s second term, prior guardrails have vanished. The president has installed loyalists at the DOJ and FBI who are dedicated to implementing his political interests.

A lawsuit filed by three former FBI officials fired by the Trump administration asserts that the bureau is being politicized and is supporting Trump’s agenda.

As a historian of the FBI, I recognize the FBI has had only one other overtly political director in the past 50 years: L. Patrick Gray, who served for a year under President Richard Nixon. Gray was held accountable after he tried to help Nixon end the FBI’s Watergate investigation. Whether Trump’s current director, Kash Patel, has more staying power is unclear.

After Hoover

Ever since Hoover’s death in 1972, presidents have typically nominated independent candidates with bipartisan support and law enforcement roots to run the FBI. Most nominees have been judges, senior prosecutors or former FBI or Justice Department officials.

While Hoover publicly proclaimed his FBI independent of politics, he sometimes did the bidding of presidents, including Nixon. Still, Nixon felt that Hoover had not been compliant enough, so in 1972 he selected Gray, a longtime friend and assistant attorney general, to be Hoover’s successor.

Gray took steps to move the bureau out of Hoover’s shadow. He relaxed strict dress codes for agents, recruited female agents and pointedly hired people from outside the agency – who were not indoctrinated in the Hoover culture – for administrative posts.

Gray asserted his authority with blunt force. FBI agents at field offices and at headquarters who resisted Gray’s power were censured, fired or transferred. Other senior officials opted to leave, including the bureau’s top fraud expert, cryptanalyst and skyjacking expert, and the head of its National Crime Information Center.

Agents regarded these moves as a purge, and press reports claimed that bureau morale was at an all-time low, charges that Gray denied. According to FBI Associate Director Mark Felt, who became Gray’s second in command, 10 of 16 top FBI officials chose to retire, most of them notable Hoover men.

Gray surrounded himself with what journalist Jack Anderson called “sharp, but inexperienced, modish, young aides.” FBI insiders called these new hires the “Mod Squad,” a reference to the counterculture TV police series.

Attorney L. Patrick Gray meets with reporters at the White House after his selection by President Richard Nixon as FBI acting director on May 3, 1972. Bettman via Getty Images

Gray helps Nixon

In contrast to Hoover, who had rarely left FBI headquarters and publicly avoided politics, Gray openly stumped for Nixon in the 1972 campaign. He was so rarely spotted at FBI headquarters that bureau insiders dubbed him “Two-Day Gray.” At the request of Nixon aide John Ehrlichman, Gray told field offices to help Nixon campaign surrogates by providing local crime information.

Gray cooperated with Nixon to stymie the FBI’s investigation of the 1972 Watergate break-in and the ensuing cover-up. He provided raw FBI investigative documents to the White House and burned documents from Watergate conspirator E. Howard Hunt’s White House safe.

When Nixon had CIA Deputy Director Vernon Walters ask Gray, in the name of national security, to halt the FBI’s investigation, Felt and other agency insiders demanded that Gray get this order in writing. The White House backed down, but Nixon’s directive had been recorded. That tape became the so-called “smoking gun” evidence of a Watergate cover-up.

Felt, in classic Hoover fashion, then leaked information to discredit Gray, hoping to replace him. Gray resigned in disgrace.

While Felt never got the top job, he is now remembered as the prized anonymous source “Deep Throat,” who helped Washington Post reporters Bob Woodward and Carl Bernstein in their Pulitzer Prize-winning Watergate investigation. But it was internal FBI resistance, from Felt and agents at lower levels, that led to Gray’s departure.

After the Democratic National Committee headquarters at Washington, D.C.’s Watergate complex was burgled in June 1972, the FBI was charged with investigating the break-in – even as Director L. Patrick Gray tried to subvert his own agency’s investigation.

Political from the start

Campaigning in 2024, Donald Trump vowed to “root out” his political opponents from government. Realizing he was a target because of the FBI’s investigation of the attack on the U.S. Capitol on Jan. 6, 2021, FBI Director Christopher Wray, whom Trump had nominated in 2017, resigned in December 2024 before Trump could fire him.

In Wray’s place Trump nominated loyalist Kash Patel, a lawyer who worked as a low-level federal prosecutor from 2013 to 2016 and then as a deputy national security appointee during Trump’s first term.

Patel publicly supported Trump’s vow to purge enemies and claimed the FBI was part of a “deep state” that was resistant to Trump. Patel promised to help dismantle this disloyal core and to “rebuild public trust” in the FBI.

Even before Patel was confirmed on Feb. 20, 2025, in a historically close 51-49 vote, the Justice Department began transferring thousands of agents away from national security matters to immigration duty, which was not a traditional FBI focus.

Hours after taking office, Patel shifted 1,500 agents and staff from FBI headquarters to field offices, claiming that he was streamlining operations.

Patel installed outsider Dan Bongino as deputy director. Bongino, another Trump loyalist, was a former New York City policeman and Secret Service agent who had become a full-time political commentator. He embraced a conspiracy theory positing the FBI was “irredeemably corrupt” and advocated “an absolute housecleaning.”

In February, New York City Special Agent in Charge James Dennehy told FBI staff “to dig in” and oppose expected and unprecedented political intrusions. He was forced out by March.

Patel then used lie-detector tests and carried out a string of high-profile firings of agents who had investigated either Trump or the Jan. 6, 2021, insurrection. Some agents who were fired had been photographed kneeling during a 2020 racial justice protest in Washington, D.C. – an action they said they took to defuse tensions with protesters.

In response, three fired agents are suing Patel for what they call a political retribution campaign. Ex-NFL football player Charles Tillman, who became an FBI agent in 2017, resigned in September 2025 in protest of Trump policies. Once again, there are assertions of a purge.

Will Patel be held accountable?

Patel’s actions as director so far illustrate that he is willing to use his position to implement the president’s political designs. When Gray tried to do this in the 1970s, accountability still held force, and Gray left office in disgrace. Gray participated in a cover-up of illegal behavior that became the subject of an impeachment proceeding. What Patel has done to date, at least what we know about, is not the equivalent – so far.

Today, Patel’s tenure rests solely upon pleasing the president. If formal accountability – a key element of a democracy – is to survive, it will have to come from Congress, whose Republican majority has so far not exercised its power to hold Trump or his administration accountable. Short of that, perhaps internal resistance within the administration or pressure from the public and the media might serve the oversight function that Congress, over the past eight months, has abrogated.

The Conversation

Douglas M. Charles does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. History is repeating itself at the FBI as agents resist a director’s political agenda – https://theconversation.com/history-is-repeating-itself-at-the-fbi-as-agents-resist-a-directors-political-agenda-265637

Florida’s 1,100 natural springs are under threat – a geographer explains how to restore them

Source: The Conversation – USA (2) – By Christopher F. Meindl, Associate Professor of Geography, University of South Florida

Gilchrist Blue Springs, located about 20 miles northwest of Gainesville, Fla., is a popular recreation site known for the clarity of its water. Christopher Meindl, CC BY

“Behold … a vast circular expanse before you, the waters of which are so extremely clear as to be absolutely diaphanous or transparent as the ether.”

Naturalist William Bartram wrote these words in the 18th century as he gazed in wonder at Salt Springs, located in Ocala National Forest in what is now Marion County, Florida.

Springs are points where groundwater emerges at the earth’s surface, and Florida boasts more than 1,100 of them. North and central Florida are home to one of the largest concentrations of freshwater springs in the world.

Many of these springs provide a home to a variety of wild animals and plants. But they are also canaries in the coal mine for Florida’s groundwater system, because they draw upon the same groundwater that many Floridians depend on for drinking water, farm irrigation and industrial use.

Right now, many Florida springs suffer from reduced flow and habitat loss, as well as excessive algae and heavy pressure from human use. Because most of the state’s springs are not monitored by any research institution or government agency, the full scope of the problem remains unclear.

The state Legislature has designated 30 Outstanding Florida Springs whose health must be protected under the Florida Springs and Aquifer Protection Act of 2016. But 24 of the 30 were impaired by pollution – primarily nitrogen – at the time of this designation, and today, their condition has not improved.

In 2025, 26 of the 30 – the same 24 springs, plus two more – have been found to be impaired.

According to multiple reports and my own observation, many other popular springs are impaired by pollution as well. Since 2011, the state of Florida has spent roughly US$357 million on springs restoration.

As a geography professor, I study springs in the context of people and their use of water. My research has taught me that Florida’s springs vary based on location and local circumstances. Because of this, I believe reviving their health will require several multidimensional solutions.

Recalling healthy springs

What should a healthy spring look like? The answer to this can be harder to articulate than you might think. Many springs feature a visible boil at the water surface above the spring vent, crystal clear water, submerged grasses waving in the current, and a range of fish, turtles, snails and other aquatic animals hiding in the grasses.

Yet because many springs are changing slowly, changes in flow and water clarity can go unnoticed. Some scientists call this the shifting baseline syndrome: Each generation perceives springs in a slightly more degraded state, but absent prior observations, we assume that what we see is “normal.”

Fortunately, in the case of Florida springs, historical observations from naturalists and area residents give scientists clues going back centuries.

When Bartram visited Manatee Springs near Chiefland and the Suwannee River in the Big Bend in 1774, he wrote that the spring’s flow was “astonishing” and that “it is impossible to keep the boat or any other floating vessel over the fountain.”

Similarly, senior citizens who grew up in north central Florida in the early 20th century told writer P.C. Zick that spring flow at Ichetucknee Springs was once so strong that they could hear the spring boil before getting close enough to see it.

Both springs’ boils are noticeable today, but they are clearly not what they used to be.

When naturalist John James Audubon visited Volusia County’s De Leon Springs in 1832, he found that “The water was quite transparent, although of dark color.” And Bartram wrote of Salt Springs that the water was so clear, he thought he could reach out and touch fish that were 20 to 30 feet below the surface.

Water clarity in thriving springs fosters plenty of submerged grasses soaking up sunshine, along with a wide variety and large number of fish and other aquatic animals that depend on this vegetation. Bartram wrote that he spotted gar, trout, bream, “the barbed catfish, dreaded sting-ray, skate and flounder, spotted bass, sheeps head and ominous drum” at Salt Springs.

This 1925 photograph shows Sulphur Springs, a vibrant recreation attraction in the heart of Tampa. State Archives of Florida/Burgert Brothers, CC BY

Sadly, Sulphur Springs is a cautionary tale. Area sinkholes began feeding contaminated urban runoff to the spring in the mid-20th century, leading Tampa authorities to close the spring to swimming in 1986. This photo was taken in May 2025. Christopher Meindl, CC BY

A multifaceted problem

Many Florida springs and their runs now suffer reduced flow, wear and tear from hundreds of thousands of well-meaning visitors, and excess algae.

And while some Florida springs, such as Polk County’s Kissingen Springs, have completely dried up, many more produce less flow than they used to.

It is easy to assume that bottled water companies are the reason for seriously reduced spring flows, and in at least one case, bottling spring water has raised concerns of overuse.

Yet a state report published in 2021 that examined water-bottling operations associated with springs found that bottlers were permitted to extract just over 5 million gallons per day from Florida’s springs – a tiny fraction, roughly 0.2%, of the 2.3 billion gallons of groundwater pumped each day from the Floridan Aquifer, which provides drinking water for more than 10 million people in the southeastern United States.

The most problematic reductions in spring flow are from significant groundwater pumping for agricultural irrigation, heavy urban, mining or industrial water use, or in some cases a long-term rainfall deficit. Various springs suffer from one or more of these problems.

In addition, as Florida’s population and tourism have grown, so have the number of visitors to the state’s most popular springs. In 2019, Florida springs attracted more than 4 million visitors. During the summer, especially on weekends, some springs are so crowded that staff members have to turn away visitors. And in winter, springs that attract manatees can be equally crowded.

In shallow portions of springs and spring runs, this means thousands of happy feet trample and destroy vegetation. And when submerged grasses disappear, so do the aquatic animals that rely on them for food.

Wacissa Springs is the head of the Wacissa River, which flows from just outside Tallahassee into the Gulf of Mexico. Matthew Zorn, CC BY

Unwanted algae

Finally, there is the mystery of excess algae. Algae naturally occurs in most springs, but today, many springs have so much that it clouds the water, or they have stringy filamentous algae that blankets the soil and rocks around a spring and along its run. Still others have algae that sticks to submerged aquatic plants, blocking vital sunlight.

The predominant narrative among many springs scientists, advocates and government officials is that rising nitrate levels in springs over the past few decades fuel the growth of excess algae. Nitrate, a form of nitrogen, is a plant nutrient.

Yet other scientists have suggested that reduced spring discharge creates slower-moving water, which loses its ability to push excess algae away.

Another hypothesis is that if dissolved oxygen levels temporarily fall below a certain threshold, it can kill off the snails and other animals that graze on the algae and keep it in check.

A balanced restoration plan

More than two-thirds of state-funded springs restoration projects over the past decade have been for some form of enhanced sewage treatment. This is because excess nitrogen is assumed to be the cause of excess algae in Florida springs, and Florida farmers are presumed to be in compliance with water quality regulations if they implement best management practices.

Enhanced sewage treatment is a good thing, especially in cases where human waste is clearly a pressing problem. In some cases, investing in advanced sewage treatment, shifting landowners from septic systems to sewage treatment plants or even enhanced treatment of storm water before it sinks into the ground clearly benefits springs.

However, shifting people from septic tanks to central sewage treatment is expensive. Based on the evidence and my own observations of various springs within Florida’s landscape, I believe that many springs need more than this single solution.

Some need shoreline stabilization to prevent erosion or rules that reduce human pressure on spring vegetation. Others need algae or sediment removed and native vegetation reintroduced.

In still other cases, it would help to purchase property to prevent harmful development or to retire farmland. And in nearly every case, the springs would benefit from Florida residents and businesses reducing water and fertilizer use.

And restoring and maintaining the health of Florida’s 1,100 springs will require further study to tailor appropriate interventions to each one.

The Conversation

Christopher F. Meindl does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Florida’s 1,100 natural springs are under threat – a geographer explains how to restore them – https://theconversation.com/floridas-1-100-natural-springs-are-under-threat-a-geographer-explains-how-to-restore-them-263704

What past education technology failures can teach us about the future of AI in schools

Source: The Conversation – USA (2) – By Justin Reich, Professor of Digital Media, Massachusetts Institute of Technology (MIT)

Teachers need to be scientists themselves, experimenting and measuring the impact of powerful AI products on education. Hyoung Chang via Getty Images

American technologists have been telling educators to rapidly adopt their new inventions for over a century. In 1922, Thomas Edison declared that in the near future, all school textbooks would be replaced by film strips, because text was 2% efficient, but film was 100% efficient. Those bogus statistics are a good reminder that people can be brilliant technologists, while also being inept education reformers.

I think of Edison whenever I hear technologists insisting that educators have to adopt artificial intelligence as rapidly as possible to get ahead of the transformation that’s about to wash over schools and society.

At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for their students. The first districts to encourage students to bring mobile phones to class did not better prepare youth for the future than schools that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet stand apart in economic growth, educational attainment or citizen well-being.

New education technologies are only as powerful as the communities that guide their use. Opening a new browser tab is easy; creating the conditions for good learning is hard.

It takes years for educators to develop new practices and norms, for students to adopt new routines, and for families to identify new support mechanisms in order for a novel invention to reliably improve learning. But as AI spreads through schools, both historical analysis and new research conducted with K-12 teachers and students offer some guidance on navigating uncertainties and minimizing harm.

We’ve been wrong and overconfident before

I started teaching high school history students to search the web in 2003. At the time, experts in library and information science developed a pedagogy for web evaluation that encouraged students to closely read websites looking for markers of credibility: citations, proper formatting, and an “about” page. We gave students checklists like the CRAAP test – currency, relevance, authority, accuracy and purpose – to guide their evaluation. We taught students to avoid Wikipedia and to trust websites with .org or .edu domains over .com domains. It all seemed reasonable and evidence-informed at the time.

The first peer-reviewed article demonstrating effective methods for teaching students how to search the web was published in 2019. It showed that novices who used these commonly taught techniques performed miserably in tests evaluating their ability to sort truth from fiction on the web. It also showed that experts in online information evaluation used a completely different approach: quickly leaving a page to see how other sources characterize it. That method, now called lateral reading, resulted in faster, more accurate searching. The work was a gut punch for an old teacher like me. We’d spent nearly two decades teaching millions of students demonstrably ineffective ways of searching.

Today, there is a cottage industry of consultants, keynoters and “thought leaders” traveling the country purporting to train educators on how to use AI in schools. National and international organizations publish AI literacy frameworks claiming to know what skills students need for their future. Technologists invent apps that encourage teachers and students to use generative AI as tutors, as lesson planners, as writing editors, or as conversation partners. These approaches have about as much evidential support today as the CRAAP test did when it was invented.

There is a better approach than making overconfident guesses: rigorously testing new practices and strategies and only widely advocating for the ones that have robust evidence of effectiveness. As with web literacy, that evidence will take a decade or more to emerge.

But there’s a difference this time. AI is what I have called an “arrival technology.” AI is not invited into schools through a process of adoption, like buying a desktop computer or smartboard – it crashes the party and then starts rearranging the furniture. That means schools have to do something. Teachers feel this urgently. Yet they also need support: Over the past two years, my team has interviewed nearly 100 educators from across the U.S., and one widespread refrain is “don’t make us go it alone.”

3 strategies for prudent path forward

While waiting for better answers from the education science community, which will take years, teachers will have to be scientists themselves. I recommend three guideposts for moving forward with AI under conditions of uncertainty: humility, experimentation and assessment.

First, regularly remind students and teachers that anything schools try – literacy frameworks, teaching practices, new assessments – is a best guess. In four years, students might hear that what they were first taught about using AI has since proved to be quite wrong. We all need to be ready to revise our thinking.

Second, schools need to examine their students and curriculum, and decide what kinds of experiments they’d like to conduct with AI. Some parts of your curriculum might invite playfulness and bold new efforts, while others deserve more caution.

In our podcast “The Homework Machine,” we interviewed Eric Timmons, a teacher in Santa Ana, California, who teaches elective filmmaking courses. His students’ final assessments are complex movies that require multiple technical and artistic skills to produce. An AI enthusiast, Timmons uses AI to develop his curriculum, and he encourages students to use AI tools to solve filmmaking problems, from scripting to technical design. He’s not worried about AI doing everything for students: As he says, “My students love to make movies. … So why would they replace that with AI?”

It’s among the best, most thoughtful examples of an “all in” approach that I’ve encountered. I also can’t imagine recommending a similar approach for a course like ninth grade English, where the pivotal introduction to secondary school writing probably should be treated with more cautious approaches.

Third, when teachers do launch new experiments, they should recognize that local assessment will happen much faster than rigorous science. Every time schools launch a new AI policy or teaching practice, educators should collect a pile of related student work that was developed before AI was used during teaching. If you let students use AI tools for formative feedback on science labs, grab a pile of circa-2022 lab reports. Then, collect the new lab reports. Review whether the post-AI lab reports show an improvement on the outcomes you care about, and revise practices accordingly.

Between local educators and the international community of education scientists, people will learn a lot by 2035 about AI in schools. We might find that AI is like the web, a place with some risks but ultimately so full of important, useful resources that we continue to invite it into schools. Or we might find that AI is like cellphones, and the negative effects on well-being and learning ultimately outweigh the potential gains, and thus are best treated with more aggressive restrictions.

Everyone in education feels an urgency to resolve the uncertainty around generative AI. But we don’t need a race to generate answers first – we need a race to be right.

The Conversation

Justin Reich has received funding from Google, Microsoft, Apple, the Bill and Melinda Gates Foundation, the Chan/Zuckerberg Initiative, the Hewlett Foundation, education publishers, and other organizations that are involved in technology and schools.

ref. What past education technology failures can teach us about the future of AI in schools – https://theconversation.com/what-past-education-technology-failures-can-teach-us-about-the-future-of-ai-in-schools-265172

As an OB-GYN, I see firsthand how misleading statements on acetaminophen leave expectant parents confused, fearful and lacking in options

Source: The Conversation – USA (3) – By Tami S. Rowen, Associate Professor of Obstetrics, Gynecology and Gynecologic Surgery, University of California, San Francisco

About 20% of patients report experiencing a fever during pregnancy. John Fedele/Tetra images via Getty Images Plus

When President Donald Trump adamantly proclaimed in a press conference on Sept. 22, 2025, that pregnant women should not take Tylenol, I immediately thought about my own experiences during my second labor. While pushing for nearly three hours, I developed an infection in my uterus called chorioamnionitis, which occurs when bacteria infect the uterus, placenta and sometimes the baby’s bloodstream. I had a fever, and my baby’s heart rate was significantly elevated.

I remember feeling delirious; my colleague and friend, while delivering my baby, said she had never seen me in such a state. I couldn’t focus on pushing. I felt faint, and I worried about my baby.

And I remember the incredible relief that acetaminophen, the active ingredient in Tylenol, brought me when it lowered my fever and decreased my and my baby’s heart rate. After taking it, I was able to push with confidence and welcome my healthy daughter, who is now 7 and thriving.

As a practicing obstetrician and medical researcher with nearly two decades of experience taking care of pregnant patients, I have to make a dozen decisions about acetaminophen use on any given day when I am working in the hospital. I have examined the data as a researcher, clinician and educator. Central to our jobs is balancing the risks and benefits of any treatments.

The president’s words will not change how I practice, but I worry they will sow confusion in my patients and create fear of potential lawsuits for all practicing health care providers.

The American College of Obstetricians and Gynecologists, the leading organization that guides medical decisions on pregnancy and childbirth, has reiterated the safety and efficacy of acetaminophen use during pregnancy in light of the confusion surrounding Trump’s claims.

Mixed messages

I first looked into the data on the possible links between acetaminophen and developmental disorders a few years ago, when I received a call from a woman who had recently learned she was pregnant and had caught the flu from her toddler. She was concerned that Tylenol was dangerous for her developing baby.

Some studies do suggest links between acetaminophen use in pregnancy and neurodevelopmental disorders such as attention deficit hyperactivity disorder and autism. But those studies have two crucial limitations.

First, they cannot pin down whether acetaminophen use during pregnancy itself was associated with the neurodevelopmental conditions in the child, or whether the fevers and other symptoms that led people to take the painkiller were playing a role in the outcome. Second, because they are based on statistical associations rather than controlled experiments, they cannot show cause and effect.

Since it is both unethical and infeasible to perform a controlled study evaluating the actual risks of acetaminophen use, the best proxy for controlling for environmental and genetic factors is to look at maternal exposure to acetaminophen and the outcomes of more than one child within individual families.

That’s exactly what was done in a 2024 Swedish study that analyzed nearly 2.5 million children born from 1995 to 2019 in Sweden to mothers who had documented use of any medication during pregnancy. When looking at individual children, the researchers found up to a 5% increase in autism for those exposed to acetaminophen during pregnancy. However, when siblings were included in the analysis – controlling for environmental, medical and genetic factors that could have contributed – the small, elevated risk disappeared.
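The logic of that sibling comparison is worth pausing on, and a small simulation can make it visible. The sketch below is my own illustration with invented numbers, not the Swedish study’s data or method: a shared family-level factor raises both the chance of exposure and the chance of the outcome, so the whole-population comparison shows an apparent risk even though exposure does nothing, while the within-family comparison of exposure-discordant siblings shows none.

    # Invented-numbers illustration of confounding by shared family factors.
    import random

    random.seed(42)
    children = []  # one entry per family: two (exposed, outcome) siblings
    for _ in range(200_000):
        family_risk = random.random()  # shared genes and environment
        pair = []
        for _ in range(2):
            exposed = random.random() < 0.1 + 0.4 * family_risk
            # Outcome depends only on family_risk, never on exposure.
            outcome = random.random() < 0.005 + 0.02 * family_risk
            pair.append((exposed, outcome))
        children.append(pair)

    def rate(kids):
        return sum(o for _, o in kids) / len(kids)

    everyone = [kid for pair in children for kid in pair]
    exp = [k for k in everyone if k[0]]
    unexp = [k for k in everyone if not k[0]]
    print(f"population:    exposed {rate(exp):.4f} vs unexposed {rate(unexp):.4f}")

    # Sibling comparison: families where exactly one sibling was exposed.
    disc_exp = [k for a, b in children if a[0] != b[0] for k in (a, b) if k[0]]
    disc_unexp = [k for a, b in children if a[0] != b[0] for k in (a, b) if not k[0]]
    print(f"within-family: exposed {rate(disc_exp):.4f} vs unexposed {rate(disc_unexp):.4f}")

Because siblings share the family-level factor, comparing them with each other strips it out; any remaining gap would have to come from the exposure itself.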

A 2024 Swedish study found that when siblings were taken into account, the association between acetaminophen use and autism became insignificant.
MoMo Productions/DigitalVision via Getty Images

Fever during pregnancy is dangerous for mother and baby

There are many important reasons why doctors like me may recommend acetaminophen to a pregnant patient. One pregnant patient I treated who had the flu was so sick that she was septic, meaning an infection had spread throughout her body. Her 103-degree fever and dangerously low blood pressure threatened her and her fetus’s life.

My colleagues and I did not hesitate to treat her with acetaminophen. Our goal was to bring down not only her body temperature but also the fetus’s heart rate, since a high heart rate can place dangerous stress on the fetus. I shudder at the thought of what would have happened to her and her baby had she been denied this medication, or had she been afraid to use it as a result of hearing a statement from Trump and his health officials.

Fevers are very common during pregnancy, with about 20% of patients reporting they experienced one.

In fact, the evidence for a connection between fevers during pregnancy and autism is far stronger than anything connecting acetaminophen and autism. Recurrent fevers during pregnancy can increase the risk of autism by up to 300%, particularly in pregnant patients with severe or prolonged infections. This is especially true for patients sick enough to require hospitalization, as most of my seriously ill patients are.

Repeated fevers during pregnancy can greatly increase the risk of autism in the child.
Iuliia Burmistrova/Moment via Getty Images

Pain during pregnancy

Beyond fevers, which can occur throughout pregnancy as well as during delivery, as I experienced myself, pregnant patients may need to manage pain, which can arise for any number of reasons over the course of nine months. Pregnant people suffer from kidney stones, appendicitis or dental cavities that require a root canal, just like people who are not pregnant. Up to 70% of pregnant people experience back pain, which can leave them unable to carry out normal daily activities or care for their children. Should they be denied pain relief and told to tough it out?

The safest and most strongly recommended pain reliever for them is acetaminophen.

Other pain-relieving options such as nonsteroidal anti-inflammatory drugs, or NSAIDs, like ibuprofen, are generally off-limits during pregnancy because they can cause premature closure of the ductus arteriosus, an important blood vessel in the fetal heart, as well as low amniotic fluid and other complications. Opioids carry the risk of dependence and withdrawal in the newborn, not to mention the risk of addiction in the mother.

The ability to guide people through pregnancy, childbirth and beyond is, for me, the most intimate and fulfilling part of medicine. The anxiety and fear that people bring to my office and to the delivery room over the many uncertainties of pregnancy and childbirth are palpable and legitimate.

That’s why it is critical that all recommendations are sound and evidence-based, with a clear understanding of the nuances and limitations of research studies. I know every time I look at my children I think of everything I can do to keep them safe, and I wonder what I could have done in the past to prevent any problems we currently face. We owe it to parents like me and all future parents to give them the most honest and scientific information possible.

The Conversation

Tami S. Rowen is an advisor for Roon, a health education company, and a health consultant for MCG, a health guidelines company.

ref. As an OB-GYN, I see firsthand how misleading statements on acetaminophen leave expectant parents confused, fearful and lacking in options – https://theconversation.com/as-an-ob-gyn-i-see-firsthand-how-misleading-statements-on-acetaminophen-leave-expectant-parents-confused-fearful-and-lacking-in-options-265947

How a drug essential to modern medicine was discovered on Easter Island

Source: The Conversation – France in French (3) – By Ted Powers, Professor of Molecular and Cellular Biology, University of California, Davis

The Rapa Nui people are virtually absent from the story of rapamycin’s discovery as it is usually told. Posnov/Moment/Getty

The discovery of rapamycin, a new antibiotic, on Easter Island in 1964 marked the beginning of a multibillion-dollar pharmaceutical success story. Yet that history has completely obscured the people and the political dynamics that made the identification of this “miracle drug” possible.

Named after Rapa Nui, the island’s Indigenous name, rapamycin was initially used as an immunosuppressant, to prevent the rejection of organ transplants and to improve the success rate of stent implantation (stents are small metal mesh tubes used to prop open the arteries in coronary artery disease, the progressive narrowing of the arteries that supply the heart).

Its use has since expanded to the treatment of various cancers, and researchers are now exploring its potential for managing diabetes and neurodegenerative diseases, and even for countering the harms of aging. Lately, studies highlighting rapamycin’s ability to extend lifespan or fight age-related diseases seem to appear almost daily. A query on PubMed, the biomedical literature search engine, returns more than 59,000 articles mentioning rapamycin, making it one of the most talked-about drugs in medicine.

Yet although rapamycin is everywhere in science and medicine, how it was discovered remains largely unknown to the public. As a scientist who has devoted his career to studying its effects on cells, I felt the need to better understand its history.

In that regard, the work of historian Jacalyn Duffin on the Medical Expedition to Easter Island (METEI), a scientific expedition mounted in the 1960s, has completely changed how many of my colleagues and I now view our field of research.

Uncovering rapamycin’s complicated legacy raises important questions about systemic biases in biomedical research, and about the debt pharmaceutical companies owe to the Indigenous lands from which they extract their flagship molecules.

Why such interest in rapamycin?

Rapamycin works by inhibiting a protein called the target of rapamycin kinase, or TOR, one of the main regulators of cell growth and metabolism. Together with its partner proteins, TOR controls how cells respond to nutrients, stress and environmental signals, influencing major processes such as protein synthesis and immune function.

Given its central role in these fundamental cellular activities, it is hardly surprising that TOR dysfunction can lead to cancers, metabolic disorders and age-related diseases.

Chemical structure of rapamycin.
Fvasconcellos/Wikimedia

Many specialists in the field know that the molecule was isolated in the mid-1970s by scientists at Ayerst Research Laboratories, from a soil sample containing the bacterium Streptomyces hygroscopicus. What is less well known is that this sample was collected during a Canadian mission called the Medical Expedition to Easter Island, or METEI, carried out in 1964 on Rapa Nui – Easter Island.

The history of METEI

The idea for the Medical Expedition to Easter Island took shape among a team of Canadian scientists that included the surgeon Stanley Skoryna and the bacteriologist Georges Nogrady. Their goal was to understand how an isolated population adapted to environmental stress, and they believed the planned construction of an international airport on Easter Island offered a unique opportunity to find out: by increasing the islanders’ contact with the outside world, the airport was likely to bring changes in their health and well-being.

Funded by the World Health Organization and supported logistically by the Royal Canadian Navy, METEI arrived on Rapa Nui in December 1964. Over three months, the team put nearly all of the island’s 1,000 inhabitants through a battery of medical examinations, collected biological samples and carried out a systematic inventory of the island’s flora and fauna.

As part of this work, Georges Nogrady gathered more than 200 soil samples, one of which turned out to contain the rapamycin-producing strain of Streptomyces bacteria.

The METEI logo: the word METEI written vertically between the backs of two moai heads, with the inscription “1964-1965 RAPA NUI INA KA HOA (Don’t give up the ship).”
Georges Nogrady, CC BY-NC-ND

It is important to understand that the expedition’s primary objective was to study the people of Rapa Nui, in a setting the researchers treated as an open-air laboratory. To encourage the islanders to participate, the researchers did not hesitate to use bribery, offering gifts, food and supplies. They also resorted to coercion, enlisting a Franciscan priest long stationed on the island to help with recruitment. Their intentions may have been honorable, but this is nonetheless an example of scientific colonialism, in which a team of white investigators chose to study a mostly nonwhite group without its involvement, creating a power imbalance. A bias was thus built into METEI from its very conception.

Moreover, several of the expedition’s starting assumptions rested on faulty premises. For one, the researchers supposed that the people of Rapa Nui had been relatively isolated from the rest of the world, when in fact there was a long history of contact with outsiders, documented in accounts dating from the early 18th century through the end of the 19th.

The METEI organizers also assumed that the Rapa Nui population was genetically homogeneous, ignoring the island’s complex history of migration, slavery and disease: some inhabitants were descendants of survivors of the African slave trade who had been returned to the island, bringing diseases such as smallpox with them. The modern population of Rapa Nui is in fact of mixed Polynesian, South American and even African ancestry.

This misjudgment undermined one of METEI’s key objectives: assessing the influence of genetics on disease risk. Although the team published a number of studies describing the island’s fauna, its failure to establish a baseline is probably one reason no follow-up study was conducted after the Easter Island airport was completed in 1967.

Giving credit where it is due

The omissions in the standard accounts of rapamycin’s origins reflect ethical blind spots that are common in how scientific discoveries are remembered.

Georges Nogrady brought soil samples back from Rapa Nui, one of which made its way to Ayerst Research Laboratories. There, Surendra Sehgal and his team isolated what came to be called rapamycin, which they eventually brought to market in the late 1990s as an immunosuppressant under the name Rapamune. Sehgal’s persistence is well known: it proved decisive in seeing the project through despite the upheavals then shaking the pharmaceutical company he worked for, and he went so far as to hide a bacterial culture at home. Yet neither Nogrady nor METEI was ever credited in the major scientific papers he published.

Although rapamycin has generated billions of dollars in revenue, the people of Rapa Nui have so far seen no financial benefit from it. This raises questions about Indigenous peoples’ rights and about biopiracy, the illegitimate appropriation of natural resources, and sometimes of associated cultural knowledge, often through intellectual property claims and to the detriment of others – in this context, the commercialization of Indigenous knowledge without compensation.

Agreements such as the 1992 United Nations Convention on Biological Diversity and the 2007 Declaration on the Rights of Indigenous Peoples aim to protect Indigenous claims to biological resources, urging all countries to obtain the consent and participation of the populations concerned, and to provide redress for potential harms, before undertaking such projects.

These principles, however, were not in force at the time of METEI.

The people of Rapa Nui have received little or no recognition for their role in the discovery of rapamycin.
Esteban Felix/AP Photo

Some argue that because the rapamycin-producing bacterium has since been found in soils other than Easter Island’s, the island’s soil was neither unique nor essential to the drug’s discovery. Others contend that because the islanders did not use rapamycin and did not know it existed on their island, the molecule was not a resource that could be “stolen.”

Yet the discovery of rapamycin on Rapa Nui laid the groundwork for all the research and commercialization that followed, and it was possible only because the population was the subject of the study mounted by the Canadian team. Formally recognizing the essential role the people of Rapa Nui played in rapamycin’s discovery, and raising public awareness of it, are essential steps toward compensating them in proportion to their contribution.

In recent years, the pharmaceutical industry has begun to recognize the importance of fairly compensating Indigenous contributions, and some companies have committed to reinvesting in the communities from which their valuable natural products come.

So far, however, the companies that profited directly from rapamycin have made no such gesture toward the Rapa Nui.

While the discovery of rapamycin has unquestionably transformed medicine, the METEI expedition’s consequences for the people of Rapa Nui are harder to weigh. In the end, the story is one of scientific triumph and social ambiguity alike.

I am convinced that the questions it raises – about biomedical consent, scientific colonialism and erased contributions – should push us to examine the legacies of major scientific discoveries far more critically than we have so far.

The Conversation

Ted Powers does not work for, consult for, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. How a drug essential to modern medicine was discovered on Easter Island – https://theconversation.com/comment-un-medicament-essentiel-de-la-medecine-moderne-a-ete-decouvert-sur-lile-de-paques-266381
