We are hardwired to sing − and it’s good for us, too

Source: The Conversation – USA – By Elinor Harrison, Faculty Affiliate, Philosophy-Neuroscience-Psychology, Washington University in St. Louis

Gospel choir director Clyde Lawrence performs at the New Orleans Jazz & Heritage Festival on April 25, 2025.
AP Photo/Gerald Herbert

On the first Sunday after being named leader of the Catholic Church in May 2025, Pope Leo XIV stood on the balcony of St. Peter’s Basilica in Rome and addressed the tens of thousands of people gathered. Invoking tradition, he led the people in noontime prayer. But rather than reciting it, as his predecessors generally did, he sang.

In chanting the traditional Regina Caeli, the pope inspired what some have called a rebirth of Gregorian chant, a type of monophonic and unaccompanied singing done in Latin that dates back more than a thousand years.

The Vatican has been at the forefront of that push, launching an online initiative to teach Gregorian chant through short educational tutorials called “Let’s Sing with the Pope.” The stated goals of the initiative are to give Catholics worldwide an opportunity to “participate actively in the liturgy” and to “make the rich heritage of Gregorian chant accessible to all.”

These goals resonated with me. As a performing artist and scientist of human movement, I spent the past decade developing therapeutic techniques involving singing and dancing to help people with neurological disorders. Much like the pope’s initiative, these arts-based therapies require active participation, promote connection, and are accessible to anyone. Indeed, not only is singing a deeply ingrained human cultural activity, research increasingly shows how good it is for us.

The same old song and dance

For 15 years, I worked as a professional dancer and singer. In the course of that career, I became convinced that creating art through movement and song was integral to my well-being. Eventually, I decided to shift gears and study the science underpinning my longtime passion by looking at the benefits of dance for people with Parkinson’s disease.

The neurological condition, which affects over 10 million people worldwide, is caused by neuron loss in an area of the brain that is involved in movement and rhythmic processing – the basal ganglia. The disease causes a range of debilitating motor impairments, including walking instability.

An older woman sings from her music sheet.
A woman sings as part of a chorus for Parkinson’s patients and their care partners at a YMCA in Hanover, Mass., on Feb. 13, 2019.
David L. Ryan/The Boston Globe via Getty Images

Early on in my training, I suggested that people with Parkinson’s could improve the rhythm of their steps if they sang while they walked. Even as we began publishing our initial feasibility studies, people remained skeptical. Wouldn’t it be too hard for people with motor impairment to do two things at once?

But my own experience of singing and dancing simultaneously since I was a child suggested it could be innate. While Broadway performers do this at an extremely high level of artistry, singing and dancing are not limited to professionals. We teach children nursery rhymes with gestures; we spontaneously nod our heads to a favorite song; we sway to the beat while singing at a baseball game. Although people with Parkinson’s typically struggle to do two tasks at once, perhaps singing and moving were such natural activities that they could reinforce each other rather than distract.

A scientific case for song

Humans are, in effect, hardwired to sing and dance, and we likely evolved to do so. In every known culture, evidence exists of music, singing or chanting. The oldest discovered musical instruments are ivory and bone flutes dating back over 40,000 years. Before people played music, they likely sang. The discovery of a 60,000-year-old hyoid bone shaped like a modern human’s suggests our Neanderthal relatives could sing.

In “The Descent of Man,” Charles Darwin speculated that a musical protolanguage, analogous to birdsong, was driven by sexual selection. Whatever the reason, singing and chanting have been integral parts of spiritual, cultural and healing practices around the world for thousands of years. Chanting practices, in which repetitive sounds are used to induce altered states of consciousness and connect with the spiritual realm, are ancient and diverse in their roots.

Though the evolutionary reasons remain disputed, modern science is increasingly validating what many traditions have long held: Singing and chanting can have profound benefits to physical, mental and social health, with both immediate and long-term effects.

Physically, the act of producing sound can strengthen the lungs and diaphragm and increase the amount of oxygen in the blood. Singing can also lower heart rate and blood pressure, reducing the risk of cardiovascular diseases.

Vocalizing can even improve your immune system, as active music participation can increase levels of immunoglobulin A, one of the body’s key antibodies to stave off illness.

Singing also improves mood and reduces stress.

Studies have shown that singing lowers cortisol levels, the primary stress hormone, in healthy adults and people with cancer or neurologic disorders. Singing may also rebalance autonomic nervous system activity by stimulating the vagus nerve and improving the body’s ability to respond to environmental stresses. Perhaps this is why singing has been called “the world’s most accessible stress reliever.”

A woman sings from a church podium.
A woman performs a Gregorian chant on Christmas Eve in 2023 in Spain.
Isaac Buj/Europa Press via Getty Images

Moreover, chanting may make you aware of your inner states while connecting you to something larger. Repetitive chanting, as is common in rosary recitation and yogic mantras, can induce a meditative state, fostering mindfulness and altered states of consciousness. Neuroimaging studies show that chanting activates brainwaves associated with suspension of self-oriented and stress-related thoughts.

Singing as community

Singing alone is one thing, but singing with others brings about a host of other benefits, as anyone who has sung in a choir can likely attest.

Group singing provides a mood boost and improves overall well-being. Increased levels of neurotransmitters such as dopamine, serotonin and oxytocin during singing may promote feelings of social connection and bonding.

When people sing in unison, they synchronize not just their breath but also their heart rates. Heart rate variability, a measure of the body’s adaptability to stress, also improves during group singing, whether you’re an expert or a novice.

In my own research, singing has proven useful in yet another way: as a cue for movement. Matching footfalls to one’s own singing is an effective tool for improving walking that is better than passive listening. Seemingly, active vocalization requires a level of engagement, attention and effort that can translate into improved motor patterns. For people with Parkinson’s, for example, this simple activity can help them avoid a fall. We have shown that people with the disease, in spite of neural degeneration, activate similar brain regions as healthy controls. And it works even when you sing in your head.

Whether you choose to sing with the pope or not, you don’t need a mellifluous voice like his to raise your voice in song. You can sing in the shower. Join a choir. Chant that “om” at the end of yoga class. Releasing your voice might be easier than you think.

And, besides, it’s good for you.

The Conversation

Elinor Harrison received funding from the National Institutes of Health, the National Endowment for the Arts and the Grammy Museum Foundation. She is affiliated with the International Association of Dance Medicine and Science and the Society for Music Perception and Cognition.

ref. We are hardwired to sing − and it’s good for us, too – https://theconversation.com/we-are-hardwired-to-sing-and-its-good-for-us-too-262861

DNA from soil could soon reveal who lived in ice age caves

Source: The Conversation – UK – By Gerlinde Bigga, Scientific Coordinator of the Leibniz Science Campus "Geogenomic Archaeology Campus Tübingen", University of Tübingen

The team at GACT has been analysing sediments from Hohle Fels cave in Germany.

The last two decades have seen a revolution in scientists’ ability to reconstruct the past. This has been made possible through technological advances in the way DNA is extracted from ancient bones and analysed.

These advances have revealed that Neanderthals and modern humans interbred – something that wasn’t previously thought to have happened. It has allowed researchers to disentangle the various migrations that shaped modern people. It has also allowed teams to sequence the genomes of extinct animals, such as the mammoth, and extinct agents of disease, such as defunct strains of plague.

While much of this work has been carried out by analysing the physical remains of humans or animals, there is another way to obtain ancient DNA from the environment. Researchers can now extract and sequence DNA (determine the order of “letters” in the molecule) directly from cave sediments rather than relying on bones. This is transforming the field, known as palaeogenetics.

Caves can preserve tens of thousands of years of genetic history, providing ideal archives for studying long-term human–ecosystem interactions. The deposits beneath our feet become biological time capsules.

It is something we are exploring here at the Geogenomic Archaeology Campus Tübingen (GACT) in Germany. Analysing DNA from cave sediments allows us to reconstruct who lived in ice age Europe, how ecosystems changed and what role humans played. For example, did modern humans and Neanderthals overlap in the same caves? It’s also possible to obtain genetic material from faeces left in caves. At the moment we are analysing DNA from the droppings of a cave hyena that lived in Europe around 40,000 years ago.

The oldest sediment DNA discovered so far comes from Greenland and is two million years old.

Palaeogenetics has come a long way since the first genome of an extinct animal, the quagga, a close relative of modern zebras, was sequenced in 1984. Over the past two decades, next-generation genetic sequencing machines, laboratory robotics and bioinformatics (the ability to analyse large, complex biological datasets) have turned ancient DNA from a fragile curiosity into a high-throughput scientific tool.

The sediment samples from Hohle Fels are divided up for different analysis methods. Some go to the clean room, some to the geochemical laboratory.

Today, sequencing machines can decode up to a hundred million times more DNA than their early predecessors. Where the first human genome took over a decade to complete, modern laboratories can now sequence hundreds of full human genomes in a single day.

In 2022, the Nobel prize in physiology or medicine was awarded to Svante Pääbo, a leading light in this field. It highlighted the global significance of this research. Ancient DNA now regularly makes headlines, from attempts to recreate mammoth-like elephants, to tracing hundreds of thousands of years of human presence in parts of the world. Crucially, advances in robotics and computing have allowed us to recover DNA from sediments as well as bones.

GACT is a growing research network based in Tübingen, Germany, where three institutions collaborate across disciplines to establish new methods for finding DNA in sediments. Archaeologists, geoscientists, bioinformaticians, microbiologists and ancient-DNA specialists combine their expertise to uncover insights that no single field could achieve alone — a collaboration in which the whole genuinely becomes greater than the sum of its parts.

The network extends well beyond Germany. International partners enable fieldwork in archaeological cave sites and natural caves all over the world. This summer, for example, the team investigated cave sites in Serbia, collecting several hundred sediment samples for ancient DNA and related ecological analyses. Future work is planned in South Africa and the western United States to test the limits of ancient DNA preservation in sediments from different environments and time periods.

Excavation in Serbia
Work underway at a cave site in Serbia.

A needle in a haystack

Recovering DNA from sediments sounds simple: take a scoop, extract, sequence. In reality, it is far more complex. The molecules are scarce, degraded and fragmented, and mixed with modern contamination from cave visitors and wildlife. Detecting authentic ice age molecules relies on subtle chemical damage patterns to the DNA itself, ultra-clean laboratories, robotic extraction, and specialised bioinformatics. Every positive identification is a small triumph, revealing patterns invisible to conventional archaeology.
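
The damage-pattern check described above can be illustrated with a toy calculation. Ancient DNA accumulates cytosine deamination over millennia, which shows up as C→T mismatches concentrated at the 5′ ends of sequenced fragments, so an elevated terminal C→T rate relative to interior positions is one authenticity signal. The function and data below are invented for illustration only; real pipelines use dedicated tools and genuine aligned reads.

```python
# Toy sketch of an ancient-DNA authenticity signal: count how often a
# reference C is read as T, at the 5' terminus versus an interior position.
# Elevated terminal C->T rates are characteristic of deamination damage.

def ct_mismatch_rate(pairs, position):
    """Fraction of reference-C sites at `position` that are read as T."""
    c_sites = t_hits = 0
    for ref, read in pairs:
        if position < len(ref) and ref[position] == "C":
            c_sites += 1
            if read[position] == "T":
                t_hits += 1
    return t_hits / c_sites if c_sites else 0.0

# (reference fragment, sequenced read) pairs -- made up for illustration.
pairs = [
    ("CGATCCGA", "TGATCCGA"),  # C->T at the 5' end: damage-like
    ("CTTAGGCA", "TTTAGGCA"),  # C->T at the 5' end: damage-like
    ("CAGTACGT", "CAGTACGT"),  # undamaged
    ("GCCTAGCT", "GCCTAGCT"),  # undamaged
]

end_rate = ct_mismatch_rate(pairs, 0)  # at the 5' terminus
mid_rate = ct_mismatch_rate(pairs, 4)  # at an interior position
print(end_rate, mid_rate)
```

In this toy data the terminal C→T rate far exceeds the interior rate, which is the qualitative pattern analysts look for when deciding whether sediment-derived fragments are genuinely ancient rather than modern contamination.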

Much of GACT’s work takes place in the caves of the Swabian Jura within Unesco World Heritage sites such as Hohle Fels, home to the world’s oldest musical instruments and figurative art. Neanderthals and Homo sapiens left behind stone artefacts, bones, ivory and sediments that accumulated over tens of millennia. Caves are natural DNA archives, where stable conditions preserve fragile biomolecules, enabling researchers to build up a genetic history of ice age Europe.

One of the most exciting aspects of sediment DNA research is its ability to detect species long gone, even when no bones or artefacts remain. A particular focus lies on humans: who lived in the cave, and when? How did modern humans and Neanderthals use the caves and, as mentioned, were they there at the same time? Did cave bears and humans compete for shelter and resources? And what might the microbes that lived alongside them reveal about the impact humans had on past ecosystems?

Sediment DNA also traces life outside the cave. Predators dragged prey into sheltered chambers; humans left waste behind. By following changes in human, animal and microbial DNA over time, researchers can examine ancient extinctions and ecosystem shifts, offering insights relevant to today’s biodiversity crisis.

The work is ambitious: using sedimentary DNA to reconstruct ice age ecosystems and to understand the ecological consequences of human presence. Only two years into GACT, every dataset generates new questions. Every cave layer adds another twist to the story.

With hundreds of samples now being processed, major discoveries lie ahead. Researchers expect soon to detect the first cave bear genomes, the earliest human traces, and complex microbial communities that once thrived in darkness. Will the sediments reveal all their secrets? Time will tell – but the prospects are exhilarating.

The Conversation

Gerlinde Bigga does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. DNA from soil could soon reveal who lived in ice age caves – https://theconversation.com/dna-from-soil-could-soon-reveal-who-lived-in-ice-age-caves-270318

Google is relying on its own chips for its AI system Gemini. Here’s why that’s a seismic change for the industry

Source: The Conversation – UK – By Alaa Mohasseb, Senior Lecturer in Artificial Intelligence and Machine Learning, University of Portsmouth

For many years, the US company Nvidia shaped the foundations of modern artificial intelligence. Its graphics processing units (GPUs) are a specialised type of computer chip originally designed to handle the processing demands of graphics and animation. But they’re also great for the repetitive calculations required by AI systems.

Thus, these chips have powered the rapid rise of large language models – the technology behind AI chatbots – and they have become the familiar engine behind almost every major AI breakthrough.

This hardware sat quietly in the background while most of the attention was focused on algorithms and data. Google’s decision to train Gemini on its own chips, called tensor processing units (TPUs), changes that picture. It invites the industry to look directly at the machines behind the models and to reconsider assumptions that long seemed fixed.

This moment matters because the scale of AI models has begun to expose the limits of general purpose chips. As models grow, the demands placed on processing systems increase to levels that make hidden inefficiencies impossible to ignore.

Google’s reliance on TPUs reveals an industry that is starting to understand that hardware choices are not simply technical preferences but strategic commitments that determine who can lead the next wave of AI development.

Google’s Gemini relies on cloud systems that simplify the challenging task of coordinating devices during large-scale training of AI models.

The design of these different chips reflects a fundamental difference in intention. Nvidia’s GPUs are general purpose and flexible enough to run a wide range of tasks. TPUs were created for the narrow mathematical operations at the heart of AI models.

Independent comparisons highlight that TPU v5p pods can outperform high-end Nvidia systems on workloads tuned for Google’s software ecosystem. When the chip architecture, model structure and software stack align so closely, improvements in speed and efficiency become natural rather than forced.

These performance characteristics also reshape how quickly teams can experiment. When hardware works in concert with the models it is designed to train, iteration becomes faster and more scalable. This matters because the ability to test ideas quickly often determines which organisations innovate first.

These technical gains are only one part of the story. Training cutting-edge AI systems is expensive and requires enormous computing resources. Organisations that rely only on GPUs face high costs and increasing competition for supply. By developing and depending on its own hardware, Google gains more control over pricing, availability and long-term strategy.

Analysts have noted that this internal approach positions Google with lower operational costs while reducing dependence on external suppliers for chips. A particularly notable development came from Meta as it explored a multi-billion dollar agreement to use TPU capacity.

When one of the largest consumers of GPUs evaluates a shift toward custom accelerators, it signals more than curiosity. It suggests growing recognition that relying on a single supplier may no longer be the safest or most efficient strategy in an industry where hardware availability shapes competitiveness.

These moves also raise questions about how cloud providers will position themselves. If TPUs become more widely available through Google’s cloud services, the rest of the market may gain access to hardware that was once considered proprietary. The ripple effects could reshape the economics of AI training far beyond Google’s internal research.

What this means for Nvidia

Financial markets reacted quickly to the news. Nvidia’s stock fell as investors weighed the potential for cloud providers to split their hardware needs across more than one supplier. Even if TPUs do not replace GPUs entirely, their presence introduces competition that may influence pricing and development timelines.

The existence of credible alternatives pressures Nvidia to move faster, refine its offerings and appeal to customers who now see more than one viable path forward.

Even so, Nvidia retains a strong position. Many organisations depend heavily on CUDA (a computing platform and programming model developed by Nvidia) and the large ecosystem of tools and workflows built around it.

Moving away from that environment requires significant engineering effort and may not be feasible for many teams. GPUs continue to offer unmatched flexibility for diverse workloads and will remain essential in many contexts.

However, the conversation around hardware has begun to shift. Companies building cutting-edge AI models are increasingly interested in specialised chips tuned to their exact needs. As models grow larger and more complex, organisations want greater control over the systems that support them. The idea that one chip family can meet every requirement is becoming harder to justify.

Google’s commitment to TPUs for Gemini illustrates this shift clearly. It shows that custom chips can train world-class AI models and that hardware purpose-built for AI is becoming central to future progress.

It also makes visible the growing diversification of AI infrastructure. Nvidia remains dominant, but it now shares the field with alternatives that are increasingly capable of shaping the direction of AI development.

The foundations of AI are becoming more varied and more competitive. Performance gains will come not only from new model architectures but from the hardware designed to support them.

Google’s TPU strategy marks the beginning of a new phase in which the path forward will be defined by a wider range of chips and by the organisations willing to rethink the assumptions that once held the industry together.

The Conversation

Alaa Mohasseb does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Google is relying on its own chips for its AI system Gemini. Here’s why that’s a seismic change for the industry – https://theconversation.com/google-is-relying-on-its-own-chips-for-its-ai-system-gemini-heres-why-thats-a-seismic-change-for-the-industry-270818

Before trips to Mars, we need better protection from cosmic rays

Source: The Conversation – UK – By Zahida Sultanova, Post Doctoral Research Fellow, School of Biological Sciences, University of East Anglia

Frame Stock Footage/Shutterstock.com

The first step on the Moon was one of humanity’s most exciting accomplishments. Now scientists are planning return trips – and dreaming of Mars beyond.

Next year, Nasa’s Artemis II mission will send four astronauts to fly around the Moon to test the spacecraft before future landings. The following year, two astronauts are expected to explore the surface of the Moon for a week as part of Nasa’s Artemis III mission.

And finally, the trip to Mars is planned for the 2030s. But there’s an invisible threat standing in the way: cosmic rays.

When we look at the night sky, we see stars and nearby planets. If we’re lucky enough to live somewhere without light pollution, we might catch meteors sliding across the sky. But cosmic rays – consisting of protons, helium nuclei, heavy ions and electrons – remain hidden. They stream in from exploding stars (galactic cosmic rays) and our very own sun (solar particle events).

They don’t discriminate. These particles carry so much energy and move so fast that they can knock electrons off atoms and disrupt molecular structures of any material. That way, they can damage everything in their path, machines and humans alike.

The Earth’s magnetic field and atmosphere shield us from most of this danger. But outside Earth’s protection, space travellers will be routinely exposed. In deep space, cosmic rays can break DNA strands, disrupt proteins and damage other cellular components, increasing the risk of serious diseases such as cancer.

The research challenge is straightforward: measure how cosmic rays affect living organisms, then design strategies to reduce their damage.

Ideally, scientists would study these effects by sending tissues, organoids (artificially made organ-like structures) or lab animals (such as mice) directly into space. That does happen, but it’s expensive and difficult. A more practical approach is to simulate cosmic radiation on Earth using particle accelerators.

Cosmic ray simulators in the US and Germany expose tissues, plants and animals to different components of cosmic rays in sequence. A new international accelerator facility being built in Germany will reach even higher energies, matching levels found in space that have never been tested on living organisms.

But these simulations aren’t fully realistic. Many experiments deliver the entire mission dose in a single treatment. This is like using a tsunami to study the effects of rain.

In real space, cosmic rays arrive as a mixture of high-energy particles hitting simultaneously, not one type at a time. My colleagues and I have suggested building a multi-branch accelerator that could fire several tuneable particle beams at once, recreating the mixed radiation of deep space under controlled laboratory conditions. For now, though, this kind of facility exists only as a proposal.

Beyond better testing, we need better protection. Physical shields seem like the obvious first defence. Hydrogen-rich materials such as polyethylene and water-absorbing hydrogels can slow charged particles. Although they are used, or planned to be used, as spacecraft materials, their benefits are limited.

Galactic cosmic rays in particular, the ones that arrive from distant exploding stars, are so energetic that they can penetrate physical shielding. They can even generate secondary radiation that increases exposure. So effective protection using physical shields alone remains a major challenge.

Nature’s armour

That’s why scientists are exploring biological strategies. One approach is to use antioxidants. These molecules can protect DNA from harmful chemicals that are produced when cosmic rays hit living cells.

Supplementing with CDDO-EA, a synthetic antioxidant, reduces cognitive damage caused by simulated cosmic radiation in female mice. In the study, mice exposed to simulated cosmic radiation learned a simple task more slowly compared to unexposed mice. However, mice that received the synthetic antioxidant performed normally despite being exposed to simulated cosmic radiation.

Another approach involves learning from organisms with extraordinary abilities. Hibernating animals become more resistant to radiation during hibernation. The mechanisms by which hibernation protects against radiation are not yet fully understood. Still, inducing hibernation-like conditions in non-hibernating animals is possible and can make them more radioresistant.

Tardigrades – microscopic creatures also known as water bears – are also extremely radioresistant, especially when dehydrated. Although we can’t hibernate or dehydrate astronauts, the strategies these organisms use to protect cellular components might help us preserve other organisms during long space journeys.

Microbes, seeds, simple food sources and even animals that could later become our companions might be kept in a protected state for a while. Under calmer conditions, they could then be brought back to full activity. Therefore, understanding and harnessing these protective mechanisms could prove crucial for future space journeys.

A third strategy focuses on supporting organisms’ own stress responses. Stressors on Earth, such as starvation or heat, have driven organisms to evolve cellular defences that protect DNA and other cellular components. In a recent preprint (a paper that is yet to be peer reviewed), my colleague and I suggest that activating these mechanisms through specific diets or drugs may offer additional protection in space.

Physical shields alone won’t be enough. But with biological strategies, more experiments in space and on Earth, and the construction of new dedicated accelerator complexes, humanity is getting closer to making routine space travel a reality. At the current pace, we are probably decades away from fully solving cosmic-ray protection. Greater investment in space radiation research could shorten that timeline.

The ultimate goal is to journey beyond Earth’s protective bubble without the constant threat of invisible, high-energy particles damaging our bodies and our spacecraft.

The Conversation

Dr. Zahida Sultanova works for the University of East Anglia and is funded by the Leverhulme Trust. She is a member of European Society of Evolutionary Biology (ESEB) and Ecology and Evolutionary Biology Society of Turkey (EkoEvo).

ref. Before trips to Mars, we need better protection from cosmic rays – https://theconversation.com/before-trips-to-mars-we-need-better-protection-from-cosmic-rays-268934

The Beatles’ movie Help! featured crude racial stereotypes – but it shouldn’t be hidden away

Source: The Conversation – UK – By Philip Murphy, Director of History & Policy at the Institute of Historical Research and Professor of British and Commonwealth History, School of Advanced Study, University of London

I sometimes think that my teenage fascination with the Beatles is what drew me to becoming a professional historian. Piecing together what I could find out about them in the years before the internet was a sort of gateway to the history of Britain and the world in the decade in which I was born – Carnaby Street, the Vietnam War, Richard Nixon and counter-cultural figures like Timothy Leary.

The original Beatles Anthology released in 1995 in the form of three albums and a documentary series was undoubtedly a treat for Beatles fans. Recently an updated version of the Anthology arrived in the shops. The new release includes a fourth CD of unreleased tracks.

It is always thrilling to hear iconic tracks at an earlier stage of their gestation. And this is essentially what the recently released additional material offers. However, the true obsessives among us have heard much of the Anthology material from bootleg collections. What does remain as hard to come by as ever are some of the films.

At a time when a fan had to be grateful for what they were given, I can still remember the excitement of learning that the BBC was planning to show all the band’s films over the Christmas of 1979. If we are indeed in the “barrel-scraping” phase of Beatles commemoration, as one critic dubbed the new Anthology, it is surprising how difficult it now is to access some of those films.

In the last ten years, some work has been done to make the films more accessible. The band’s first movie, A Hard Day’s Night (1964), directed by Richard Lester, was re-released in cinemas in 2014 to mark its 50th anniversary. It is currently available on the BFI’s streaming service.

Their swan-song film, Let it Be (1970) was remastered for Disney+ in 2024 by Peter Jackson. Jackson had previously spliced together hours of unused footage from director Michael Lindsay-Hogg’s original recording sessions to create the 2021 documentary Get Back.

Some, though, are yet to be remastered. The band’s well-regarded 1968 animated movie, Yellow Submarine, is still awaiting an authorised re-release, although no doubt this will come. Their dismal 1967 BBC Christmas special Magical Mystery Tour was notoriously dead-on-arrival, and if there are commercial reasons for resuscitating it, there certainly aren’t any artistic ones.

But what about Help! (1965), the band’s second outing with Richard Lester? The film’s madcap plot involves the group being chased around the globe by the comically inept members of a religious cult keen to recover a sacred ring from the finger of the band’s drummer, Ringo Starr.

It is a far more accomplished piece of filmmaking than Magical Mystery Tour. Indeed, it was in many ways ground-breaking for pioneering the music video format and rock musicals. But the last DVD release of the movie was a 2007 two-disc set.
The reason why the film has missed out on more recent celebratory repackaging isn’t difficult to surmise: it features three familiar European character actors – Leo McKern, Eleanor Bron and John Bluthal – adopting “brown-face” to portray Indian cult members, whose quest for the ring is aimed at enabling them to carry out a human sacrifice. No mention is made of any of this in the documentary series that accompanied the Anthology recordings, and which has also been re-released.

As the 1960s progressed, these sorts of crude orientalist stereotypes faced parody and criticism. But apologists for the movie would struggle to demonstrate that Help! is doing anything other than reinforcing those attitudes. Some contemporary critics have simply labelled it as racist.

I wonder whether an attempt could be made to salvage Help! by re-releasing it with its own documentary package exploring the film’s historical context. If the choice is between contextualisation and simply allowing the movie to languish in the far corners of eBay to avoid offending global consumers, then I think we should go for the former.

While it would be difficult to make Help! genuinely palatable to contemporary viewers, it could be used as the starting point for a fascinating exploration of the ways in which Britain in the Swinging Sixties viewed its colonial past.

Some of the era’s more radical writers and filmmakers like Edward Bond and Tony Richardson were keen to question the values of their parents’ generation. But as Help! demonstrates, imperial assumptions of white racial superiority continued to permeate popular culture, not least in the area of comedy.

If the Beatles of 1965 had passively accepted the plot of Help!, the Beatles of the late 60s would almost certainly have baulked at it. George Harrison famously embraced Indian music, culture and religion. John Lennon would subsequently feature in another Richard Lester film, How I Won the War (1967), which mocked British jingoism and its rigid class system.

A year later, Lennon faced racist abuse from the British press and fans when he left his wife for the Japanese artist Yoko Ono. And the track Commonwealth, which surfaced in the 2021 documentary Get Back, is an improvisation satirising Enoch Powell’s Rivers of Blood speech, which was blamed for inspiring a spate of racist attacks.

The Beatles were on a journey, and Help! deserves to be seen and discussed as part of that journey, rather than being hidden away.




The Conversation

Philip Murphy has received funding from the AHRC and is a member of the European Movement UK.

ref. The Beatles’ movie Help! featured crude racial stereotypes – but it shouldn’t be hidden away – https://theconversation.com/the-beatles-movie-help-featured-crude-racial-stereotypes-but-it-shouldnt-be-hidden-away-270526

Debt crisis: the four levers that can help Senegal avoid restructuring

Source: The Conversation – in French – By Souleymane Gueye, Professor of Economics and Statistics, City College of San Francisco

The International Monetary Fund (IMF) has reassessed Senegal’s debt at 132% of GDP, triggering a downgrade of its sovereign rating. Discussions with the IMF are moving slowly, and the country is struggling to raise financing on international markets.

Even so, the debt has remained sustainable since 2024, but it leaves Senegal facing a dilemma: restructure immediately, or continue the current strategy, which allows it to meet its repayments without an IMF agreement and to maintain its Economic and Social Recovery Plan.

As an economist who has studied the relationship between the IMF and Senegal over forty years, I do not recommend restructuring this debt.

Despite a financing need of 5,800 billion CFA francs for 2025, it is preferable to stay on the current trajectory and continue the recovery plan, while seeking an IMF agreement aligned with “Vision 2050”.

What is a debt restructuring?

A debt restructuring is a “negotiated rearrangement of a state’s obligations to restore the sustainability of its debt”. It occurs when a country is no longer able to pay its debts without compromising its economic and social stability. In other words, it applies in the event of a default. That is not Senegal’s current situation. A restructuring mainly aims to ease financial pressure and restore budgetary room for manoeuvre.




Read more:
Why Moody’s latest downgrade of Senegal’s rating doesn’t hold up


It rests on two simple principles: a country cannot keep borrowing indefinitely, and creditors sometimes struggle to coordinate with one another.

A restructuring can take several forms:

  • principal reduction (haircut): the creditor agrees to write off part of the amount owed. This is the heaviest option, reserved for situations of severe distress.

  • reprofiling: the country pays everything, but more slowly and at lower cost, sometimes with a grace period. This eases budgetary adjustment without a major shock.

  • rescheduling: postponing maturities over time, often under the aegis of the Paris Club – an informal group of creditor countries whose role is to coordinate the restructuring of the public debt of states that can no longer repay their loans – or of private creditors, with softened financial terms.

  • the G20 Common Framework: a comprehensive restructuring coordinated among all creditors, designed to avoid deadlock, guarantee transparency and be accompanied by a macroeconomic programme with the IMF.




Read more:
Escaping the debt trap: alternatives to the IMF model for Senegal


The economic implications

Several studies, such as that of the American economist Barry Eichengreen, show that a debt restructuring can sharply reduce private investment, weaken banks exposed to government securities and limit the country’s access to external financing. It also raises risk premiums and can trigger a crisis of confidence.

In an economy like Senegal’s, integrated into the West African Economic and Monetary Union (Uemoa) and lacking an independent monetary policy, such shocks are very hard to absorb.

In the long run, a restructuring can be useful if it is well designed and accompanied by credible reforms. It can improve the structure of the debt, reduce interest payments, free up room for priority spending and restore confidence.

But those benefits depend on maintaining a growth dynamic that is not hobbled by overly brutal adjustments – something impossible to achieve under an IMF programme.

Should Senegal restructure its public debt?

Senegal’s debt is very high, but it remains sustainable by the usual method of calculation. One compares the state’s mandatory expenditure (wages, transfers, operating and investment spending) with its revenues, then checks whether debt service remains bearable. By this calculation, based on the latest initial budget law, Senegal falls into the high-risk category, but it is not in default.
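As a minimal sketch of that kind of check (every figure and the 25% service ceiling below are hypothetical illustrations, not official IMF thresholds):

```python
def assess_debt(revenues, mandatory_spending, debt_service,
                service_ceiling=0.25):
    """Toy sustainability check: is there fiscal space left after
    mandatory spending, and does debt service stay below a ceiling?"""
    margin = revenues - mandatory_spending   # fiscal space before debt service
    service_ratio = debt_service / revenues  # share of revenue absorbed by debt
    if margin < 0 or service_ratio > service_ceiling:
        return "high risk"
    return "sustainable"

# Hypothetical figures, in billions of CFA francs
print(assess_debt(revenues=4500, mandatory_spending=4000, debt_service=1300))
```

On such a calculation a country can keep servicing its debt on schedule and still be classified as high risk, which is precisely the distinction drawn here.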

On the other hand, financing needs are rising, and market confidence has weakened since the revelation of hidden debts and the absence of an agreement with the IMF.




Read more:
Senegal: what the downgrade reveals about hidden debt and credit ratings


Despite this difficult context, Senegal continues to pay its debt, but very high interest rates are driving up the cost of debt service and shrinking the room to invest. Today, 16% of tax revenues and 50% of state revenues go to repaying the debt, against an average of 25% in Africa.

Can this situation last? A restructuring could ease the pressure, for refusing to resort to one exposes the country to a risk of a liquidity crisis that could turn into a financial and economic crisis.

The financial context is very tense, but in our view it remains sustainable. It can even improve if the country makes full use of its assets. Senegal indeed enjoys a promising environment that can avert an immediate restructuring of the debt.

Those assets revolve around the following factors:

  • A young, dynamic and entrepreneurial population.

  • Political and institutional stability in an unstable region.

  • Solid potential growth (8%-10%), strengthened by gas, oil and gold.

  • A tax and financial administration that keeps improving thanks to recent reforms.

  • Enviable potential in agribusiness, fisheries, services and the digital economy, plus still-available access to external faith-based financing.

Avoiding an IMF-style restructuring

Building on these assets, Senegal should avoid a restructuring supervised by the IMF, since it would necessarily entail an IMF programme. Such a programme rests on well-known policies:

• A restrictive fiscal policy: sharp deficit reduction, cuts in public spending, elimination of subsidies, tax increases, administrative layoffs;

• Privatisation of public and parapublic enterprises;

• A restrictive monetary policy (raising the cost of credit), if the country has an autonomous monetary policy;

• Full liberalisation of the financial sector and foreign trade.

These policies would risk deepening the contraction of economic activity rather than resolving it because, owing to the fixed exchange-rate regime, Senegal has neither the monetary nor the exchange-rate lever to cushion the negative effects these measures would have on economic activity.

In other words, the fiscal imbalances would be corrected at the expense of endogenous, inclusive economic growth, exacerbating economic and social inequality and the impoverishment of the population.

Entering a restructuring now would create a vicious circle: a contraction of economic activity would cut revenues and make the programme’s targets harder to reach, which would in turn bring more austerity.

Everything must therefore be done to avoid this scenario and to devise another strategy to ease the pressure on public finances and manage the debt better.

An alternative strategy

Avoiding an immediate restructuring does not mean standing still. On the contrary, a solid strategy must be built around four levers:

  • First, favour an approach based on transparency: complete the debt inventory, publish all the reports and draw up a management plan in line with the recommendations of the Court of Auditors. This approach should ideally be accompanied by a formal agreement with the IMF to validate the macroeconomic framework proposed by Senegal and to strengthen budgetary transparency, thereby sending international markets a strong signal of confidence.

  • The second lever is to distinguish clearly between the central government’s debt and that of public enterprises and sovereign guarantees.

For the central debt, the Senegalese government can renegotiate the maturities of high-interest Eurobonds and commercial loans over the long term without a formal restructuring. The state should seek to convert short-term bilateral loans into concessional financing or sector partnerships, while favouring financing from bilateral and multilateral partners (China, Turkey, India, the Gulf states).

This partial reprofiling, designed independently of the IMF, could improve liquidity and free up room to support the economy. The state should focus on short-term obligations (Treasury bills and securities of under two years) to refinance, renegotiate rates or extend maturities, so as not to destabilise the Uemoa banking system.

  • The third lever is to spread the fiscal adjustment over several years, broaden the tax base and launch a reform centred on equity. The government must also strengthen the governance of the revenue agencies and put in place effective management of the revenues from gas, oil and other natural resources.

  • Gas and oil production offers Senegal an additional lever: leveraging future revenues to negotiate better terms. Creating a stabilisation fund and a transparent framework for exploiting these resources can reduce the debt burden and make it possible to invest in youth training, human capital and public services.




Read more:
How Senegal can finance its economy without taking on more debt


What to take away?

As long as the IMF maintains its standard conditions or delays the negotiations, Senegal should not restructure its debt. The strategy to follow is to:

• Gradually consolidate public finances by cutting the central government’s operating expenditure and merging public directorates and agencies;

• Preserve public investment, revive private investment and design specific measures to relaunch economic growth;

• Negotiate less procyclical and less constraining terms;

• Finalise a programme with the IMF based on Vision 2050;

• And contemplate a restructuring only as a last resort, within a structured, rethought framework anchored in the Senegalese government’s economic priorities and on more favourable terms.

A debt that is well analysed and well managed cannot end in economic failure. On the contrary, it can become a powerful instrument of sovereignty for promoting full employment, reducing poverty and advancing economic and social justice.

The Conversation

Souleymane Gueye receives funding from the Monterey Institute of International Studies and the Fulbright Foundation.

ref. Debt crisis: the four levers that can help Senegal avoid restructuring – https://theconversation.com/crise-de-la-dette-les-quatre-leviers-qui-peuvent-aider-le-senegal-a-eviter-la-restructuration-270177

Larry Summers’ sexism is jeopardizing his power and privilege, but the entire economics profession hinders progress for women

Source: The Conversation – USA (2) – By Yana van der Meulen Rodgers, Professor of Labor Studies, Rutgers University

Larry Summers attends a prestigious conference in July 2025 in Sun Valley, Idaho. Kevin Dietsch/Getty Images

House lawmakers released damning correspondence between economist Larry Summers and the late convicted sex offender Jeffrey Epstein on Nov. 12, 2025. The exchanges, which were among more than 20,000 newly released public documents, showed how Summers – a former U.S. Treasury secretary and Harvard University president – repeatedly sought Epstein’s advice while pursuing an intimate relationship with a woman he was mentoring.

The two men exchanged texts and emails until July 5, 2019, the day before Epstein was arrested on federal charges of the sex trafficking of minors. That was more than a decade after Epstein pleaded guilty to soliciting prostitution from a girl who was under 18. Epstein died by suicide that August, while in jail.

“As I have said before, my association with Jeffrey Epstein was a major error of judgement,” Summers wrote in a statement to The Crimson, Harvard’s newspaper, after the documents came to light. “I am deeply ashamed of my actions and recognize the pain they have caused,” he said in another statement.

The texts have ignited a new round of scrutiny of Summers and calls for Harvard to revoke his tenure.

Four women hold photos of Jeffrey Epstein aloft.
Protesters hold signs bearing photos of convicted sex criminal and Larry Summers confidante Jeffrey Epstein in front of a federal courthouse on July 8, 2019, in New York City.
Stephanie Keith/Getty Images

Prestigious career is unraveling

These revelations are leading to the unraveling of Summers’ prestigious career.

The 70-year-old economist went on leave from teaching at Harvard on Nov. 19. He has also stepped down from several boards on which he was serving, including Yale University’s Budget Lab, OpenAI and two think tanks – the Center for American Progress and the Center for Global Development.

In addition, Harvard has launched an investigation into whether Summers and other people affiliated with the university broke university policies through their interactions with Epstein and should be subject to disciplinary action.

Many organizations have severed their ties with Summers. His withdrawals from public commitments include his roles as a paid contributor to Bloomberg TV and as a contributing opinion writer at The New York Times. He also withdrew from the Group of 30, an international group of financial and economics experts.

Choice of a wingman was problematic

The correspondence that surfaced in late 2025 indicated that the prominent economist had engaged in more than casual banter with a convicted sex criminal.

Epstein called himself Summers’ “wing man.” Summers asked Epstein about “getting horizontal” with his mentee – a female economist who had studied at Harvard. And, not for the first time, Summers questioned the intelligence of women.

Summers, who is one of the nation’s most influential economists, also complained about the growing intolerance among the “American elite” of sexual misconduct.

These comments call into question Summers’ judgment, behavior and beliefs and the power dynamics between him and the women he has mentored.

As a female economist and a board member of the Committee on the Status of Women in the Economics Profession, I wasn’t surprised by the latest revelations, shocking as they may appear.

After all, it was Summers’ disparaging remarks about what he said was women’s relative inability to do math that led him to relinquish the Harvard presidency in 2006. And researchers have been documenting for years the gender bias that pervades the profession of economics.

A leaky pipeline in higher education

Summers taught my first-year Ph.D. macroeconomics course before he became a prominent policymaker during the Clinton administration, and he advised me during his office hours. Thankfully I did not experience any sexual harassment, but as an economics doctoral candidate at Harvard in the late 1980s, I did gain firsthand insight into the elitist culture of the nation’s top economics program.

Back then, only about 1 in 5 of the people who earned a Ph.D. in economics in the U.S. were women. This percentage rose to 30.5% by 1995 and has barely budged since then.

In 2024, according to the National Science Foundation, 34.2% of newly minted economics Ph.D.s – about 1 in 3 – in the U.S. were women, a considerably lower share than in other social sciences, business, the humanities and science.

After earning doctoral degrees in economics, women face a leaky pipeline in the tenure track, the highest-paid, most secure and prestigious academic jobs. The higher the rank, the lower the representation of women.

In 2024, 34% of assistant professors in economics were women, but only 28% of tenured associate professors – the next step on the ladder – were women. And just 18% of tenured full professors in economics were women.

The gender gap is wider in influential positions, such as economics department chairs and the editorial board members of economics journals. As of 2019, only 24% of the 55,035 editorial board members of economics journals were women. A brief look at the websites of the top 10 economics departments in late 2025 indicates that only one of those 10 department chairs is a woman.

Publication patterns also reflect this inequality. Women are substantially underrepresented as authors in the top economics journals, and this imbalance is not explained by quality differences. Rather, studies have found that women face higher hurdles in peer review, departmental support and finding productive co-authors.

Chilly climate

The data paints a clear picture of systemic bias in the profession’s practices and culture. That bias influences who succeeds and who is sidelined.

A 2019 survey by the American Economic Association, a professional association for economists, documented widespread sexual discrimination and harassment. Almost half of the women surveyed among the association’s members said that they had experienced sexual discrimination that interfered with their careers in some way, and 43% reported having experienced offensive sexual behavior from another economist.

A follow-up survey in 2023 indicated that the association’s new initiatives to improve the professional climate had resulted in little improvement.

Beyond academia

Economists can influence policymakers’ decisions on interest rates, taxation and social spending. In turn, the underrepresentation of women in economics can hamper policymaking by limiting the range of perspectives that inform economic decisions.

Researchers have found that arguments from female economists are roughly 20% more persuasive in shaping public opinion than identical arguments from men.

And yet the gender gap still pervades economics outside academia. At the 12 regional Federal Reserve banks, for example, women constituted just 23% of 411 research track economists in 2022.

Following its own code of conduct

“Economists have a professional obligation to conduct civil and respectful discourse in all forums,” the American Economic Association’s code of conduct states. The code gives organizations in economics a clear basis for deciding whether to keep or cut ties with Summers.

The Committee on the Status of Women in the Economics Profession has called for all economic institutions to undertake investigations into Summers’ conduct.

As of early December, the extent to which economic journals and other economics groups are responding to the controversy was still unclear.

I believe that eliminating inequity in economics would take more than an investigation of Summers’ conduct. In my view, institutions and professional associations, including the American Economic Association, should strengthen and enforce codes of conduct that cover harassment, conflicts of interest and misuse of mentorship roles.

In addition, I think that Summers’ ties to Epstein are a powerful reminder of why university economics departments need clearer standards and more transparency in hiring, promotions and leadership appointments. Strengthening those standards would help them root out the sexism and other forms of elitism that have historically marked the profession so that academic success is driven more by merit than self-perpetuating privilege.

It makes little sense to me that the economics profession is claiming to wield authority while tolerating inequity and ethical lapses. Taking these steps toward greater accountability would help to restore trust.

The Conversation

Yana Rodgers serves on the board of the Committee on the Status of Women in the Economics Profession. Dr. Summers taught her PhD macroeconomics course at Harvard University in 1988.

ref. Larry Summers’ sexism is jeopardizing his power and privilege, but the entire economics profession hinders progress for women – https://theconversation.com/larry-summers-sexism-is-jeopardizing-his-power-and-privilege-but-the-entire-economics-profession-hinders-progress-for-women-270367

Planning life after high school isn’t easy – 4 tips to help students and families navigate the process

Source: The Conversation – USA (2) – By Shannon Pickett, Professor of Psychology and Licensed Mental Health Counselor, Purdue University

While many high school students think mostly about four-year college opportunities, some students might be less certain about what is best. iStock/Getty Images Plus

Many high school seniors are now focusing on what they will do once they graduate – or grappling with the fact that they don’t yet know what comes next.

Families trying to guide and support these students at the juncture of a major life transition likely also feel nervous about the open-ended possibilities, from starting at a standard four-year college to not attending college at all.

I am a mental health counselor and psychology professor.

Here are four tips to help make deciding what comes after high school a little easier for everyone involved:

1. Shadow someone with a job you might want

I have worked with many college students who are interested in a particular career path, but are not familiar with the job’s day-to-day workings.

A parent, teacher or another adult in this student’s life could connect them with someone they shadow at work, even for a day, so the student can better understand what the job entails.

High school students may also find that interviewing someone who works in a particular field is another helpful way to narrow down career path options, or finalize their college decisions.

Research published in 2025 shows that high school students who complete an internship are better able to decide whether certain careers are a good fit for them.

2. Look at the numbers

Full-time students can pay anywhere from about US$4,000 per semester for in-state tuition at a public state school to just shy of $50,000 per semester at a private college or university. The average annual cost of tuition alone at a public college or university in 2025 is $10,340, while the average at a private school is $39,307.

Tuition continues to rise, though the rate of growth has slowed in the past few years.

About 56% of 2024 college graduates had taken out loans to pay for college.

Concerns about affording college often come up with clients who are deciding on whether or not to get a degree. Research has shown that financial stress and debt load are leading to an increase in students dropping out of college.

It can be helpful for some students to look at tuition costs and project what their monthly student loan payments would be like after graduation, given the expected salary range in particular careers. Financial planning could also help students consider the benefits and drawbacks of public, private, community colleges or vocational schools.
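One way to run those numbers is the standard fixed-rate loan amortization formula; the loan amount, interest rate and term below are hypothetical examples, not figures from any particular lender or loan program:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate loan amortization formula."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of monthly payments
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical example: $30,000 borrowed at 6% over a 10-year term
print(round(monthly_payment(30000, 0.06, 10), 2))  # about $333 a month
```

Comparing that monthly figure with the expected starting salary in a chosen field makes the affordability trade-offs concrete.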

Even with planning, there is no guarantee that students will be able to get a job in their desired field, or quickly earn what they hope to make. No matter how prepared students might be, they should recognize that there are still factors outside their control.

A blue circular maze shows people from above walking on different paths.
No matter what route graduating high school students take, it’s often a stressful period of time.
Klaus Vedfelt/Royalty-Free

3. Normalize other kinds of schools

I have found that some students feel they should go to a four-year college right after they graduate because it is what their families expect. Some students and parents see a four-year college as more prestigious than a two-year program, and believe it is more valuable in terms of long-term career growth.

That isn’t the right fit for everyone, though.

Enrollment at trade-focused schools increased almost 20% from the spring of 2020 through 2025, and such schools now account for 19.4% of public two-year college enrollment.

Going to a trade school or seeking a two-year associate’s degree can put students on a direct path to get a job in a technical area, such as becoming a registered nurse or electrician.

But there are also reasons for students to think carefully about trade schools.

In some cases, trade schools are for-profit institutions and have been subjected to federal investigation for wrongdoing. Some of these schools have been fined and forced to close.

Still, it is important for students to consider which path is personally best for them.

Research has shown that job satisfaction has a positive impact on mental health, and having a longer history with a career field leads to higher levels of job satisfaction.

4. Consider a gap year before shutting down the idea

One strategy that high school graduates have used in recent years is taking a year off between high school and college in order to better determine what is the right fit for a student. Approximately 2% to 3% of high school graduates take a gap year – typically before going on to enroll in college.

Some young people may travel during a gap year, volunteer, or get a job in their hometown.

Whatever the reason students take gap years, I have seen that the time off can be beneficial in certain situations. Taking a year off before starting college has also been shown to lead to better academic performance in college.

The Conversation

Shannon Pickett does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Planning life after high school isn’t easy – 4 tips to help students and families navigate the process – https://theconversation.com/planning-life-after-high-school-isnt-easy-4-tips-to-help-students-and-families-navigate-the-process-263864

Why AI is forcing companies to rethink the value of work

Source: The Conversation – France (in French) – By Caroline Gans Combe, Associate professor Data, econometrics, ethics, OMNES Education

One study found that of 178 references cited by an AI, 69 pointed to incorrect or nonexistent sources. GoldenDayz/Shutterstock

The relationship between artificial intelligence and employment demands a thorough rethink of how tasks are analysed within a company. It plays out at two levels: in understanding the company’s value chains, and in executives’ ability to grasp them. The stakes? Identifying precisely where and how to inject AI. Because AI can lie, invent references and get things wrong.


Grim predictions about the disappearance of entry-level white-collar jobs are feeding an already long-running debate about the fungibility of labour in the face of advances in artificial intelligence (AI) – that is, the replacement of one job by another.

But what if the real question is not what can be substituted, but where and how that substitution creates or destroys value for the company? That is what we highlight in our study conducted by the Global Partnership on AI (GPAI) and the Centre d’expertise international de Montréal en intelligence artificielle (Ceimia).

The challenge for artificial intelligence is to move beyond identifying jobs by category – and, more finely, automatable tasks – to understanding their strategic position in the value-creation chain.

Even today, most studies of AI’s impact in this area proceed by decomposition: identify tasks, assess how automatable each is, aggregate the results. This methodology, inherited from Carl Benedikt Frey and Michael Osborne, who estimated that automation put 47% of jobs at risk, has its limits.

It ignores both the specific economic function of each individual task within the definition of a job and the process of value creation.

So where and how can AI add value in the company? How can executives take hold of it to become the best architects of human-machine interaction? How should this transition be supported?

The Deloitte Australia scandal

The Deloitte scandal of October 2025 illustrates the problem. Deloitte Australia had to partially refund an invoice of 440,000 Australian dollars (about 248,000 euros). Why? A report commissioned by the government turned out to have been produced with Azure OpenAI GPT-4o – without initial disclosure.

The work contained nonexistent academic references, invented quotations, and fictitious experts. Worse, once these problems were detected, the firm replaced the fake references with real ones that did not support the document's original conclusions.

Deloitte had been chosen not for its writing skills, but because it provided an assurance of independent expertise, a guarantee of source reliability, and a commitment to professional responsibility. By automating without oversight, the firm destroyed precisely what it was being paid for.

Nonexistent references

The phenomenon is not isolated. A study in the Cureus Journal of Medical Science found that of 178 references cited by an AI, 69 pointed to incorrect or nonexistent sources. More troubling still: fictitious terms are now spreading into the real scientific literature after being generated by AI.

This asymmetry shows that a task's "value" depends as much on its place in the production chain as on its "role" with respect to other tasks – on how it influences them.

The harmful impact of using AI in this kind of context is illustrated by the case of the medical assistant Nabla. By late 2024, the company's automated note-taking tool for medicine had been used by more than 230,000 physicians and 40 healthcare organizations, and had transcribed 7 million consultations.

At that point, a study revealed that the software had invented entire sentences, referring to nonexistent drugs such as "hyperactivated antibiotics" and to comments never spoken – all in a context where the patients' audio recordings had been deleted, making any after-the-fact verification impossible.

Pinning down which tasks AI can automate

In the AI era, we must look beyond job destruction and automation potential alone, and evaluate each task along three complementary dimensions.

Operational dependency

The first dimension is operational dependency: how the quality of one task affects the tasks downstream. High dependency – say, data extraction feeding into a strategy – demands caution, because errors propagate through the entire chain. Conversely, low dependency, such as simply formatting a document, tolerates automation better.

Non-codifiable knowledge

The second dimension assesses the share of non-codifiable knowledge a task requires: everything that comes from experience, intuition, and contextual judgment, and that cannot be translated into explicit rules. The higher this share, the tighter the human-machine coupling must remain, so that weak signals are interpreted and human discernment is brought to bear.

Reversibility

The third dimension is reversibility: the ability to correct an error quickly. Low-reversibility tasks, such as a preoperative medical diagnosis or the management of critical infrastructure, require strong human supervision, because a mistake can have serious consequences. Reversible tasks, such as drafts or the exploration of leads, can accept more autonomy.

Four ways of interacting with an AI

These three dimensions yield four modes of interaction with AI, each recommended for a different kind of task.

Automation is recommended for tasks that are weakly interdependent, reversible, and codifiable, such as formatting, data extraction, or first drafts.

Human-machine collaboration suits situations of moderate dependency but high reversibility, where errors can be managed – exploratory analysis or document research, for example.

Some tasks remain the exclusive preserve of humans, at least for now. These are chiefly strategic decisions that combine high task interdependency, a large share of non-codifiable knowledge born of experience, and low reversibility of the choices made.

Air Canada's customer-service chatbot made pricing errors.
Miguel Lagoa/Shutterstock

Inverted supervision is required when the AI produces but a human must systematically validate, notably where dependency is high or reversibility low. The Air Canada case shows how damaging it is to give an AI free rein in such a context: the airline's chatbot claimed that a special fare tied to family events could be requested retroactively, which turned out to be entirely false.

Sued by the passenger who felt misled, the airline was found liable on the grounds that it was the entity responsible for the AI and its use – yet it was not supervising it. The financial impact of the ruling may seem small (refunding the passenger), but the cost in reputation and to shareholders was far from negligible.
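The mapping from the three dimensions to the four interaction modes can be sketched as a small decision rule. This is a minimal, hypothetical illustration: the thresholds, function name, and task examples are assumptions made for clarity, not part of the study.

```python
# Hypothetical sketch: mapping the three dimensions (operational dependency,
# codifiability, reversibility) onto the four interaction modes described above.
# Thresholds and examples are illustrative assumptions, not from the study.

def recommend_mode(dependency: str, codifiable: bool, reversible: bool) -> str:
    """Suggest an interaction mode for a task.

    dependency: "low", "moderate", or "high" operational dependency.
    codifiable: True if the task needs little non-codifiable knowledge.
    reversible: True if an error can be corrected quickly.
    """
    if dependency == "low" and codifiable and reversible:
        return "automation"
    if dependency == "moderate" and reversible:
        return "human-machine collaboration"
    if dependency == "high" and not codifiable and not reversible:
        return "human only"
    # High dependency or low reversibility: AI may produce,
    # but a human must systematically validate.
    return "inverted supervision"

tasks = {
    "document formatting": ("low", True, True),
    "exploratory analysis": ("moderate", True, True),
    "strategic decision": ("high", False, False),
    "client report drafting": ("high", True, False),
}
for name, dims in tasks.items():
    print(f"{name}: {recommend_mode(*dims)}")
```

Under these assumed rules, the client-report case (high dependency, low reversibility) lands in inverted supervision – the mode Deloitte's workflow evidently lacked.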

Four key skills for managers

Every value chain brings together a wide variety of tasks that are not distributed uniformly: the four automation modes are interwoven in heterogeneous ways.

The manager then becomes the architect of these hybrid value chains and must develop four key skills to steer them effectively.

  • They must master the engineering of cognitive workflows, that is, identify precisely where and how to inject AI optimally into processes.

  • They must be able to diagnose the operational interdependencies specific to each context, rather than mechanically applying external analysis grids focused solely on labor costs.

  • "Cognitive disintermediation": orchestrating the new relationships to knowledge created by AI while preserving the transmission of the tacit skills that make an organization valuable.

  • They must uphold an ethics of substitution, constantly weighing the immediate efficiency automation offers against the long-term preservation of human capital.

A technical paradox sheds light on these issues. The most advanced reasoning models paradoxically hallucinate more than their predecessors, revealing an inherent trade-off between reasoning capacity and factual reliability. This reality confirms that AI's impact on the world of work cannot be reduced to a simple list of occupations doomed to disappear.

The analytical dimensions presented here offer precisely a framework for moving beyond simplistic approaches. They cast management in a new role: that of arbiter and cognitive urban planner, able to design the architecture of human-machine interactions within the organization.

Done well, this transformation can enrich the human experience of work rather than impoverish it.

The Conversation

Caroline Gans Combe has received funding from the European Union for her research projects.

ref. Pourquoi l’IA oblige les entreprises à repenser la valeur du travail – https://theconversation.com/pourquoi-lia-oblige-les-entreprises-a-repenser-la-valeur-du-travail-268253

When AI goes haywire: the case of the skyscraper and the slide trombone

Source: The Conversation – France in French (2) – By Frédéric Prost, Associate Professor (maître de conférences) of Computer Science, INSA Lyon – Université de Lyon

AI-generated images in response to the prompt "Draw me a skyscraper and a slide trombone side by side so we can appreciate their respective sizes" (by ChatGPT on the left, by Gemini on the right). CC BY

A fairly simple experiment – asking a generative artificial intelligence to compare two objects of very different sizes – offers a way to reflect on the limits of these technologies.


Generative artificial intelligences (AIs) are now part of everyday life. They are perceived as "intelligences," but they rest fundamentally on statistics: their outputs depend on the examples they were trained on. As soon as you stray from that training domain, you can see there is not much intelligent about them. A simple request such as "Draw me a skyscraper and a slide trombone side by side so we can appreciate their respective sizes" will give you something like this (the image below was generated by Gemini):

In the AI-generated image, the skyscraper and the slide trombone are almost the same size.
Image generated by the Gemini AI in response to the prompt: "Draw me a skyscraper and a slide trombone side by side so we can appreciate their respective sizes."
Provided by the author

The example comes from Google's model, Gemini, but the era of generative AI began with the launch of ChatGPT in November 2022 – just three years ago. The technology has changed the world, and its adoption rate has no precedent: according to OpenAI, 800 million users currently turn to its AI each week for various tasks. Notably, the number of queries drops sharply during school holidays; even if precise figures are hard to come by, this shows how routine AI use has become. Roughly one student in two uses AI regularly.

AI: indispensable technology or gadget?

Three years is both a long time and a short one: long in a field where technologies evolve constantly, short in societal terms. Even if we are beginning to understand better how to use these AIs, their place in society is not yet settled, nor is their image in popular culture. We still swing between extreme positions: AIs will become smarter than humans or, conversely, they are flashy technologies of no real use.

Indeed, a new call to pause AI research has been published amid fears of an artificial superintelligence. On the other side, wonders are promised; one recent essay, for example, argues against pursuing higher education at all, on the grounds that AI has made it pointless.

Hard to step outside the training domain

Ever since generative AIs became available, I have been running a recurring experiment: asking them to draw two very different objects and seeing the result. My aim with this kind of prompt is to observe how a model behaves when a question falls outside its training domain. Typically, the prompt looks like: "Draw me a banana and an aircraft carrier side by side so we can see the size difference between the two objects." With Mistral, that prompt produces the following result:

The AI generates an image of a banana the same size as an aircraft carrier.
Screenshot of a prompt and the image generated by the Mistral AI.
Provided by the author

To date, I have never found a model that gives a sensible result. The image shown above (or at the head of this article) is perfect for understanding how this type of AI works and what its limits are. That it is an image matters: it makes tangible limitations that would be far harder to spot in a long text.

What is striking is the result's lack of credibility: even a 5-year-old can see it is nonsense. It is all the more jarring because the same AI can sustain long, complex conversations without ever giving the impression of a stupid machine. Indeed, this same type of AI can pass the bar exam, or interpret medical results – typically, spotting tumors on X-rays – with better accuracy than professionals.

Where does the error come from?

The first thing to note is that it is hard to know exactly what we are dealing with. The theoretical components of these AIs are known, but in practice a project like Gemini (and the same applies to the other models: ChatGPT, Grok, Mistral, Claude, etc.) is far more complicated than a simple LLM coupled to a diffusion model.

An LLM is an AI trained on enormous masses of text, from which it builds a statistical representation. Roughly speaking, the machine is trained to guess the word that makes the most sense, in statistical terms, as a continuation of other words (your prompt).
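The principle of "guess the statistically most likely next word" can be illustrated with a toy bigram model built from a tiny corpus. This is a deliberately crude sketch: real LLMs use neural networks over sub-word tokens and vastly more data, but the statistical idea is the same.

```python
# Toy sketch of next-word prediction: count, for each word in a tiny corpus,
# which word follows it, then predict the most frequent continuation.
# Real LLMs do this with neural networks over sub-word tokens.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# For each word, count the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation most often seen in training."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" ("the cat" appears twice, "the mat" once)
```

The model has no idea what a cat is; it only knows that, in its training data, "cat" follows "the" more often than anything else – which is exactly why it flounders on questions its data never covered.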

Diffusion models, which are used to generate images, work on a different principle, borrowed from thermodynamics: take an image (or a sound) and add random noise (the "snow" on a screen) until the image disappears, then train a neural network to invert the process by presenting it the images in the reverse order of the noise addition. This random component explains why the same prompt generates different images each time.
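The forward (noising) half of that process can be sketched in a few lines on a tiny 1-D "image". This is a minimal illustration using only the standard library; real diffusion models use carefully scheduled noise over many steps and train a network to reverse each one.

```python
# Minimal sketch of forward diffusion: progressively drown a tiny 1-D "image"
# in Gaussian noise. A denoising network would be trained on consecutive
# (noisier, less noisy) pairs from this trajectory, in reverse order.
import random

random.seed(0)  # fixed seed for reproducibility of this sketch

def add_noise(pixels, steps, sigma=0.3):
    """Return the list of images from clean (step 0) to noisiest (last step)."""
    trajectory = [list(pixels)]
    for _ in range(steps):
        pixels = [p + random.gauss(0.0, sigma) for p in pixels]
        trajectory.append(list(pixels))
    return trajectory

image = [0.0, 1.0, 1.0, 0.0]        # a crude "edge" pattern
traj = add_noise(image, steps=50)

print(traj[0])   # the clean image
print(traj[-1])  # after 50 steps: the original structure is gone
```

Because each step draws fresh random noise, rerunning the reverse (generation) process from different noise yields different images – the behavior the article describes for identical prompts.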

Another point to bear in mind is that these models are constantly evolving, which is why the same prompt will not give the same result from one day to the next. Many adjustments are introduced by hand to deal with particular cases, for instance in response to user feedback.

Like a physicist, I will simplify the problem and treat the system as a diffusion model. These models are trained on image-text pairs. We can therefore assume that the Gemini and Mistral models were trained on tens (hundreds?) of thousands of photos and images of skyscrapers (or aircraft carriers) on one side, and on a large mass of examples of slide trombones (or bananas) on the other – typically close-up photos of the trombone. It is very unlikely that the two objects appear together anywhere in the training material. So the model in fact has no idea of their relative dimensions.

No "understanding" in the models

These examples show just how thoroughly the models lack an internal representation of the world. The phrase "so we can compare their sizes" demonstrates that the machines understand nothing of what is written: the models have no internal representation of what "compare" means beyond the texts in which the word has been used. Any comparison between concepts absent from the training material will therefore be of the same kind as the illustrations shown here – less visible, but just as absurd. Consider, for example, this interaction with Gemini: "Consider this simple question: 'Is the day the United States was established in a leap year or a normal year?'"

When invoked with a CoT prefix (Chain of Thought, a recent evolution of LLMs whose purpose is to break a complex question into a sequence of simpler sub-questions), the Gemini language model answered: "The United States was established in 1776. 1776 is divisible by 4, but it is not a centennial year, so it is a leap year. Therefore, the day the United States was established was in a normal year."

The model unrolls the leap-year rule correctly – a good illustration of the CoT technique – yet draws the wrong conclusion at the very last step. These models have no logical representation of the world, only a statistical approach that constantly produces glitches of this kind, however surprising they may seem.
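For contrast, the Gregorian leap-year rule the model recited can be written out in a few lines of code, which get right what the chain of thought got wrong at its final step:

```python
# The Gregorian leap-year rule, stated explicitly: divisible by 4,
# except centennial years, which are leap years only if divisible by 400.

def is_leap(year: int) -> bool:
    if year % 400 == 0:
        return True   # e.g. 2000
    if year % 100 == 0:
        return False  # e.g. 1900
    return year % 4 == 0

print(is_leap(1776))  # True: 1776 was indeed a leap year
```

A program encodes the rule itself and applies it deterministically; the LLM only reproduces texts about the rule, which is why it can recite it flawlessly and still misapply it.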

This realization is all the more salutary given that AIs now write roughly as many of the articles published on the Internet as humans do. So don't be surprised to find yourself surprised by what you read.

The Conversation

Frédéric Prost does not work for, consult for, own shares in, or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. Quand l’IA fait n’importe quoi, le cas du gratte-ciel et du trombone à coulisse – https://theconversation.com/quand-lia-fait-nimporte-quoi-le-cas-du-gratte-ciel-et-du-trombone-a-coulisse-268033