Source: The Conversation – UK – By Dawn Llewellyn, Associate professor of Religion and Gender, Theology and Religious Studies, University of Chester
What happens when the women immortalised in old master paintings step out of their gilded frames and into the chaos of modern domestic life? That’s the question artist Sarah Lightman tackles, with wit, irreverence and insight, in her exhibition Biblical Women Ageing Disgracefully, now on at Chester Visual Arts, Grosvenor Shopping Centre.
In works from her Biblical Domestic (2021–2024) and Menstrual Hystery (2024) series, Lightman trades halos for housework, and heavenly glory for the cluttered reality of her own everyday life. Her saints and heroines aren’t meditating in divine serenity – they’re battling menopause, messy kitchens and midlife malaise.
With humour and intimacy, Lightman probes the distance between the idealised women of religious art and the ageing bodies we’re taught to hide. Her characters, drawn from both the canon of western Christian art and the sacred Jewish texts of her upbringing, are lovingly reimagined through a feminist lens.
What if Mary hated soft play as much as the rest of us? What if Eve was just trying to get through another basket of laundry? What if biblical women aged in real time?
With bold colours, absurdist touches and deep empathy, Biblical Women Ageing Disgracefully reframes these archetypes for today – and starts fresh conversations about visibility, care and womanhood.
Old masters, new messes
In Fridge Frustrations (2022), Caravaggio’s Judith Beheading Holofernes (1599) becomes a scene of domestic dread. Judith still holds Holofernes’ severed head – but now her crisis is storage, not salvation:
Judith can’t find anywhere in the fridge for her organic and fresh cut of Holofernes.
Lightman retains the dramatic composition of the original but shifts its meaning entirely. Her watercolour medium softens the baroque oil intensity, introducing levity without losing emotional depth.
In another work, which reimagines the Annunciation, Mary’s serene acceptance is swapped for something far more visceral: she sits beside an exam table in the middle of a heavy bleed, not in graceful surrender but in bodily discomfort. Gabriel is gone, replaced by a gynaecologist in latex gloves. The walls? Tiled not with gold leaf but with packets of Always. This is no divine encounter – just hot flushes, greasy hair and hormonal chaos. No spiritual serenity in sight.
Instead of youthful grace, Lightman gives us perimenopausal truth: gritty, awkward, real.
Not a rejection, but a rewriting
Lightman’s work is unabashedly feminist and unapologetically funny – but it’s also rooted in reverence. Her reinterpretations of women from Hebrew scripture honour the complexity of these figures and draw from the feminist Jewish tradition of midrash: creative interpretation that fills in the biblical silences.
Lightman isn’t discarding these sacred stories: she’s inhabiting them. She paints the parts we were never told, the thoughts and struggles left out of the male-dominated canon. Her canvases ask: what if we didn’t accept the gaps in these women’s lives? What if we imagined them into our own?
Context matters – and Biblical Women Ageing Disgracefully is exhibited not in a white-walled gallery but in Chester’s Grosvenor Precinct, having previously shown at Chester’s cultural centre Storyhouse. The location is deliberate. These Madonnas and menopausal saints appear exactly where they live now: among shopping bags, toddler tantrums and the quiet sighs of women holding it all together.
Meeting Eve, Mary, Bathsheba, Susanna and Lot’s wife in a shopping centre creates a surreal and poignant dissonance. It collapses the sacred and the ordinary, and invites viewers to see their own lives reflected in these ancient figures.
Messy, mortal and magnificent
It’s a risk, of course, putting menopause, motherhood, grief, housework and rape culture centre stage. There’s a version of this exhibition that could have been grim. But Lightman’s palette is anything but dour. Her watercolours are vibrant and playful, her titles sharp with satire. These women aren’t tragic martyrs; they’re exhausted, yes, but also knowing, cheeky and in on the joke.
Lightman treats art history not as a fixed monument, but as a toolkit to be deconstructed and rebuilt. She gives her saints their bodies back – saggy, sweaty, miraculous – and their agency too.
What makes Biblical Women Ageing Disgracefully so powerful is its embrace of contradiction. It is sacred and silly, sincere and subversive, heartbreaking and hilarious. It is, in essence, a feminist midrash in watercolour: retelling holy stories through the grit and glory of contemporary womanhood, and holding them close even as it pushes them open.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Martin L. Olsson, Medical Director of the Nordic Reference Laboratory for Blood Group Genomics, Region Skåne & Professor of Transfusion Medicine, Head of the Division, Lund University
In a routine blood test that turned extraordinary, French scientists have identified the world’s newest and rarest blood group. The sole known carrier is a woman from Guadeloupe whose blood is so unique that doctors couldn’t find a single compatible donor.
The discovery of the 48th recognised blood group, called “Gwada-negative”, began when the woman’s blood plasma reacted against every potential donor sample tested, including those from her own siblings. Consequently, it was impossible to find a suitable blood donor for her.
Most people know their blood type – A, B, AB or O – along with whether they are Rh-positive or negative. But these familiar categories (those letters plus “positive” or “negative”) represent just two of several dozen blood group systems that determine compatibility for transfusions. Each system reflects subtle but crucial differences in the proteins and sugars coating our red blood cells.
To solve the mystery of the Guadeloupean woman’s incompatible blood, scientists turned to cutting-edge genetic analysis. Using whole exome sequencing – a technique that examines all 20,000-plus human genes – they discovered a mutation in a gene called PIGZ.
This gene produces an enzyme responsible for adding a specific sugar to an important molecule on cell membranes. The missing sugar changes the structure of a molecule on the surface of red blood cells. This change creates a new antigen – a key feature that defines a blood group – resulting in an entirely new classification: Gwada-positive (having the antigen) or -negative (lacking it).
Using gene editing technology, the team confirmed their discovery by recreating the mutation in the lab. Red blood cells from all blood donors tested are Gwada-positive; the Guadeloupean patient is the only known Gwada-negative person on the planet.
The implications of the discovery extend beyond blood transfusions. The patient suffers from mild intellectual disability, and tragically, she lost two babies at birth – outcomes that may be connected to her rare genetic mutation.
The enzyme produced by the PIGZ gene operates at the final stage of building a complex molecule called GPI (glycosylphosphatidylinositol). Previous research has shown that people with defects in other enzymes needed for GPI assembly can experience neurological problems ranging from developmental delays to seizures. Stillbirths are also common among women with these inherited disorders.
Although the Caribbean patient is the only person in the world so far with this rare blood type, neurological conditions including developmental delay, intellectual disability and seizures have been noted in other people with defects in enzymes needed earlier in the GPI assembly line.
The Gwada discovery highlights both the marvels and challenges of human genetic diversity. Blood groups evolved partly as protection against infectious diseases (many bacteria, viruses and parasites use blood group molecules as entry points into cells). This means your blood type can influence your susceptibility to certain diseases.
But extreme rarity creates medical dilemmas. The French researchers acknowledge they cannot predict what would happen if Gwada-incompatible blood were transfused into the Guadeloupean woman. Even if other Gwada-negative people exist, they would be extremely difficult to locate. It is also unclear whether they could become blood donors.
This reality points towards a futuristic solution: lab-grown blood cells. Scientists are already working on growing red blood cells from stem cells that could be genetically modified to match ultra-rare blood types. In the case of Gwada, researchers could artificially create Gwada-negative red blood cells by mutating the PIGZ gene.
A growing field
Gwada joins 47 other blood group systems recognised by the International Society of Blood Transfusion. Like most of these blood group systems, it was discovered in a hospital lab where technicians were trying to find compatible blood for a patient.
The name reflects the case’s Caribbean roots: Gwada is slang for someone from Guadeloupe, giving this blood group both scientific relevance and cultural resonance.
As genetic sequencing becomes more advanced and widely used, researchers expect to uncover more rare blood types. Each discovery expands our understanding of human variation and raises fresh challenges for transfusion and other types of personalised medicine.
Martin L Olsson is a Wallenberg Clinical Scholar who receives research funding from Knut and Alice Wallenberg Foundation (grant no. 2020.0234). He holds other major grants from the Swedish Research Council (grant no. 2024-03772), the Novo Nordisk Foundation (grant no. NNF22OC0077684) and the Swedish government funds to university healthcare for clinical research (ALF grant no. 2022.0287). He is also a member of the International Society of Blood Transfusion (ISBT)’s Working Party on Red Cell Immunogenetics and Blood Group Terminology.
Jill Storry receives funding from the Swedish Research Council (grant no. 2024-03772). She is affiliated with, and the current senior Vice-President, of the International Society of Blood Transfusion, as well as a member of the society’s Working Party on Red Cell Immunogenetics and Blood Group Terminology.
Imagine you’re standing at a bottle depot with an empty pop can. You can get a dime back, or you can take a chance at winning $1,000. Which would you choose?
To increase recycling rates, many countries have adopted deposit refund systems, where you pay a small deposit, say 10 cents, when you buy an eligible beverage container and get this deposit back when you return it to a local depot.
Through this system, approximately 80 per cent of containers in British Columbia and almost 85 per cent of containers in Alberta are recovered. Still, that leaves millions of containers as litter, in landfills or incinerated every year, contributing to pollution and greenhouse gas emissions.
With Canada’s goal of zero plastic waste by 2030 drawing near, a new approach to recycling beverage containers could make a difference.
Psychology research shows that people tend to prefer a small chance to win a large reward over a guaranteed small reward. For example, people would more often prefer a small chance to win $5,000 over receiving a $5 reward.
Applying this insight to recycling, we turned the small guaranteed refund of $0.10 in B.C. and Alberta into a 0.01 per cent chance of getting $1,000. We set up recycling tables at food courts in Vancouver and at a RibFest event in Spruce Grove, Alta.
When people brought their beverage containers to us to recycle, we presented them with five options for a refund. They could get their guaranteed 10 cents, or a chance to win a larger amount of money, the highest option being $1,000.
We found that people preferred the chance to win $1,000 over the other options, and they felt the happiest after making this choice.
To see if the lottery option actually increased recycling, we conducted an experiment where we told people ahead of time that they would get their guaranteed 10-cent refund or that they had a chance to win $1,000 for each bottle they brought to our study.
We found that people brought 47 per cent more beverage containers when we offered them a chance to win $1,000 than when we offered them the guaranteed refund.
Overall, our findings suggest that offering a chance to win a larger amount of money can meaningfully boost beverage container recycling. The excitement of a potential big win can motivate people who may not be enticed by the typical small, guaranteed refund.
Choice matters
A one-size-fits-all approach won’t work. People recycle for different reasons. They also have different risk tolerances, and some may rely on the guaranteed refund for additional income. To capture diverse preferences and needs, it’s vital that the lottery-style refund is offered in addition to the guaranteed refund, not instead of it.
It would also be beneficial to include smaller, more frequent prizes alongside the grand prize, so people win relatively frequently to keep motivations high.
Norway takes this approach with its recycling lottery, and 39 per cent of people there choose the lottery option when they recycle. In 2023, the lottery achieved a 92.3 per cent container return rate.
Importantly, our research does not capture people who collect large bags of containers to return to the depot. It’s possible that this demographic may have different preferences for the refund, and future research should examine this group in particular.
Green lottery for good
The lottery-style refund has the same expected payout as the 10-cent refund per bottle. This means that, on average, people will take home the same amount of money as with the guaranteed option, without incurring additional losses or gains. This benevolent factor distinguishes the lottery-style refund from other types of lotteries or gambling that often profit off the players.
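That equivalence is easy to check with a quick expected-value calculation. A minimal sketch, using the figures reported above (a 0.01 per cent chance of $1,000 versus a guaranteed 10 cents):

```python
# Compare the expected payout of the lottery-style refund
# with the guaranteed deposit refund.
p_win = 0.0001       # 0.01 per cent chance of winning
prize = 1000.00      # prize in dollars
guaranteed = 0.10    # standard deposit refund in dollars

expected_lottery = p_win * prize  # probability multiplied by prize
print(expected_lottery, guaranteed)  # both work out to $0.10 per container
```

On average, a depot paying out through the lottery hands back the same amount per container as one paying the flat refund; only the variance differs.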
Since the only way to enter this lottery-style refund is to recycle beverage containers, it’s impossible to directly re-enter any winnings into the lottery. There are also no near-misses, losses disguised as wins, exciting lights and sounds or other sensory stimulation often associated with gambling.
Some might be apprehensive about potential gambling dangers of creating a lottery system. However, there has not been a single case linking the recycling lottery to gambling addiction. There is also no evidence that purchases of beverage containers would increase as a result of the lottery-style refund.
Our study’s transparent design, with clear odds, ensures fairness, unlike casino games built to take players’ cash. For this approach to be successful, deposit refund systems must maintain this transparency in lottery-style program operations and payouts.
If done right, offering a chance to win a higher amount of money for recycling can meaningfully increase recycling rates, contribute to a circular economy and allow people to choose the refund option that works best for them.
Jiaying Zhao receives funding from the Social Sciences and Humanities Research Council of Canada.
Jade Radke receives funding from the Social Sciences and Humanities Research Council of Canada Doctoral Fellowship and the University of British Columbia Indigenous Graduate Fellowship.
Source: The Conversation – Canada – By Chris Houser, Professor in Department of Earth and Environmental Science, and Dean of Science, University of Waterloo
Between 2010 and 2017, there were approximately 50 drowning fatalities each year associated with rough surf and strong currents in the Great Lakes.
Rip currents — commonly referred to as rips or colloquially as rip tides — are driven by the breaking of waves. These currents extend away from the shoreline and can flow at speeds easily capable of carrying swimmers far from the beach.
Structural rips are common throughout the Great Lakes (Grand Haven on the eastern shore of Lake Michigan, for example) and develop when groynes, jetties and rock structures deflect the alongshore current offshore, beyond the breaking waves. Depending on the waves and the structure, a shadow rip can also develop on the other side of the groyne or jetty.
Rips can also develop anywhere that variations in the bathymetry (the topography of the sand underwater) — such as nearshore bars — cause wave-breaking to vary along the beach, which makes the water thrown landward by the breaking waves return offshore as a concentrated flow. These are known as channel or bathymetric rips, and they can form along sand beaches in the Great Lakes.
While it can be difficult to spot a channel rip, they can be identified by an area of relatively calm water between breaking waves, a patch of darker water or the offshore flow of water, sediment and debris.
A person caught in a rip is transported away from shore into deeper water, but they are not pulled under the water. If they are a weak swimmer or try to fight the current, they may panic and fail to find a way out of the rip and back to shore before submerging.
Rip current hazards
Most rip fatalities occur on unsupervised beaches or on supervised beaches when and where lifeguards are not present. While many popular beaches near large urban centres have lifeguards, many beaches don’t. Along just the east coast of Lake Huron, there are more than 40 public beaches, including Goderich, Bayfield, Southampton and Sauble Beach, but only two have lifeguard programs (Sarnia and Grand Bend).
Recent findings from a popular beach on Lake Huron suggest that those with less experience at the beach tend to make decisions of convenience rather than based on beach safety. Residents with greater knowledge of the local hazards tend to avoid swimming near where the rip can develop.
But even when people are aware of rip currents and other beach hazards, they may not make the right decisions. Despite the presence of warnings, people’s actions are greatly influenced by the behaviour of others, peer pressure and group-think. The social cost of not entering the water with the group may appear to outweigh the risk posed by entering the water.
Rip channel and current on Lake Huron. (Chris Houser)
The behaviour of beach users is affected by confirmation bias, a cognitive shortcut where a person selectively pays attention to evidence confirming their pre-existing beliefs and ignores evidence to the contrary. When someone enters the water and does not encounter strong waves or currents, they’re more likely to engage in risky behaviour on their next visit to that beach or a similar beach.
In the United States, the National Oceanic and Atmospheric Administration runs programs designed to educate beach users about surf and rip hazards. But Canada hasn’t implemented a national beach safety strategy.
Education about rips and dangerous surf falls on the shoulders of advocates, many of whom have been impacted by a drowning in the Great Lakes. The Great Lakes Surf Rescue Project has been tracking and educating school and community groups about rip currents and rough surf in the Great Lakes since 2010.
Several new advocacy groups have started in recent years, including Kincardine Beach Safety on Lake Huron and the Rip Current Information Project on Lake Erie. Given that there is limited public interest in surf-related drownings and limited media coverage, these advocacy groups are helping to increase awareness of rip currents and rough surf across the Great Lakes.
To ensure a safe trip to the beach, beachgoers should seek out more information about rip currents and other surf hazards in the Great Lakes.
Who said that an organization’s main resource and true competitive advantage lies in its employees, their talent or their motivation? After all, maybe your real goal is to empty out your offices, permanently discourage your staff and methodically sabotage your human capital.
If that’s the case, research in performance management offers everything you need.
Originally rooted in early 20th-century rationalization methods, performance management has become a cornerstone of modern management. It has evolved to adapt to contemporary HR needs, focusing more on employee development, engagement and strategic alignment. In theory, it should help guide team efforts, clarify expectations and support individual development. But if poorly implemented, it can become a powerful tool to demotivate, exhaust and push out your most valuable employees.
Here’s how to scare off your best talent. Although the following guidelines are meant to be taken tongue-in-cheek, they remain active in the daily work of some managers.
Management by ‘vague’ objectives
Start by setting vague, unrealistic or contradictory goals. Above all, avoid giving goals meaning, linking them to a clear strategy or backing them with appropriate resources. In short, embrace the “real” SMART goals: stressful, arbitrary, ambiguous, repetitive, and totally disconnected from the field!
According to research in organizational psychology, this approach guarantees anxiety, confusion and disengagement among your teams, significantly increasing their intention to leave the company.
Silence is golden
Avoid all forms of dialogue and communication. Never give feedback. And if you absolutely must, do it rarely and irregularly, make sure it’s disconnected from actual work, and preferably in the form of personal criticism. The absence of regular, task-focused and actionable feedback leaves employees in uncertainty, catches them off-guard during evaluations and gradually undermines their engagement.
How your employees interpret your intentions and feedback matters most. Be careful though: if feedback is perceived as constructive, it may actually boost motivation and learning engagement. But if the same feedback is seen as driven by a manager’s personal agenda (an ego-based attribution), it backfires, leading to demotivation, withdrawal and exit.
Performance evaluation ‘trials’
Hold annual performance review meetings in which you focus solely on mistakes and completely ignore successes or invisible efforts. Be rigid, critical and concentrate only on weaknesses. Make sure to take full credit when the team succeeds; after all, without you, nothing would have been possible. On the other hand, when results fall short, don’t hesitate to highlight errors, assign individual blame and remind them that “you did warn them!”
This kind of performance evaluation, better described as a punitive trial, ensures deep demotivation and accelerates team turnover.
Internal competition, maxed out
Promote a culture of rivalry among colleagues: circulate internal rankings regularly, reward only the top performers, systematically eliminate the lowest ranked without even thinking of helping them improve, devalue the importance of cooperation and let internal competition do the rest. After all, these are the core features of the “famous” method popularized by the late Jack Welch at General Electric.
If you notice a short-term boost of motivation, don’t worry. The long-term effects of Welch’s “vitality curve” will be far more harmful than beneficial. Fierce internal competition is a great tool for destroying trust among teammates and creating a persistently toxic atmosphere, leading to an increase in the number of voluntary departures.
Ignore wellbeing and do not listen, no matter what
We’ve already established that feedback and dialogue should be avoided. But if, by misfortune, they do occur, make sure not to listen to complaints or warning signs related to stress or exhaustion. Offer no support or assistance, and of course, completely ignore the right to disconnect.
In addition, always favour highly variable and poorly designed performance bonuses: this will heighten income instability and kill off whatever engagement remains.
Want to take your talent-repelling skills even further? Draw inspiration from what research identifies as practices and experiences belonging to the three major forms of workplace violence. These include micromanagement, constant pressure, lack of recognition, social isolation and others that generate long-term suffering. Though often invisible, their recurrence gradually wears employees down mentally, then physically, until they finally break.
Obviously, these tips are meant to be taken ironically.
Yet, unfortunately, these toxic practices are all too real in the daily routines of certain managers. If the goal is truly to retain talent and ensure lasting business success, it is essential to centre performance management practices around meaning, fairness and the genuine development of human potential.
George Kassar does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research institution.
A new review of antidepressant withdrawal effects – written by academics, many of whom have close ties to drug manufacturers – risks underestimating the potential harms to long-term antidepressant users by focusing on short-term, industry-funded studies.
There is growing recognition that stopping antidepressants – especially after long-term use – can cause severe and sometimes debilitating withdrawal symptoms, and it is now acknowledged by the UK government as a public health issue.
One of the main reasons this issue took decades to recognise after the release of modern antidepressants onto the market is that medical guidelines, such as those produced by NICE (England’s National Institute for Health and Care Excellence), had for many years declared withdrawal effects to be “brief and mild”.
This description was based on studies run by drug companies, where people had only taken the medication for eight to 12 weeks. As a result, when patients later showed up with severe, long-lasting symptoms, many doctors didn’t take them seriously because these experiences contradicted what the guidelines led them to expect.
Our recent research helps explain this mismatch. We found a clear link between how long someone takes antidepressants and how likely they are to experience withdrawal symptoms – and how severe these symptoms are.
We surveyed NHS patients and found that, compared with those who had taken the drugs for six months or less, people who had used antidepressants for more than two years were ten times more likely to have withdrawal effects, five times more likely to have severe effects and 18 times more likely to have long-lasting ones.
For patients who used antidepressants for less than six months, withdrawal symptoms were mostly mild and brief. Three-quarters reported no or mild symptoms, most of which lasted less than four weeks.
Only one in four of these patients was unable to stop when they wanted to. However, for long-term users (more than two years), two-thirds reported moderate or severe withdrawal effects, with one-quarter reporting severe withdrawal effects. Almost one-third of long-term users reported symptoms that lasted for more than three months. Four-fifths of these patients were unable to stop their antidepressants despite trying.
About 2 million people on antidepressants in England have been taking them for over five years, according to a BBC investigation. And in the US at least 25 million people have taken antidepressants for more than five years. What happens to people in eight-to-12-week studies is a far cry from what happens to millions of people when they stop.
Studying what happens to people after just eight to 12 weeks on antidepressants is like testing car safety by crashing a vehicle into a wall at 5km/h – ignoring the fact that real drivers are out on the roads doing 60km/h.
History repeating itself?
Against this backdrop, a review has just been published in JAMA Psychiatry. Several of the senior authors declare payments from drug companies. In what looks like history repeating itself, the review draws on short-term trials – many funded by the pharmaceutical industry – that were similar to those used to shape early treatment guidelines. The authors conclude that antidepressants do not cause significant withdrawal effects.
Their main analysis is based on eleven trials that compared withdrawal symptoms in people who had stopped antidepressants with those who had continued them or stopped taking a placebo. Six of these trials had people on antidepressants for eight weeks, four for 12 weeks and just one for 26 weeks.
They reported a slightly higher number of withdrawal symptoms in people who had stopped antidepressants, which they say does not constitute a “clinically significant” withdrawal syndrome. They also suggest the symptoms could be explained by the “nocebo effect” – where negative expectations cause people to feel worse.
In our view, the results are likely to greatly underestimate the risk of withdrawal for the millions of people on these drugs for years. The review found no relationship between the duration of use of antidepressants and withdrawal symptoms, but there were too few long-term studies to test this association properly.
In our view, the review probably underestimates short-term withdrawal effects too, because it assumes that the withdrawal-like symptoms people report when stopping a placebo or continuing an antidepressant cancel out the withdrawal effects of the antidepressants themselves. But this is not a valid assumption.
We know that antidepressant withdrawal effects overlap with side-effects and with everyday symptoms, but this does not mean they are the same thing. People stopping a placebo report symptoms such as dizziness and headache, because these are common occurrences. However, as was shown in another recent review, symptoms following discontinuation of a placebo tend to be milder than those experienced when stopping antidepressants, which can be intense enough to require emergency care.
So deducting the rate of symptoms after stopping a placebo or continuing an antidepressant from antidepressant withdrawal symptoms is likely to underestimate the true extent of withdrawal.
The review also doesn’t include several well-designed drug company studies that found high rates of withdrawal symptoms. For example, an American study found that more than 60% of people who stopped antidepressants (after eleven months) experienced withdrawal symptoms.
The authors suggest that depression after stopping antidepressants is probably a return of the original condition, not withdrawal symptoms, because similar rates of depression were seen in people who stopped taking a placebo. But this conclusion is based on limited and unreliable data (that is, relying on participants in studies to report such events without prompting, rather than assessing them systematically) from just five studies.
We hope that uncritical reporting of a review based on the sort of short-term studies that led to under-recognition of withdrawal effects in the first place does not disrupt the growing acceptance of the problem, or slow efforts by the health system to help the potentially millions of people who may be severely affected.
The authors and publisher of the new review have been approached for comment.
Mark Horowitz is the author of the Maudsley Deprescribing Guidelines which outlines how to safely stop antidepressants, benzodiazepines, gabapentinoids and z-drugs, for which he receives royalties. He is co-applicant on the RELEASE and RELEASE+ trials in Australia funded by the NHMRC and MRFF examining hyperbolic tapering of antidepressants. He is co-founder and consultant to Outro Health, a digital clinic which helps people to safely stop no longer needed antidepressants in the US. He is a member of the Critical Psychiatry Network, an informal group of psychiatrists.
Joanna Moncrieff was a co-applicant on a study of antidepressant discontinuation funded by the UK’s National Institute for Health Research. She is co-applicant on the RELEASE and RELEASE+ trials in Australia funded by the NHMRC and MRFF examining hyperbolic tapering of antidepressants. She receives modest royalties for books about psychiatric drugs. She is co-chair person of the Critical Psychiatry Network, an informal group of psychiatrists.
Source: The Conversation – in French – By Laurence Grondin-Robillard, Associate Professor at the École des médias and doctoral candidate in communication, Université du Québec à Montréal (UQAM)
By following in X's wake, Meta may have undermined the reliability of information on its platforms. (Shutterstock)
Meta wanted to “restore free speech” on its platforms.
Community notes are a so-called “participatory” moderation system that lets users add annotations to correct or contextualise posts. From one social media platform to another, the conditions for becoming a contributor vary little: being of legal age, having been active on the platform for some time, and never having broken its rules.
With little fanfare, even YouTube and TikTok are now trying this type of moderation in the United States. Unveiled as an innovative response to the challenges posed by the circulation of fake news, the model relies on empowering users to arbitrate the quality of information. Yet the trend reveals a broader movement: the gradual disengagement of social media platforms from fact-checking and journalism.
So what do we really know about community notes?
As an associate professor and doctoral candidate in communication at the Université du Québec à Montréal, I study the transformations that are redefining our relationship to technology and information, while reconfiguring how social media platforms are governed.
Community moderation: what the research says
After Elon Musk's takeover of Twitter that same year and the mass layoffs that followed, notably in the moderation teams, the system became central to the platform's decentralised moderation strategy.
The scientific literature on the question is limited, not only because the model is recent but also because platform X is its sole object of study. It nevertheless highlights some interesting aspects of this type of moderation.
First, community notes appear to help slow the circulation of misinformation, reducing reshares by up to 62%. They also reportedly increase by 103.4% the odds that users delete the flagged content, while reducing its overall engagement.
However, it is important to distinguish misinformation from disinformation. Studies focus on the former, because the malicious intent that defines disinformation is difficult to demonstrate methodologically. Disinformation is even absent from the categories X imposes on note writers, who must classify content as misinformed, potentially misleading or not misleading. This narrow framing helps render invisible a phenomenon that is nonetheless central to the dynamics of information manipulation.
Fact-checkers and journalists provide rigour, speed and reliability, while notes, slower to spread, enjoy a reserve of trust on a platform where journalism and news media are often contested. Their complementary roles are therefore self-evident, contrary to the ideas promoted by Musk and Zuckerberg.
The illusion of a community in the service of profitability
The benefits the web giants derive from adopting this model are far from negligible: not only do they rely on users themselves to counter “disinformation”, they also stimulate those users' activity and engagement on the platform at the same time.
And the more time users spend there, the more their attention can be monetised for advertisers, and so the more profitable it becomes for these platforms. The model also delivers substantial savings by reducing the need for moderation staff and limiting investment in fact-checking programmes.
Despite its apparent openness, this system, as deployed on X, is not truly “community-driven” in the way a project like Wikipedia is. It rests neither on the transparency of contributions nor on a collaborative process with a shared goal.
In reality, it is more of an algorithmic sorting system: a selective filter based on visibility criteria optimised to preserve a perceived balance between divergent opinions. Even when notes are factual, they are only made visible if they pass a series of steps, such as the so-called “bridging algorithm”, which displays a note to all users only if it has been approved by users with opposing viewpoints.
Moreover, there is no measure of the accuracy or quality of the notes. Their visibility depends solely on being perceived as “helpful” by users from varied ideological currents. And the fact that a consensus forms around a note does not necessarily mean it reflects a fact.
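The “bridging” requirement can be sketched in a few lines of Python. This is a deliberate caricature – X's published Community Notes system is more elaborate, using matrix factorisation over rating data – and the two-camp labels and 0.7 threshold below are invented for illustration:

```python
# Toy sketch of a "bridging" rule: a note becomes visible only if it is
# rated helpful by raters on BOTH sides of an (assumed) opinion axis.
# The threshold and the binary A/B split are illustrative simplifications.

def note_is_visible(ratings, threshold=0.7):
    """ratings: list of (rater_side, helpful) pairs, rater_side in {'A', 'B'}."""
    for side in ("A", "B"):
        side_votes = [helpful for s, helpful in ratings if s == side]
        if not side_votes:
            return False  # no cross-viewpoint agreement is possible
        if sum(side_votes) / len(side_votes) < threshold:
            return False
    return True

# A note praised only by one camp stays hidden, however many votes it gets:
one_sided = [("A", True)] * 50 + [("B", False)] * 10
cross_camp = [("A", True)] * 8 + [("B", True)] * 7
print(note_is_visible(one_sided))   # False
print(note_is_visible(cross_camp))  # True
```

Note what the rule optimises for: cross-camp agreement, not accuracy – which is exactly the gap described above between a “helpful” note and a true one.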
Quality information is not the priority
The “free speech” rhetoric advanced by those who control the distribution channels of social media is, at best, a misreading and, at worst, hypocrisy. Through opaque algorithms, the web giants decide the visibility and reach of community notes.
Given this reality, the fight against “disinformation” is a noble but unequal battle against an elusive enemy, fuelled by the relentless mechanics of algorithms and the ideology of a well-entrenched broligarchy.
As the American professors and economists Hunt Allcott and Matthew Gentzkow noted as early as 2017, fake news thrives because it is cheaper to produce than real news, more viral and more gratifying for certain audiences. As long as platforms keep privileging the circulation of content over its quality, the battle against “disinformation” will remain deeply unbalanced, whatever the strategy.
Rethinking free speech in the age of algorithms
If the export of community notes beyond American borders is confirmed, it will represent progress only for the owners of these platforms. The model presents itself as open, but it rests on a controlled delegation, bounded by algorithms that still filter what deserves to be seen.
It is not the community that decides: the system chooses what the community is supposed to think.
By ceding part of the work of journalism to these opaque mechanisms, we have weakened what guarantees the quality of information: accuracy, rigour, impartiality and so on. Far from a democratisation, what is under way is a depoliticisation of moderation, in which everything, even facts, becomes a question of profitability.
Elon Musk declares: “You are the media now.” The question to ask now is this: do we really have a free voice, or are we merely formatted variables in an algorithm?
Laurence Grondin-Robillard does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.
Nature isn’t confined to officially protected areas. A lot can be done to conserve biodiversity in other places too. The United Nations Convention on Biological Diversity agreed in 2018 on the idea of “other effective area-based conservation measures” (OECMs). These are geographically defined areas which can be managed in ways that protect biodiversity, ecosystem functions and “where applicable, cultural, spiritual, socio-economic, and other locally relevant values.” Geographer Ndidzulafhi Innocent Sinthumule has explored the potential for sacred natural sites in South Africa to contribute to nature conservation.
Why does South Africa need to protect more land?
In South Africa, although protected areas play a vital role in biodiversity conservation, they are not sufficient. A lot of biodiversity occurs outside formal protected areas. Protected areas make up only 9.2% (or 11,280,684 hectares) of the country’s total land area. The National Protected Area Expansion Strategy, which was last updated in 2016, aims to increase the percentage of protected areas in the country to 16%.
My view is that the target can only be achieved by recognising other areas that have high conservation value, such as sacred natural sites. These are places with special spiritual and cultural value.
Recognising sacred natural sites as “other effective area-based conservation measures” entails officially declaring them as protected areas.
There are also other sites with conservation potential. These could be on public, private or community land. This means they are governed by a variety of rights holders. Apart from sacred natural sites, other examples include military land and waters, and locally managed marine areas.
Whatever their other, primary purpose, they can also deliver conservation of biodiversity.
Where are South Africa’s sacred natural sites?
There are areas in South Africa known as sacred sites because of their cultural, spiritual, or historical value, often linked to ancestral beings, religion and traditional beliefs.
They are often places of reverence, where rituals, ceremonies, burials, or pilgrimage are conducted, and where the custodians of the areas feel a deep connection to something larger than themselves.
How do the sites fit in with protecting diversity?
The study aimed to assess opinions and perceptions about the opportunities and challenges of sacred natural sites in contributing to global conservation goals.
I interviewed academics involved in research on Indigenous knowledge, people involved in discussions about conservation, and custodians of sacred natural sites – 39 people in all.
Study participants identified a number of opportunities. They said:
Sacred natural sites frequently harbour high levels of biodiversity, including rare and endemic species, because they have been protected for a long time through cultural practices. Giving them more legal protection and funding, and integrating them into national conservation strategies, would protect hotspots of biological diversity.
Integrating traditional ecological knowledge and practices into mainstream conservation efforts would promote more inclusive and culturally sensitive approaches to environmental management.
It would expand the total land area under conservation.
It might create conservation corridors that would facilitate movement of animals and ecological processes between isolated habitat patches.
Sacred natural sites could serve as carbon sinks – long-term stores of carbon. Sacred forests have old, tall trees and a well-developed canopy, the layer of foliage that forms the crown of a forest.
They can serve as tourist destinations where visitors will learn about biodiversity and about religious and cultural practices.
The study participants also identified challenges.
A big one was access rights and harmonising cultural and formal conservation practices. Access to sacred natural sites and the use of resources by the public is usually not permitted.
There was a fear that external intervention by government, nongovernmental organisations and conservationists might sideline local people and lead to the loss of their sacred sites.
External interventions might promote scientific knowledge at the expense of the traditional ecological knowledge that has protected sacred natural sites for millennia.
Respondents were concerned about elites capturing all the benefits and not sharing them equitably.
A methodological challenge might be how to study conservation effectiveness while respecting cultural sensitivities.
How would a sacred natural site be officially recognised?
At the moment, sacred natural sites are not designated or recognised as an “other conservation measure”. Currently, there are no standard procedures, criteria, or guidelines available for declaring them as such in South Africa. These would have to be determined by the national Department of Forestry, Fisheries and the Environment.
The process should begin with identifying all sacred natural sites to understand where they are and what contribution they could make towards biodiversity conservation. The department should do this in consultation with local communities and traditional leaders who understand the local environment. It should be in line with the international principle of Free, Prior, and Informed Consent. This acknowledges the right of Indigenous peoples to give or withhold their consent for any action that would affect their lands.
This will set up sacred natural sites as a conservation model that contributes to both biodiversity protection and cultural heritage preservation. The involvement of communities will ensure that sacred natural sites are a sustainable solution.
All the respondents in my study said that designating a site as an “other conservation measure” should give control or legal protection, ownership and stewardship roles to local communities who have protected the area for ages.
Ndidzulafhi Innocent Sinthumule does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The UK Met Office has given storms forenames for the past decade as part of an effort to raise public awareness of extreme weather before it strikes. Heatwaves are becoming increasingly frequent and severe due to greenhouse gas emissions, predominantly from burning fossil fuel, which are raising global temperatures by trapping more heat in Earth’s atmosphere.
These extreme heat events aren’t named in the UK. Should that change?
Effective communication strategies are necessary to make people aware of upcoming heatwaves and help them understand how to reduce their risk. Spain started naming them in 2023, with Heatwave Zoe. Italy has a longstanding but unofficial tradition of naming heatwaves according to mythology and classical history.
The results include Lucifero (Lucifer, another name for the devil) and Cerbero (Cerberus, the three-headed dog that guards the underworld in Greek myth), popularised by the private weather service il Meteo (ilmeteo.it).
Severe heatwaves in summer 2023 and 2024 prompted a campaign to name heatwaves after fossil fuel companies, to increase awareness of their role in climate change.
However, there is limited evidence on whether this would encourage people to take proper safety precautions during heatwaves – such as staying in the shade between 11am and 3pm, closing the curtains of sun-facing windows during the day, carrying enough water when travelling, and looking out for those who may struggle to keep cool and hydrated, such as elderly people living alone.
To explore how effective naming heatwaves might be, my research team conducted online experiments with 2,152 people in England and 1,981 people in Italy.
Lucifer is scarier than Arnold
Participants were asked to imagine that next summer, they were to receive a warning that a heatwave was about to affect their country. Participants were randomly assigned information about an event that was either unnamed, given a threatening name (Lucifer/Lucifero), or a more neutral name (Arnold).
Then they were asked how much of a risk they thought the event would pose and what actions they would anticipate taking. English participants were also asked about their thoughts on storm-naming practices in the UK and whether they felt this should be extended to heatwaves.
We found that naming a heatwave had no effect on the intention of people to take protective measures against it in either country. In Italy, there was no difference between how people perceived the unnamed heatwave and Lucifero, but Arnold was judged to be slightly less concerning and severe.
This suggests that, while naming a heatwave does not increase concern, departing from Italy’s established convention of using threatening names does reduce it slightly.
Our participants in England rated Lucifer as more severe and concerning than an unnamed heatwave, though not by much. When asked about their thoughts on naming weather events more broadly, English participants tended to agree that naming storms made people more likely to engage with weather warnings, but only a minority were in favour of naming heatwaves. Overall we found that, while some people were generally supportive of naming weather events, others worried it could sensationalise them.
It probably won’t help much
We did not find enough evidence to support naming heatwaves in the UK.
Despite a large sample, we found only a very small effect on perceived risk and did not detect any greater intention to take safety precautions for a named heatwave. We also found that responses differed between England and Italy.
Heatwaves can cross national borders. The fact that there are national differences in how people respond to naming them could lead to unintended differences in how people interpret the risk in different places.
And unlike storms, which usually take place over a single day with a clearer start and end, heatwaves can last from days to weeks – it’s not always clear whether a prolonged hot spell is one heatwave or a series of them, which could lead to confusion if named.
Heatwaves are an opportunity to discuss the risks posed by climate change. But naming heatwaves risks coming across as sensationalist to some members of the public. This might have the opposite effect, and make people less likely to heed safety messaging about severe heat.
Source: The Conversation – in French – By Françoise Vasselin, Senior Lecturer in Economics, Université Paris-Est Créteil Val de Marne (UPEC)
USDT (Tether) and USDC (USD Coin) account for more than 90% of stablecoins. ddRender/Shutterstock
Stablecoins are digital assets designed to maintain a stable value, usually pegged to the US dollar. Their market capitalisation exceeded US$260 billion in June 2025, with more than 99% of them aiming to track the dollar. That stability rests on fiat currency reserves, cryptoassets or algorithmic mechanisms.
In June 2025, Circle, the issuer of the USDC stablecoin, filed an application to become a bank: the First National Digital Currency Bank. The idea? To do without traditional banking players.
Stablecoins play a central role in today's monetary transformations. Far from neutral, they raise issues of regulation, financial stability and sovereignty. Dominated by USDT and USDC, which together account for more than 90% of the sector, they are set to coexist with central bank digital currencies – such as the digital euro – and decentralised cryptoassets – such as bitcoin.
Understanding them means anticipating the evolution of the international monetary system.
Stablecoins are one of four categories of cryptoassets, the others being cryptocurrencies such as bitcoin, non-monetary tokens such as the Basic Attention Token (BAT) on Ethereum, and community tokens such as $Trump. Cryptoassets can be distinguished using two criteria: how they are issued and their main expected use.
So where do stablecoins fit?
Native and non-native
Most cryptoassets rely on distributed ledgers – ledgers simultaneously recorded and synchronised across a network of computers – most often blockchains, designed to record data in a reliable, tamper-proof way.
They can be issued in two ways.
Native cryptoassets (or cryptocurrencies), whether payment or platform coins such as bitcoin or ether, are issued automatically by the software of their own blockchain, according to the rules written into its protocol. They are therefore associated with no identifiable issuer.
Non-native cryptoassets, by contrast, are created after a blockchain's launch, through programs called smart contracts. Once deployed, these autonomous programs run automatically and cannot be modified. They are installed on an existing blockchain – such as Ethereum or Solana – which then becomes their host blockchain.
An identifiable issuer
Among non-native cryptoassets, a distinction is drawn between non-monetary tokens – utility tokens such as the Basic Attention Token (BAT) on Ethereum, or community tokens such as $Trump (Trump Coin) – and stablecoins, such as USDT.
The issuer of a non-native cryptoasset is generally identifiable, since some entity designs the smart contract and handles the token's creation and management. For example, the USDC stablecoin is issued by Circle, a company specialising in digital financial services.
On the Ethereum blockchain, USDC takes the form of an ERC-20 token generated via a smart contract. The token is designed to mirror the value of the US dollar on a 1:1 basis. When a user sends dollars to Circle, the company mints exactly the same quantity of USDC and credits the user's Ethereum address. The dollars received are held in reserve, which guarantees the value of the tokens issued. Circle remains responsible for both issuing and destroying USDC, so that each token in circulation corresponds to a real dollar held in reserve.
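The mint-and-burn bookkeeping just described reduces to a single invariant: tokens in circulation equal dollars in reserve. A minimal sketch (class and method names are illustrative, not Circle's actual systems):

```python
# Minimal sketch of centralised 1:1 issuance: the issuer keeps the
# invariant "tokens in circulation == dollars in reserve".
# Names are invented for illustration.

class CentralisedStablecoin:
    def __init__(self):
        self.reserve_usd = 0.0   # dollars held by the issuer
        self.balances = {}       # address -> token balance

    def mint(self, address, usd_amount):
        """User wires dollars in; issuer credits the same amount of tokens."""
        self.reserve_usd += usd_amount
        self.balances[address] = self.balances.get(address, 0.0) + usd_amount

    def redeem(self, address, token_amount):
        """User returns tokens; issuer burns them and pays out dollars 1:1."""
        if self.balances.get(address, 0.0) < token_amount:
            raise ValueError("insufficient token balance")
        self.balances[address] -= token_amount
        self.reserve_usd -= token_amount
        return token_amount  # dollars paid back to the user

    @property
    def fully_backed(self):
        return abs(self.reserve_usd - sum(self.balances.values())) < 1e-9

coin = CentralisedStablecoin()
coin.mint("0xAlice", 1000)
coin.redeem("0xAlice", 400)
print(coin.reserve_usd, coin.fully_backed)  # 600.0 True
```

Everything in the later “Tether affair” section hinges on whether `reserve_usd` really exists, is liquid, and is reported honestly – the invariant is only as good as the issuer's books.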
Bust of Satoshi Nakamoto, the pseudonymous creator of bitcoin, in Budapest. Shutterstock
In the vision of Satoshi Nakamoto, bitcoin's creator, cryptoassets serve as a means of payment, since they are designed to enable peer-to-peer transactions. Bitcoin fully embodies this function. Stablecoins fulfil the same role, with one particularity: their value remains stable relative to a reference currency, such as the dollar.
For his part, Vitalik Buterin, co-founder of Ethereum, considers the main role of cryptoassets to be providing access to the services and features offered by the blockchain infrastructure – whether developing decentralised applications on it, interacting with protocols without intermediaries, or recording digital property rights such as NFTs. The idea: let users take advantage of this infrastructure without going through central authorities.
Stablecoins are therefore non-native cryptoassets whose main use is as a means of payment.
A dollar-backed cryptoasset
Faced with the volatility of traditional cryptoassets, three entrepreneurs – Brock Pierce, Craig Sellars and Reeve Collins – launched the RealCoin project in 2014, the first dollar-backed stablecoin. A few months later the project was renamed Tether (USDT), to better reflect its peg to the US currency. Their aim was to create a stable digital asset designed to facilitate trading while limiting price fluctuations.
The stability of stablecoins generally rests on users' ability to exchange them at any time for dollars at a fixed 1:1 rate – in other words, one stablecoin token is worth one dollar. This parity is made possible by reserves that are supposed to guarantee the value of every token in circulation. Those reserves may take the form of dollars held in bank accounts or, in other cases, of cryptoassets.
Price of Tether USDt (USDT) from its creation to today. Google Finance, FAL
This issuance and redemption mechanism automatically corrects price gaps on what is known as the secondary market – the platforms where users trade stablecoins directly with one another, as opposed to the primary market, where only issuers create or destroy tokens.
For example, if the price of a stablecoin rises above one dollar, users have an incentive to sell it for real currency, which increases the token's supply and pushes its price down. Conversely, if it falls below one dollar, they buy it back, which increases demand and pushes the price back up. This arbitrage helps maintain the peg.
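That arbitrage loop can be caricatured in a few lines. The linear price-impact model below is a made-up simplification for illustration, not how any real market behaves:

```python
# Toy illustration of peg-restoring arbitrage: whenever the secondary-market
# price strays from $1, arbitrageurs trade against the deviation (minting or
# redeeming at par on the primary market), which removes part of the gap.
# The fixed "impact" factor is an invented simplification.

def arbitrage_step(price, impact=0.5):
    deviation = price - 1.0
    # Selling above the peg / buying below it shrinks the deviation.
    return price - impact * deviation

price = 1.04  # token trading above the peg
for _ in range(6):
    price = arbitrage_step(price)
print(round(price, 4))  # converges towards 1.0
```

The loop only works while redemption at par is credible; the algorithmic-stablecoin collapse described below is what happens when that assumption fails.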
Three types of stablecoin
Strictly speaking – excluding commodity-backed cryptoassets such as Tether Gold or PAX Gold, which simply track the price of a real asset like gold – three types of stablecoin can be distinguished:
Centralised stablecoins: backed by currency reserves, often dollars, they are issued by a central entity such as Tether Limited, Circle or SG-Forge, a subsidiary of Société Générale, which in April 2023 launched a euro-backed stablecoin (EURCV) aimed at institutional uses on the Ethereum blockchain.
Decentralised stablecoins, such as DAI, whose value is kept as close as possible to the dollar. They are backed by cryptoassets deposited in smart contracts on the blockchain. These positions are generally over-collateralised: the value of the locked cryptoassets exceeds that of the tokens issued, so as to absorb losses in the event of high volatility or default.
Algorithmic stablecoins, such as TerraUSD (UST), rely on a two-token mechanism: UST, meant to stay stable at one dollar, and LUNA, used to absorb variations in supply. Unlike stablecoins backed by real reserves, UST was backed by no tangible collateral. Its stability rested on programmed incentives, notably the ability to exchange 1 TerraUSD (UST) for one dollar's worth of LUNA. But in 2022 a loss of confidence triggered massive UST sales, forcing the protocol to issue large quantities of LUNA, which sent its value tumbling. The system seized up, both tokens collapsed, and the fragility of the model was exposed.
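For the decentralised, DAI-style model in the list above, the over-collateralisation rule can be sketched as follows. The 150% minimum ratio is a commonly cited figure used here purely for illustration, and the function names are invented:

```python
# Hedged sketch of DAI-style over-collateralisation: a vault may only mint
# stablecoins while its locked collateral stays above a minimum ratio.
# The 1.5 (150%) figure and all names are illustrative assumptions.

MIN_COLLATERAL_RATIO = 1.5

def max_mintable(collateral_value_usd, already_minted):
    """Tokens still mintable without breaching the minimum ratio."""
    ceiling = collateral_value_usd / MIN_COLLATERAL_RATIO
    return max(0.0, ceiling - already_minted)

def is_liquidatable(collateral_value_usd, minted):
    """If the collateral's market value falls so the ratio dips below the
    minimum, the position can be liquidated to protect the peg."""
    return minted > 0 and collateral_value_usd / minted < MIN_COLLATERAL_RATIO

# $300 of locked cryptoassets allows at most 200 stablecoins...
print(max_mintable(300, 0))       # 200.0
# ...and if the collateral drops to $250, the ratio (1.25) triggers liquidation.
print(is_liquidatable(250, 200))  # True
```

The surplus collateral is exactly the buffer the text describes: it is what absorbs a price crash before the tokens themselves become under-backed.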
The Tether affair
In practice, only centralised stablecoins enjoy a stabilisation mechanism based on direct arbitrage. This setup lets users buy or sell the tokens to profit from gaps between their market price and their theoretical value of one dollar, with the guarantee that they can exchange them at any time for dollars from the issuer at a fixed 1:1 rate. The mechanism depends on liquid, transparent and well-managed reserves that allow the issuer to honour this redemption right.
Two affairs have exposed the limits of this model when transparency is lacking or the issuer retains the power to freeze funds. In 2021, the Tether affair ended with an $18.5 million fine for a lack of transparency about its reserves. In 2023, Circle froze $63 million of USDC following the hack of Multichain, a cross-blockchain transfer service from which $125 million was siphoned off.
These events highlight the diverging approaches of the two current leaders of the stablecoin market. Circle (USDC) has from the outset favoured transparency and highly liquid assets such as US Treasury bills. Tether (USDT), long criticised for its opacity and risky reserves, has recently strengthened the quality and transparency of its assets.
Concrete uses around the world
Stablecoins are already used in very varied contexts, often in response to local economic constraints. In 2022, the United Nations High Commissioner for Refugees (UNHCR) launched an aid programme for Ukrainian refugees using USDC on the Stellar blockchain. Facing inflation above 100%, many Argentine citizens turn to USDT as a store of value and means of transaction.
In Nigeria, their use grew with the devaluation of the naira and banking restrictions. In the Philippines, stablecoins cut the cost of remittances from abroad. Stablecoins also play a growing role in digital environments. On some platforms they are used to pay salaries, as with Bitwage, or for online bookings. In Singapore, stablecoin payment volumes already exceed a billion dollars.
In all the cases mentioned, the stablecoins used are backed by the US dollar, which could reinforce the dollar's role worldwide.
Exclusion from the European Union?
Since 2024, the European MiCA regulation has required stablecoin issuers to obtain authorisation and to meet transparency requirements on their reserves. The Euro CoinVertible (EURCV) stablecoin, launched by Société Générale-Forge and issued on the public Ethereum and Solana blockchains, meets these new requirements. In the United States, a comparable bill, the Clarity for Payment Stablecoins Act, is still under discussion.
It is in this context that Tether (USDT), the main stablecoin in circulation, finds itself potentially excluded from the regulated platforms operating legally in the European Union. Tether is issued outside Europe by an entity that has not sought authorisation under MiCA. Since July 2024, unauthorised stablecoins may no longer be offered to the public or admitted to trading on those platforms. USDT therefore risks being gradually delisted from several platforms such as Binance, Kraken, Bitstamp and OKX EU.
Market fragmentation
This exclusion could lead to a geographic fragmentation of the stablecoin market and a competitive advantage for MiCA-compliant tokens such as EURCV. The situation illustrates the limits of regulatory action in a decentralised environment: while MiCA strengthens legal certainty on regulated platforms, it cannot prevent the cross-border circulation of USDT, which will remain accessible through unsupervised channels.
It is also in this context that Société Générale-Forge is preparing to launch, in summer 2025, a new dollar-backed stablecoin, the USD CoinVertible (USDCV). Issued on Ethereum and Solana, its reserve assets will be held with the Bank of New York Mellon Corporation. The launch could capture part of the European institutional market left vacant by USDT's likely withdrawal, offering a MiCA-compliant alternative denominated in US dollars. Other similar initiatives are beginning to emerge.
Instruments of monetary policy
Beyond the legal aspects, stablecoins have become instruments of international monetary policy. In March 2025, US Treasury Secretary Scott Bessent declared that the government would use stablecoins to maintain the dollar's dominance as the world's reserve currency. He stressed that these assets make the dollar easier to access and support demand for US public debt.
Faced with this situation, the European Union is seeking to speed up the rollout of a digital euro – a form of central bank money issued by the ECB and accessible to the general public, which would run on a private blockchain. Pierre Gramegna, managing director of the European Stability Mechanism, recently warned of the risk of dependence on dollar-backed stablecoins. In his view, their mass adoption could weaken the financial stability of the eurozone.
Françoise Vasselin does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.