Is it illegal to make online videos of someone without their consent? The law on covert filming

Source: The Conversation – UK – By Subhajit Basu, Professor of Law and Technology, University of Leeds

Could those glasses be recording you? Lucky Business/Shutterstock

Imagine a stranger starts chatting with you on a train platform or in a shop. The exchange feels ordinary. Later, it appears online, edited as “dating advice” and framed to invite sexualised commentary. Your face, and an interaction you didn’t know was being recorded, are pushed into feeds where strangers can identify, contact and harass you.

This is a reality for many people, though the most shocking examples mainly affect women. A BBC investigation recently found that men based outside the UK have been profiting from covertly filming women on nights out in London and Manchester and posting the videos on social media.

In the UK, filming someone in public – even covertly – is not automatically unlawful. Sometimes, it is socially valuable (think of people recording violence or police misconduct).

But once a person is identifiable and the clip is uploaded for views or profit, it can become unlawful under data protection law and, in more intrusive cases, privacy or harassment law. The problem here is what the filming is for, how it is done and what the platforms do with it.

UK law is cautious about a general claim to “privacy in public”. There is a key distinction in case law between being seen in a public place and being recorded for redistribution.

Courts have accepted that privacy can apply even in public, depending on circumstances. In the case of Campbell v MGN (2004), the House of Lords ruled that the Daily Mirror had breached model Naomi Campbell’s privacy by publishing photos that, while taken in public, exposed her private medical information.

The rise of smartphones and now wearable cameras has made covert capture cheaper, more discreet and more accessible. With smart glasses, recording can look like eye contact.

Capture is frictionless: the file is ready to upload before the person filmed even knows it exists. And manufacturer safeguards such as recording lights are already reportedly being bypassed by users.

Once it’s been uploaded, modern social media platforms allow this content to become easily scalable, searchable and profitable.

Context is what shifts the stakes. Covert filming, an intrusive focus on the body and publication at scale can turn an everyday moment into exposure that invites harassment.

Privacy in public

Public life has always involved being seen. The harm is being made findable and targetable, at scale. This is why the most practical legal tool is data protection. Under the UK General Data Protection Regulation (GDPR), when people are identifiable in a video, recording and uploading it is considered processing of personal data.

The uploader and platform must therefore comply with GDPR rules, which in this case would usually mean not posting identifiable footage of a stranger in the first place, or removing the details that identify them, and taking the clip down quickly if the person objects.

UK GDPR does not apply to purely personal or household activity, with no professional or commercial connection. This is a narrow exemption – “pickup artist” channels and monetised social media posts are unlikely to fall within it.

Harassment law may apply where the filming and posting is followed by repeated contact, threats or encouraging others to target the person filmed, which causes them alarm or distress.

Lagging enforcement

Harm spreads faster than the law can respond. A clip can be uploaded, shared and monetised within seconds. Enforcement of privacy and data protection law is split between the Information Commissioner’s Office, Ofcom, police and courts.

Victims are left to rely on platform reporting tools, and duplicates often continue to spread even after posts are taken down. Arguably, prevention would be more effective than after-the-fact removal.

The temptation is to call for a new offence of “filming in public”. In my view, this risks being either too broad (chilling legitimate recording) or too narrow (missing the combination of factors – covert filming, identifiability, platform amplification and monetisation – that makes this a problem).

A better approach would be twofold. First, treating wearable recording devices as higher-risk consumer tech, and requiring safeguards that work in practice. For example: conspicuous, genuinely tamper-resistant recording indicators; privacy-by-default settings; and audit logs so misuse is traceable. The law could build in clear public-interest exemptions (journalism, documenting wrongdoing) so rules do not become a backdoor ban on recording.

There are precedents for regulating consumer tech in this way. For example, the UK has strict security requirements for connectable devices like smart TVs to prevent cyberattacks.

View through augmented reality smart glasses
Wearable cameras and AI-enabled tech are making covert filming easier than ever.
Kaspars Grinvalds/Shutterstock

Second, platforms need a clear requirement to reduce the harm caused by covert filming. In practice, that means spotting and obscuring identifiers such as phone numbers and workplace details, warning users when a stranger is identifiable, fast-tracking complaints from the person filmed, blocking re-uploads, and removing monetisation from this content.

The Online Safety Act provides a framework for addressing this problem, but it is not a neat checklist for prevention. Where it clearly applies is when the content itself, or the response it triggers, amounts to illegal harassment or stalking. Those are priority offences in the act, so platforms are expected to assess and mitigate those risks.

The awkward truth is that some covert, degrading clips may be harmful without being obviously illegal at the point of upload, until threats, doxxing or stalking follow.

Privacy in public will not be protected by slogans or a tiny recording light. It will be protected when existing legal principles are applied robustly. And when enforcement is designed for the speed, incentives and business models that shape what people see and share online.

The Conversation

Subhajit Basu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Is it illegal to make online videos of someone without their consent? The law on covert filming – https://theconversation.com/is-it-illegal-to-make-online-videos-of-someone-without-their-consent-the-law-on-covert-filming-274885

A brief history of table tennis in film – from Forrest Gump to Marty Supreme

Source: The Conversation – UK – By Jeff Scheible, Senior Lecturer in Film Studies, King’s College London

Table tennis and film have a surprisingly entangled history. Both depended on the invention of celluloid – which not only became the substrate of film, but is also used to make ping pong balls.

Following a brief ping pong craze in 1902, the game largely disappeared and was widely assumed to have been a passing fad. More than 20 years later, however, the British socialite, communist spy and filmmaker Ivor Montagu went to great lengths to establish the game as a sport – a story I explore in my current book project on ping pong and the moving image.

He founded the International Table Tennis Federation (ITTF) and codified the rules of the game in both a book and a corresponding short film, Table Tennis Today (1929).

Montagu presided over the ITTF for several decades. In 1925, the same year he founded the ITTF, Montagu also co-founded the London Film Society. The society helped introduce western audiences to experimental and art films that are now considered classics.

The game of table tennis has subsequently appeared at a number of moments when filmmakers and artists were experimenting with new technologies. An early example appears in one of the first works of “visual music”: Rhythm in Light (1934) by Mary Ellen Bute.

Table Tennis Today (Ivor Montagu, 1929)

Meanwhile, an early work of expanded cinema, Ping Pong (1968) by the artist Valie Export, invited audiences to pick up a paddle and ball and attempt to strike a physical ball against the representation of one moving on the cinema screen. Atari’s adaptation of the game into the interactive Pong (1972) is often considered the first video game.

Perhaps the most familiar cinematic example of all, however, is the digital simulation of a photorealistic ping pong ball – made possible by a then-new regime of computer-generated imagery. It helped Tom Hanks appear to be a ping pong whiz in the Academy-Award-winning Forrest Gump (1994).

The ping pong scene in Forrest Gump.

There are a number of other fascinating moments in which the game surfaces meaningfully: in Powell and Pressburger’s A Matter of Life and Death (1946), Jacques Tati’s M Hulot’s Holiday (1953), Michael Haneke’s 71 Fragments of a Chronology of Chance (1994), and Agnes Varda and JR’s Faces Places (2017).

And every day for more than two years, from 2020 to 2022, one of the world’s most beloved filmmakers, David Lynch, uploaded YouTube videos in which he pulled a numbered ping pong ball from a jar and declared it “today’s number”. It was a fittingly Dada-esque gesture that stands among the last mysterious works he shared with the world.

Enter Josh Safdie’s Marty Supreme. The title sequence alone finds a new way of visualising the game’s iconography, as we see a sperm fertilise an egg, which then transforms into a ping pong ball (the digital effects first witnessed in Gump are now fully integrated into popular cinema).

Why Marty Supreme is different

Marty Supreme is very loosely based on the real-life player Marty Reisman (here Marty Mauser, played by Timothée Chalamet). What sets it apart from earlier cinematic appearances of table tennis is that it centres the game as a sport.

When table tennis has previously appeared in film, it is usually to help show off new special effects or as a brief plot device. Or it frequently appears in the background, helping to furnish the mise-en-scène of an office, basement, or bar. In these instances, we might not notice the game or its materials at all. When it does have a narrative function, it usually occupies a single scene, frequently serving to stage or resolve fraught interpersonal relations between the characters who are playing.

In Marty Supreme, however, table tennis seems neither tethered to special effects nor, certainly, to the game’s “background” status. Chalamet trained extensively over the seven years he spent preparing for the role, even taking his own table to the desert while filming Dune (2021). And despite the film’s sometimes compelling eccentricities, Marty Supreme in many senses follows the generic blueprint of a sports film.

The trailer for Marty Supreme.

Safdie has made a sports film, coincidentally or not, like his frequent collaborator and brother Benny Safdie, whose wrestling film The Smashing Machine was also released this past year. Marty Supreme, though, revolves around an athlete who plays a game that generally has been assumed to not have enough gravitas to command a place in the genre or to hold an audience’s interest.

The absence of sports films about ping pong speaks to its image as something not worth taking too seriously, for reasons surely linked, at least in part, to the very qualities for which the game is often celebrated. It is perceived to be what I refer to as an “equalising” sport, open to people and bodies of all backgrounds and types.

As actor Susan Sarandon, who founded her own chain of ping pong bars, puts it: “Ping pong cuts across all body types and gender – everything, really – because little girls can beat big muscley guys. You don’t get hurt; it is not expensive; it is really good for your mind. It is one of the few sports that you can play until you die.”

This perception of the game has perhaps also led it to appear in more comedic contexts, with athletes embodied by actors we might more readily laugh at, as source material for visual and sonic gags, from a slapstick scene in You Can’t Cheat an Honest Man (1939) to the widely panned Balls of Fury (2007).

The tension between the game’s perceived triviality and Mauser’s extreme dedication lends Marty Supreme a vast blank canvas – or ping pong table – onto which its oscillations can be painted, or played… and in turn felt by the audience, with its high highs and low lows.

While it’s great that a talented director has poured his heart into a cinematic treatment of Reisman for the screen, I’m holding out hope for an Ivor Montagu film, which could be even more beholden to its real-life character – and even more wild.




The Conversation

Jeff Scheible does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A brief history of table tennis in film – from Forrest Gump to Marty Supreme – https://theconversation.com/a-brief-history-of-table-tennis-in-film-from-forrest-gump-to-marty-supreme-274445

Why the idea of an ‘ideal worker’ can be so harmful for people with mental health conditions

Source: The Conversation – UK – By Hadar Elraz, Senior Lecturer in Human Resource Management and Organisational Behaviour, Swansea University

PeopleImages/Shutterstock

In the modern world of work, the “ideal worker” is a dominant yet dangerous concept that can dictate workplace norms and expectations. This archetype describes an employee who is boundlessly productive, constantly available and emotionally stable at all times.

What makes this trope so flawed is that it assumes workers have no caring responsibilities outside work, or have unrealistic physical and psychological capabilities. It’s intended to drive efficiency, but in fact it is a standard that very few people can reach. It marginalises people who deviate from these rigid standards, including workers managing mental health conditions.

We are researchers in management and health, and our recent paper found that this “ideal worker” is a means of creating stigma. This stigma is embedded in processes and policies, creating a yardstick against which all employees are measured.

The study is based on in-depth interviews with a diverse group of employees with mental health conditions (including depression, bipolar disorder, anxiety and OCD). They worked across the private, public and third sectors in various jobs, including accounting, engineering, teaching and senior management.

For workers with mental health conditions, the expectation of emotional steadiness creates a conflict with the often fluctuating nature of their conditions.

When organisations are seen to value the ideal worker archetype, they can end up creating barriers to meaningful inclusion. In our paper we understand these as both “barriers to doing” and “barriers to being”.

What this means is that workplaces end up with rigid workloads and inflexible expectations (“barriers to doing”). As such, they fail to accommodate people with invisible or fluctuating symptoms. They can also undermine a worker’s identity and self-worth (“barriers to being”), framing them as unreliable or incompetent simply because they do not meet the standards of the ideal worker.

Because employees with mental health conditions often fear being perceived as weak, a burden or fragile, they frequently work excessively hard to prove their value. This means that these employees might compromise their resting and unwinding time in order to live up to workplace expectations.

But of course, these efforts create strain at the personal level. These workers can end up putting themselves at greater risk of relapse or ill health. Our research found that overworking to mask mental health symptoms (working unpaid hours to make up for times when they are unwell, for example) can suggest an organisational culture that may not be inclusive enough.

What’s really happening

HR practices may assume that mental health conditions should be managed by employees alone, rather than with support from the organisation. At the same time, this constant pressure to over-perform can exacerbate mental health conditions, leading to a vicious cycle of stress, exhaustion and even more stigma.

The ideal worker norm forces many employees into keeping their mental health conditions to themselves. They may see hiding their struggles as a tactical way of protecting their professional identity.

In an environment that rewards constant productivity, disclosing a condition that might require reasonable adjustments could be seen as a professional risk. In other words, stigma may compromise career chances.

Participants in our research reported lying on health questionnaires or hiding symptoms because the climate in their workplace signalled that mental health conditions were poorly understood. But this secrecy creates a massive emotional burden, as workers felt pressure to constantly monitor their health, mask their condition and schedule medical appointments in secret.

Paradoxically, while this approach allows people to remain employed, it reinforces the structures that demand their silence. And it ensures that workplace support remains invisible or inaccessible.

A lone woman working at a desk in an office at night.
The research found that some workers put in extra unpaid hours to try to achieve ‘ideal’ levels of productivity.
Gorodenkoff/Shutterstock

Our analysis showed a stark contrast between perceptions of support for people with physical impairments and that for employees with mental health conditions. While physical aids like ramps are often visible and accepted, workers setting out their mental health needs frequently faced the risk of stigma, ignorance or disbelief.

By holding on to the ideal worker archetype, organisations are not only failing to fulfil their duty of care. They may also be undermining their own long-term sustainability if they lose skilled labour. Then there are the costs of constant recruitment and retraining.

Managing stigma is a workplace burden that can lead to burnout or divert energy away from a worker’s core tasks. We suggest a fundamental shift for employers: moving away from chasing the “ideal worker” towards creating “ideal workplaces” instead. This means challenging the assumption that productivity must be uninterrupted and that emotional stability is a prerequisite for professional value.

It also means focusing on the quality of an employee’s contribution rather than judging their constant availability or productivity. And it means designing work environments from the ground up to support diverse needs, so that mental health conditions are normalised. This would reduce the need for employees to keep conditions secret.

Ultimately, the problem with the ideal worker archetype is that it is a persistent myth that ignores the reality of human diversity. True equity requires organisations to stop trying to shape individuals to fit the mould and instead rethink work norms to support all employees so that everyone can play a part in enhancing the business.

The Conversation

Hadar Elraz disclosed that this study was supported by the UK Economic and Social Research Council. She has disclosed no relevant affiliations beyond her academic appointment.

Jen Remnant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the idea of an ‘ideal worker’ can be so harmful for people with mental health conditions – https://theconversation.com/why-the-idea-of-an-ideal-worker-can-be-so-harmful-for-people-with-mental-health-conditions-274350

The mental edge that separates elite athletes from the rest

Source: The Conversation – Canada – By Mallory Terry, Postdoctoral Fellow, Faculty of Science, McMaster University

Elite sport often looks like a test of speed, strength and technical skill. Yet some of the most decisive moments in high-level competition unfold too quickly to be explained by physical ability alone.

Consider Canadian hockey superstar Connor McDavid’s overtime goal at the 4 Nations Face-Off against the United States last February. The puck was on his stick for only a fraction of a second, the other team’s defenders were closing in and he still somehow found the one opening no one else saw.

As professional hockey players return to the ice at the Milan-Cortina Olympics, Canadians can expect more moments like this. Increasingly, research suggests these moments are better understood not as just physical feats, but also as cognitive ones.

A growing body of research suggests a group of abilities known as perceptual-cognitive skills are key differentiators. This is the mental capacity to turn a blur of sights, sounds and movements into split-second decisions.

These skills allow elite athletes to scan a chaotic scene, pick out the right cues and act before anyone else sees the opportunity. In short, they don’t just move faster, but they also see smarter.

Connor McDavid Wins 4 Nations Face-Off For Canada In Overtime (Sportsnet)

How athletes manage visual chaos

One way researchers study these abilities is through a task known as multiple-object tracking, which involves keeping tabs on a handful of moving dots on a screen while ignoring the rest. Multiple-object tracking is a core method I use in my own research on visual attention and visual-motor co-ordination.

Multiple-object tracking taxes attention, working memory and the ability to suppress distractions. These are the same cognitive processes athletes rely on to read plays and anticipate movement in real time.

Unsurprisingly, elite athletes reliably outperform non-athletes on this task. After all, reading plays, tracking players and anticipating movement all depend on managing visual chaos.

There is, however, an important caveat. Excelling at multiple-object tracking will not suddenly enable someone to anticipate a play like McDavid or burst past a defender like Marie-Philip Poulin, captain of the Canadian women’s hockey team. Mastering one narrow skill doesn’t always transfer to real-world performance. Researchers often describe this limitation as the “curse of specificity.”

This limitation raises a deeper question about where athletes’ mental edge actually comes from. Are people with exceptional perceptual-cognitive abilities drawn to fast-paced sports, or do years of experience sharpen these abilities over time?

Evidence suggests the answer is likely both.

Born with it or trained over time?

Elite athletes, radar operators and even action video game players — all groups that routinely track dynamic, rapidly changing scenes — consistently outperform novices on perceptual-cognitive tasks.

At the same time, they also tend to learn these tasks faster, pointing to the potential role of experience in refining these abilities.

What seems to distinguish elite performers is not necessarily that they take in more information, but that they extract the most relevant information faster. This efficiency may ease their mental load, allowing them to make smarter, faster decisions under pressure.

My research at McMaster University seeks to solve this puzzle by understanding the perceptual-cognitive skills that are key differentiators in sport, and how to best enhance them.

This uncertainty around how to best improve perceptual-cognitive skills is also why we should be cautious about so-called “brain training” programs that promise to boost focus, awareness or reaction time.

The marketing is often compelling, but the evidence for broad, real-world benefits is far less clear. The value of perceptual-cognitive training hasn’t been disproven, but it hasn’t been tested rigorously enough in real athletic settings to provide compelling evidence. To date, though, tasks that include a perceptual element such as multiple-object tracking show the most promise.

Training perceptual-cognitive skills

Researchers and practitioners still lack clear answers about the best ways to train perceptual-cognitive skills, or how to ensure that gains in one context carry over to another. This doesn’t mean cognitive training is futile, but it does mean we need to be precise and evidence-driven about how we approach it.

Research does, however, point to several factors that increase the likelihood of real-world transfer.

Training is more effective when it combines high cognitive and motor demands, requiring rapid decisions under physical pressure, rather than isolated mental drills. Exposure to diverse stimuli matters as well, as it results in a brain that can adapt, not just repeat. Finally, training environments that closely resemble the game itself are more likely to produce skills that persist beyond the training session.

The challenge now is translating these insights from the laboratory into practical training environments. Before investing heavily in new perceptual-cognitive training tools, coaches and athletes need to understand what’s genuinely effective and what’s just a high-tech placebo.

For now, this means treating perceptual-cognitive training as a complement to sport-specific training, not as a substitute. Insights will also come from closer collaborations between researchers, athletes and coaches.

There is, however, support for incorporating perceptual-cognitive tasks as an assessment of “game sense” to inform scouting decisions.

The real secret to seeing the game differently, then, is not just bigger muscles or faster reflexes. It’s a sharper mind, and understanding how it works could change how we think about performance, both on and off the ice.

The Conversation

Mallory Terry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The mental edge that separates elite athletes from the rest – https://theconversation.com/the-mental-edge-that-separates-elite-athletes-from-the-rest-273758

Addicted to our screens: what if it all began with colour TV?

Source: The Conversation – in French – By Jean-Michel Bettembourg, Lecturer, Métiers du multimédia et de l’Internet, Université de Tours

Colour television transformed the media landscape and our relationship with screens. Pexels, Marshal Yung, CC BY

In October 1967, the first colour programme was broadcast on the second channel of the ORTF (today, France 2). Yet colour television took many years to become widespread, reaching virtually every household in France only at the end of the 1980s, and creating a form of seduction that perhaps continues today in our addiction to screens.


On October 1 1967, in a handful of French living rooms, the world changed with the advent of colour television: a media revolution, but also a political, social and cultural one.

That first broadcast took place on the second channel of the Office de radiodiffusion-télévision française (ORTF). But colour television, at first, was not within everyone’s reach. For the vast majority of French people, it remained an inaccessible dream. On one side was an immense national ambition to promote this showcase of modernity; on the other, a social reality that was still largely in black and white.

The decision, a deeply political one, came from the top. Colour was not just a matter of aesthetics, and the very choice of technology, the famous Sécam system, was an affair of state. More than a purely technical decision, the choice of a TV standard was in reality a matter of national sovereignty.

To understand what was at stake, you have to go back to the context of the time. In the Gaullist era, the watchword was national independence on every front: military, diplomatic, but also technological.

A highly political standard

For General de Gaulle and his government, imposing a French standard, Sécam, was a political act as forceful as developing civil nuclear power or, later, launching the Concorde programme. France faced two giants: on one side the American standard, NTSC, in existence since 1953; on the other PAL, developed by West Germany. Choosing Sécam meant refusing to fall into line, refusing to become dependent on a foreign technology.

The state invested. Millions of francs were poured into modernising infrastructure, training thousands of technicians and turning the second channel into a showcase of this French-style modernity. Sécam stands for “Séquentiel couleur à mémoire” (“sequential colour with memory”), a system based on a kind of mini buffer memory before its time, which guaranteed transmission over long distances and stable colours that did not bleed.

The gamble was a bold one: by opting for a more robust system, France also isolated itself from the rest of the west. The price of this cultural and technological exception was twofold. First, the cost to the consumer: Sécam circuits were much more complex, so a French TV set cost 20% to 30% more than a PAL set, not counting international incompatibility problems. Exporting French programmes was acrobatic: everything had to be transcoded, which was expensive and degraded quality. And that cost brings us directly to the social reality of that famous October 1 1967.

A social divide

On one side was this incredible national showcase; on the other, the reality of French living rooms. The key moment was the address by the information minister, Georges Gorse, who on October 1 1967 delivered a sober but historic “And here is colour…”. But this picture-postcard moment concealed a divide.

The price of the new generation of television sets was exorbitant. A colour set cost at least 5,000 francs (the equivalent of €7,544 today). For a sense of scale, the minimum wage at the time was around 400 francs a month. Those 5,000 francs therefore represented more than a year’s minimum wage: the price of a car!

Democratisation was extremely slow. In 1968, only around 2% of households were equipped, and it took until 1990 to reach around 90%. Colour television thus immediately became a social marker. A large part of the population could see this promise of modernity on screen but could not touch it. The first channel, TF1, remained in black and white until 1976.

This “dual broadcasting” accentuated the social divide, but at the same time it also allowed a smooth transition. Black and white did not disappear overnight. And once that first shock had been absorbed, the revolution got under way.

Colour exponentially amplified television’s role as the creator of a shared popular culture. Variety shows, serials, game shows and above all sport: everything became infinitely more spectacular. The visual appeal was multiplied. Slowly, television became the pivot structuring family evenings and synchronising the rhythms of daily life. And behind the scenes, a genuine industrial revolution was under way. Broadcasting professionalised at breakneck speed. Production budgets exploded.

This created enormous needs and gave rise to entirely new trades: colour graders, for example, the experts who adjust colorimetry in post-production. And of course, legions of technicians had to be trained in the maintenance of Sécam cameras and sets to meet demand. In this context, the ORTF and its industrial partners, such as Thomson, developed ambitious technician training programmes involving in-house schools and cross-placements.

The industrial success was resounding, and the economic impact matched the initial political investment. The state had bet on research and development. Thomson became a European leader in the production of colour cathode-ray tubes, generating tens of thousands of direct jobs in its factories. An entire industry was born, from component manufacturing to home repair and maintenance.

And then there was the system’s export success. Despite its incompatibility, Sécam was sold to some 20 countries, notably the USSR.

La place des écrans dans nos vies

Mais en filigrane, on sentait déjà poindre des questions qui nous sont très familières aujourd’hui, sur la place des écrans dans nos vies. C’est peut-être l’héritage le plus durable de la télévision en couleurs. La puissance de séduction des images colorées a immédiatement renforcé les craintes sur la passivité du spectateur. On commence à parler de « consommation d’images », d’une « perte de distance critique ».

Le débat sur le temps d’écran n’est pas né avec les smartphones, mais bien à ce moment-là, quand l’image est devenue si attractive, si immersive, qu’elle pouvait littéralement « capturer » le spectateur. La télé couleur, avec ses rendez-vous, installait une logique de captation du temps.

Et avec l’arrivée des premières mesures d’audience précises à la fin des années 1960, certains observateurs parlaient déjà d’une forme d’addiction aux écrans. La couleur a ancré durablement la télévision au cœur des industries culturelles, avec le meilleur des créations audacieuses et le moins bon, des divertissements certes très populaires mais de plus en plus standardisés.

L’arrivée de la couleur, c’était la promesse de voir enfin le monde « en vrai » depuis son salon. Cette quête de réalisme spectaculaire, de plus vrai que nature, ne s’est jamais démentie depuis, de la HD à la 4K, 6K et au-delà. On peut alors se demander si cette promesse initiale vendue sur un écran de luxe ne contenait pas déjà en germe notre rapport actuel à nos écrans personnels ultraportables, où la frontière entre le réel et sa représentation est devenue une question centrale.

The Conversation

Jean-Michel Bettembourg does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Addicted to our screens: what if it all began with colour TV? – https://theconversation.com/addicts-a-nos-ecrans-et-si-tout-avait-commence-avec-la-tele-en-couleurs-274170

Why Canada must step up to protect children in a period of global turmoil

Source: The Conversation – Canada – By Catherine Baillie Abidi, Associate Professor, Child & Youth Study, Mount Saint Vincent University

Over half a billion children are now living in conflict zones, according to a 2025 Save the Children report, and the world is turning its back on them.

At a time of unprecedented global insecurity, funding and resources to care for, protect and engage with children affected by armed violence continue to decline.

The Donald Trump administration’s recent announcement of unprecedented American cuts to funding for international organizations — including reductions to the United Nations Offices of the Special Representatives of the Secretary-General for Children in Armed Conflict and on Violence Against Children — further undermines an already fragile system.

Cuts like these can have a devastating effect on some of the world’s most vulnerable populations, undermining important work to identify and prevent violations against children, and to assist children in rebuilding their lives in the aftermath of violence. Canada cannot sit on the sidelines.

Preventing violence against children

Violence against children is a global crisis. Without a seismic shift in how states take action to prevent such violence, the costs will continue to impact people around the world.

As a global community, we have a collective responsibility to build communities where children are not only safe and thriving, but where their capacity and agency as future peace-builders, leaders and decision-makers in their families, schools and communities are recognized and nurtured, in wartime and post-conflict societies alike. These are core responsibilities at which the global community is failing miserably.

As many as 520 million war-affected children deserve better.

Canada has a long history of serving as a champion of children’s rights in armed conflict. Canadians have led global initiatives, including leading the first International Conference on War-affected Children, championing the Ottawa Treaty to ban landmines and developing the Vancouver Principles on Peacekeeping and Preventing the Recruitment and Use of Child Soldiers.

Canada is also the founder and chair of the Group of Friends of Children and Armed Conflict, an informal but vital UN network focused on child protection.

Now more than ever — amid American economic and political disengagement from core child protection priorities — there is both an opportunity and an imperative for Canada to demonstrate active leadership in the promotion of children’s rights and enhanced safety for children impacted by the devastation of armed conflict.

Complacency threatens to perpetuate generational impacts of violence.

Global leadership required

The Canadian government must once again stand up and provide global leadership on children and armed conflict by bolstering strategic alliances and funding efforts to protect and engage children impacted by armed conflict.

As a community of Canadian scholars dedicated to studying children, organized violence and armed conflict, we are deeply concerned about the growing vulnerability of children worldwide.

We see an opportunity for Canada to reclaim its role as a global leader in advancing and protecting children’s rights, especially in a time of political upheaval and heightened global insecurity. Canada can reassert itself and live up to its global reputation as a force for good in the world. It can stand on the global stage and draw attention to a crisis with generational impacts.

Children need protection from the effects of war, but they also need to be seen as active agents of peace who understand their needs and can help secure better futures.

Investments of attention and funding today can make significant differences in the emotional and social development of children who are navigating post-conflict life.




Read more:
The lasting scars of war: How conflict shapes children’s lives long after the fighting ends


Canada must take the lead

These investments are critical to the social structures of peaceful communities. Canada is well positioned to take on this role, not only because of the country’s history and reputation, but because Canadian scholars are at the forefront, are organized around this issue and can be leveraged for maximum impact.

Prime Minister Mark Carney’s recent, celebrated speech at the World Economic Forum’s annual meeting in Davos signalled a possible and important shift in alliances, priorities and global moral leadership for Canada.

Canadian foreign policy can build upon this. Making the vulnerability of children affected by armed conflict and the capacity of children to be agents of peace a key foreign policy issue would positively affect the lives of millions of children globally. It would also signal to the world that Canada is ready to take on the significant global human rights challenges it once did.


The following scholars, members of The Canadian Community of Practice on Children and Organized Violence & Armed Conflict, contributed to this article: Maham Afzaal, PhD Student, Queen’s University; Dr. Marshall Beier, McMaster University; Sophie Greco, PhD Candidate, Wilfrid Laurier University; Ethan Kelloway, Honours Student, Mount Saint Vincent University; Dr. Marion Laurence, Dalhousie University; Dr. Kate Swanson, Dalhousie University; Orinari Wokoma, MA student, Mount Saint Vincent University.

The Conversation

Catherine Baillie Abidi receives funding from the Social Sciences and Humanities Research Council of Canada.

Izabela Steflja receives funding from the Social Sciences and Humanities Research Council of Canada.

Kirsten J. Fisher receives funding from the Social Sciences and Humanities Research Council of Canada.

Myriam Denov receives funding from the Social Sciences and Humanities Research Council of Canada, and the Canada Research Chairs Program.

ref. Why Canada must step up to protect children in a period of global turmoil – https://theconversation.com/why-canada-must-step-up-to-protect-children-in-a-period-of-global-turmoil-274398

Lessons from the sea: Nature shows us how to get ‘forever chemicals’ out of batteries

Source: The Conversation – Canada – By Alicia M. Battaglia, Postdoctoral Researcher, Department of Mechanical & Industrial Engineering, University of Toronto

As the world races to electrify everything from cars to cities, the demand for high-performance, long-lasting batteries is soaring. But the uncomfortable truth is this: many of the batteries powering our “green” technologies aren’t as green as we might think.

Most commercial batteries rely on fluorinated polymer binders to hold them together, such as polyvinylidene fluoride. These materials perform well — they’re chemically stable, resistant to heat and very durable. But they come with a hidden environmental price.

Fluorinated polymers are derived from fluorine-containing chemicals that don’t easily degrade, releasing persistent pollutants called PFAS (per- and polyfluoroalkyl substances) during their production and disposal. Once they enter the environment, PFAS can remain in water, soil and even human tissue for hundreds of years, earning them the nickname “forever chemicals.”

We’ve justified their use because they increase the lifespan and performance of batteries. But if the clean energy transition relies on materials that pollute, degrade ecosystems and persist in the environment for years, is it really sustainable?

As a graduate student, I spent years thinking about how to make batteries cleaner — not just in how they operate, but in how they’re made. That search led me somewhere unexpected: the ocean.




Read more:
Living with PFAS ‘forever chemicals’ can be distressing. Not knowing if they’re making you sick is just the start


Why binders are important

an electric car plugged in to charge
Most commercial batteries rely on fluorinated polymer binders to hold them together. These materials perform well but come with an environmental cost.
(Unsplash/CHUTTERSNAP)

Every rechargeable battery has three essential components: two electrodes separated by a liquid electrolyte that allows charged atoms (ions) to flow between them. When you charge a battery, the ions move from one electrode to the other, storing energy.

When you use the battery, the charged atoms flow back to their original side, releasing that stored energy to power your phone, car or the grid.

Each electrode is a mixture of three parts: an active material that stores and releases energy, a conductive additive that helps electrons move and a binder that holds everything together.

The binder acts like glue, keeping particles in place and preventing them from dissolving during use. Without it, a battery would be unable to hold a charge after only a few uses.

Lessons from the sea

Many marine organisms have evolved in remarkable ways to attach themselves to wet, slippery surfaces. Mussels, barnacles, sandcastle worms and octopuses produce natural adhesives to stick to rocks, ship hulls and coral in turbulent water — conditions that would defeat most synthetic glues.

For mussels, the secret lies in molecules called catechols. These chemical groups sit on a unique amino acid in the mussels’ sticky proteins, helping the proteins form strong bonds with surfaces and harden almost instantly when exposed to oxygen. This chemistry has already inspired synthetic adhesives used to seal wounds, repair tendons and create coatings that stick to metal or glass underwater.

Building on this idea, I began exploring a related molecule called gallol. Like catechol in mussels, gallol is used by marine plants and algae to cling to wet surfaces. Its chemical structure is very similar to catechol, but it contains an extra functional group that makes it even more adhesive and versatile. It can form multiple types of strong, durable and reversible bonds — properties that make it an excellent battery binder.

a group of mussels stuck to a rock
Mussels use molecules called catechols to stick to surfaces.
(Unsplash/Manu Mateo)

A greener solution

Working with Prof. Dwight S. Seferos at the University of Toronto, we developed a polymer binder based on gallol chemistry and paired it with zinc, a safer and more abundant metal than lithium. Unlike lithium, zinc is non-flammable and easier to source sustainably, making it ideal for large-scale applications.

The results were remarkable. Our gallol-based zinc batteries maintained 52 per cent higher energy efficiency after 8,000 charge-discharge cycles compared to conventional batteries that use fluorinated binders. In practical terms, that means longer-lasting devices, fewer replacements and a smaller environmental footprint.

Our findings are proof that performance and sustainability can go hand-in-hand. Many in industry might still view “green” and “effective” as competing priorities, with sustainability an afterthought. That logic is backwards.

We can’t build a truly clean energy future using polluting materials. For too long, the battery industry has focused on performance at any cost, even if that cost includes toxic waste, hard-to-recycle materials and unsustainable and unethical mining practices. The next generation of technologies must be sustainable by design, built from sources that are renewable, biodegradable and circular.

Nature has been running efficient, self-renewing systems for billions of years. Mussels, shellfish and seaweeds build materials that are strong, flexible and biodegradable. No waste and no forever chemicals. It’s time we started paying attention.

The ocean holds more than beauty and biodiversity; it may also hold the blueprint for the future of energy storage. But realizing that future requires a cultural shift in science, one that rewards innovation that heals, not just innovation that performs.

We don’t need to sacrifice progress to protect the planet. We just need to design with the planet in mind.

The Conversation

This research was supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, and the Ontario Research Fund. Alicia M. Battaglia received funding from the Ontario Graduate Scholarship Program.

ref. Lessons from the sea: Nature shows us how to get ‘forever chemicals’ out of batteries – https://theconversation.com/lessons-from-the-sea-nature-shows-us-how-to-get-forever-chemicals-out-of-batteries-273098

The UFC: the well-oiled machinery of violence and its spectacularization

Source: The Conversation – in French – By Blaise Dore-Caillouette, PhD candidate in communication, Université de Montréal

The Ultimate Fighting Championship has turned a brutal combat sport into a US$10-billion empire. Its secret? Making violence the very heart of the show: financial bonuses for the most impressive knockouts, dramatized rivalries, celebration of offensive fighters. Here is how the UFC turned the spectacle of violence into a business model.

An MMA practitioner myself, I am currently pursuing doctoral research in communication at the Université de Montréal. My research focuses on media studies, cultural studies and political communication.

A global organization

Created in 1993, the UFC was originally designed to determine which martial arts style was the most effective in a fight bound by as few rules as possible. Mixed martial arts thus became a combat sport mixing strikes, throws and submission techniques, combining several disciplines such as boxing, judo, Brazilian jiu-jitsu and wrestling. Today the UFC is a global organization, valued at some ten billion dollars and gathering the best fighters from every discipline.

But what truly sets the UFC apart is not only the quality of its athletes: it is its ability to turn a technical, strategic and violent confrontation into a spectacle in which every move, every blow and every reaction is staged to maximize the emotional impact on the audience.

Violence in sport, and its packaging as spectacle, is of course nothing new. Whether in American football, hockey or even boxing, confrontations, rivalries and moments of tension and violence are mediatized and amplified to captivate the public. What is striking, however, is that unlike other sports, MMA is inherently more violent, and the UFC capitalizes on and amplifies that violence to turn it into spectacle.

MMA, an intrinsically more violent sport

Unlike other combat sports, the rules governing physical contact in MMA are far less restrictive, which makes fights more brutal and unpredictable. Other combat sports, by contrast, carry many restrictions; boxing, for instance, prohibits ground strikes, submissions and throws.




Read more:
Active Clubs: the body as a battlefield for the far right


Since the adoption of the Unified Rules of MMA (2009), a set of rules has banned certain techniques deemed too dangerous, such as headbutts or kicks to the face of a grounded opponent. Fighters nonetheless risk serious injury in every bout: MMA allows a range of techniques that can threaten fighters’ physical integrity.



Elbow and knee strikes, for example, are among those that inflict the most damage, frequently causing cuts, heavy bleeding and quick knockouts. Fights can also continue on the ground, which allows a fighter not only to strike a downed opponent but also to apply chokes or joint locks that can end a fight within seconds.




Amplifying the spectacle of violence

Violence is often turned into spectacle in sport because it captures the public’s attention and heightens the dramatic intensity of competition. In hockey, for example, fights have become moments spectators actively wait for, replayed in slow motion, in close-ups or in video montages. In boxing, spectacular knockouts are showcased at key moments.

The UFC, however, pushes this logic further by making violence itself the engine of the show. Several elements distinguish the spectacularization of violence in the UFC from other martial arts and sports.




Read more:
“Rage bait”: the word of the year highlights social media’s shift toward manufacturing anger


For example, the UFC directly rewards spectacular fights with US$50,000 “Performance of the Night” bonuses, which encourages fighters to adopt an offensive style and chase the most impressive knockouts or submissions. This policy not only valorizes the violence of the fights; it also creates a dynamic in which athletes know that a particularly violent bout can boost their fame and their income.

A particularly violent K.O., showing that athletes are encouraged to keep striking their opponent, even an unconscious one, until the referee stops the fight.

The UFC likewise systematically promotes fighters capable of delivering highlight moments, favouring profiles known for their violence.

At the same time, the UFC reinforces this culture by publicly celebrating the most violent fights and criticizing those judged too cautious or strategic, creating implicit pressure on fighters to deliver a violent show in every bout. Together, these mechanisms, not to mention the many others found in all contact sports (close-ups, slow-motion replays and video montages of violent moments, to name a few), make violence a central and carefully orchestrated element of the UFC spectacle.

Dramatizing rivalries

Personal rivalries between athletes also feed this dramatization of violence. They are routinely put front and centre at press conferences and throughout the promotion of a fight.

Fighters are encouraged to play up their disputes through provocative statements and heated verbal exchanges laced with insults. It is not unusual to see fighters trade punches in the middle of a press conference.

In this video, the two athletes ramp up the tension ahead of their fight with provocative statements.

This spectacularization of violence turns every bout into a dramatic event in which the emotional stakes go beyond mere athletic competition and help capture the public’s attention.

While the UFC amplifies violence in unprecedented ways, as we have seen, other sports, from boxing to American football, were already using similar devices. The trend shows how the spectacle of violence shapes modern sporting practice and the way it is consumed by the public.

The Conversation Canada

Blaise Dore-Caillouette does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The UFC: the well-oiled machinery of violence and its spectacularization – https://theconversation.com/lufc-la-mecanique-bien-huilee-de-la-violence-et-de-sa-spectacularisation-271153

AI is coming to Olympic judging: what makes it a game changer?

Source: The Conversation – France – By Willem Standaert, Associate Professor, Université de Liège

As the International Olympic Committee (IOC) embraces AI-assisted judging, this technology promises greater consistency and improved transparency. Yet research suggests that trust, legitimacy, and cultural values may matter just as much as technical accuracy.

The Olympic AI agenda

In 2024, the IOC unveiled its Olympic AI Agenda, positioning artificial intelligence as a central pillar of future Olympic Games. This vision was reinforced at the very first Olympic AI Forum, held in November 2025, where athletes, federations, technology partners, and policymakers discussed how AI could support judging, athlete preparation, and the fan experience.

At the 2026 Winter Olympics in Milano-Cortina, the IOC is considering using AI to support judging in figure skating (men’s and women’s singles and pairs), helping judges precisely identify the number of rotations completed during a jump. Its use will also extend to disciplines such as big air, halfpipe, and ski jumping (ski and snowboard events where athletes link jumps and aerial tricks), where automated systems could measure jump height and take-off angles. As these systems move from experimentation to operational use, it becomes essential to examine what could go right… or wrong.

Judged sports and human error

In Olympic sports such as gymnastics and figure skating, which rely on panels of human judges, AI is increasingly presented by international federations and sports governing bodies as a solution to problems of bias, inconsistency, and lack of transparency. Judging officials must assess complex movements performed in a fraction of a second, often from limited viewing angles, for several hours in a row. Post-competition reviews show that unintentional errors and discrepancies between judges are not exceptions.

This became tangible again in 2024, when a judging error involving US gymnast Jordan Chiles at the Paris Olympics sparked major controversy. In the floor final, Chiles initially received a score that placed her fourth. Her coach then filed an inquiry, arguing that a technical element had not been properly credited in the difficulty score. After review, her score was increased by 0.1 points, temporarily placing her in the bronze medal position. However, the Romanian delegation contested the decision, arguing that the US inquiry had been submitted too late – exceeding the one-minute window by four seconds. The episode highlighted the complexity of the rules, how difficult it can be for the public to follow the logic of judging decisions, and the fragility of trust in panels of human judges.

Moreover, fraud has also been observed: many still remember the figure skating judging scandal at the 2002 Salt Lake City Winter Olympics. After the pairs event, allegations emerged that a judge had favoured one duo in exchange for promised support in another competition – revealing vote-trading practices within the judging panel. It is precisely in response to such incidents that AI systems have been developed, notably by Fujitsu in collaboration with the International Gymnastics Federation.

What AI can (and cannot) fix in judging

Our research on AI-assisted judging in artistic gymnastics shows that the issue is not simply whether algorithms are more accurate than humans. Judging errors often stem from the limits of human perception, as well as the speed and complexity of elite performances – making AI appealing. However, our study involving judges, gymnasts, coaches, federations, technology providers, and fans highlights a series of tensions.

AI can be too exact, evaluating routines with a level of precision that exceeds what human bodies can realistically execute. For example, where a human judge visually assesses whether a position is properly held, an AI system can detect that a leg or arm angle deviates by just a few degrees from the ideal position, penalising an athlete for an imperfection invisible to the naked eye.
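The precision gap described above is easy to picture with a small sketch. The following is a minimal, hypothetical illustration, not any federation’s actual scoring system: it computes the angle at a joint from three tracked 2D keypoints and flags a deviation beyond a tolerance. The 180° ideal and the 2° tolerance are assumptions chosen for the example.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c (e.g. hip-knee-ankle)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def deviates(angle, ideal=180.0, tolerance=2.0):
    """Flag the position when the measured angle strays beyond the tolerance."""
    return abs(angle - ideal) > tolerance

# A leg bent by roughly 3.4 degrees, invisible to the naked eye,
# is still flagged by the system.
hip, knee, ankle = (0.0, 0.0), (0.0, 1.0), (0.06, 2.0)
print(deviates(joint_angle(hip, knee, ankle)))  # prints True
```

A human judge effectively applies a much coarser tolerance; tightening that threshold to a few degrees is exactly where the “too exact” penalties come from.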

While AI is often presented as objective, new biases can emerge through the design and implementation of these systems. For instance, an algorithm trained mainly on male performances or dominant styles may unintentionally penalise certain body types.

In addition, AI struggles to account for artistic expression and emotions – elements considered central in sports such as gymnastics and figure skating. Finally, while AI promises greater consistency, maintaining it requires ongoing human oversight to adapt rules and systems as disciplines evolve.

Action sports follow a different logic

Our research shows that these concerns are even more pronounced in action sports such as snowboarding and freestyle skiing. Many of these disciplines were added to the Olympic programme to modernise the Games and attract a younger audience. Yet researchers warn that Olympic inclusion can accelerate commercialisation and standardisation, at the expense of creativity and the identity of these sports.

A defining moment dates back to 2006, when US snowboarder Lindsey Jacobellis lost Olympic gold after performing an acrobatic move – grabbing her board mid-air during a jump – while leading the snowboard cross final. The gesture, celebrated within her sport’s culture, eventually cost her the gold medal at the Olympics. The episode illustrates the tension between the expressive ethos of action sports and institutionalised evaluation.

AI judging trials at the X Games

AI-assisted judging adds new layers to this tension. Earlier research on halfpipe snowboarding had already shown how judging criteria can subtly reshape performance styles over time. Unlike other judged sports, action sports place particular value on style, flow, and risk-taking – elements that are especially difficult to formalise algorithmically.

Yet AI was already tested at the 2025 X Games, notably during the snowboard SuperPipe competitions – a larger version of the halfpipe, with higher walls that enable bigger and more technical jumps. Video cameras tracked each athlete’s movements, while AI analysed the footage to generate an independent performance score. This system was tested alongside human judging, with judges continuing to award official results and medals. However, the trial did not affect official outcomes, and no public comparison has been released regarding how closely AI scores aligned with those of human judges.

Nonetheless, reactions were sharply divided: some welcomed greater consistency and transparency, while others warned that AI systems would not know what to do when an athlete introduces a new trick – something often highly valued by human judges and the crowd.

Beyond judging: training, performance and the fan experience

The influence of AI extends far beyond judging itself. In training, motion tracking and performance analytics increasingly shape technique development and injury prevention, influencing how athletes prepare for competition. At the same time, AI is transforming the fan experience through enhanced replays, biomechanical overlays, and real-time explanations of performances. These tools promise greater transparency, but they also frame how performances are understood – adding more “storytelling” around what can be measured, visualised, and compared.

At what cost?

The Olympic AI Agenda’s ambition is to make sport fairer, more transparent, and more engaging. Yet as AI becomes integrated into judging, training, and the fan experience, it also plays a quiet but powerful role in defining what counts as excellence. If elite judges are gradually replaced or sidelined, the effects could cascade downward – reshaping how lower-tier judges are trained, how athletes develop, and how sports evolve over time. The challenge facing Olympic sports is therefore not only technological; it is institutional and cultural: how can we prevent AI from hollowing out the values that give each sport its meaning?




The Conversation

Willem Standaert does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI is coming to Olympic judging: what makes it a game changer? – https://theconversation.com/ai-is-coming-to-olympic-judging-what-makes-it-a-game-changer-274313

LSD, champignons hallucinogènes, ayahuasca… contre la tentation d’un « exceptionnalisme psychédélique »

Source: The Conversation – in French – By Zoë Dubus, Postdoctoral Researcher in the History of Medicine, Sciences Po

LSD, psilocybin extracted from hallucinogenic mushrooms, mescaline derived from cacti, ayahuasca… psychedelics are enjoying renewed interest in medical research, as well as for recreational, experiential or even performance-enhancing uses, and they benefit from more positive representations than other psychoactive drugs. That does not mean they should be regarded as a category "superior" to other psychoactive substances.


The "psychedelic renaissance", the renewal of scientific interest in these psychotropic substances, has raised considerable hopes. Could these substances offer miracle treatments for depression, post-traumatic stress disorder or addiction? Would they make their users more empathetic, more environmentally minded, even morally better? Are they, in the end, "superior" to other psychoactive drugs?

As work in the social sciences and psychology suggests, these expectations stem from an imaginary that overestimates the intrinsic properties of the substances and underestimates the force of the contexts of use and of pre-existing beliefs, while deeply depoliticising the way they are approached.

These representations, shared by some psychedelic users and even by some therapists, are in fact misleading: they rest on hasty generalisations, inflate outsized expectations, and stand up neither to the study of the diversity of uses nor to the documented risks.

Does the singularity of classic psychedelics justify exceptional rules?

None of these propositions, moreover, justifies a "psychedelic exceptionalism": the idea that the classic psychedelic substances (psilocybin, the active compound in hallucinogenic or "magic" mushrooms; LSD; DMT/ayahuasca; and mescaline) are so singular that they should benefit from exceptional rules compared with other psychoactive drugs, by virtue of a supposed "superiority".




Read more:
Alcohol addiction: could a psychedelic derived from hallucinogenic mushrooms or LSD one day be used as a treatment?


On the contrary: scientific consistency, equity and harm reduction demand that we break with narratives that morally rank substances (and, by extension, their users), recognise the risks associated with taking these products (including in therapy), place harm reduction at the heart of practice, and establish solid ethical safeguards.

Therapeutic promises, risks and ethical issues

Over the past twenty years, the evaluation of the therapeutic properties of psychedelics has shown marked promise in the treatment of several conditions (for example, a rapid improvement of symptoms, or even complete remission, in certain depressive or anxiety presentations). But these data face specific methodological challenges: the difficulty of running double-blind, placebo-controlled trials (participants guess which substance they have been given), measuring expectancy bias (the hope of patients and of the care team that the treatment will work), the crucial role of psychotherapy, and questions about durability and safety.




Read more:
Hallucinogens authorised to treat psychiatric disorders: what does the science say?


Moreover, even in controlled settings, these substances are not without risks: episodes of severe anxiety, the resurfacing of poorly managed traumatic memories, and so on. Some therapists have even been accused of sexual assault, for example in a Canadian clinical trial in 2015. Ethically, the heightened suggestibility observed under psychedelics therefore demands reinforced protections: truly informed consent that spells out the expected effects and their limits, prevention of abuse (particularly sexual abuse) and of cult-like drift, and unified training for practitioners.

The long process of rehabilitating the therapeutic use of these substances rests on rigorous trials. In 2024, the European Medicines Agency held a multi-stakeholder workshop to consider a European Union regulatory framework for these treatments, and in 2025 it incorporated specific requirements for psychedelic research on the treatment of depression, encouraging greater statistical rigour and better evaluation of bias.

In the United States, the Food and Drug Administration (FDA) published methodological guidance in 2023, stressing the need for more trials and for a precise evaluation of psychotherapy's contribution.

Beyond these technical aspects, it is crucial to examine the political dimensions of the conditions concerned. Psychedelics are being studied as treatments for depression and addiction, but these disorders are deeply tied to social conditions. No substance can, on its own, resolve the anxiety bound up with poverty, loneliness, discrimination or the ecological crisis. Effective treatment therefore requires a broader shift that recognises the political determinants of distress.

Putting the impact of "brain plasticity" in perspective

Recent neuroscience studies, widely reported because they are so intriguing, suggest that psychedelics may promote brain plasticity, that is, the capacity to learn new things or modify one's cognitive patterns. Yet do people who take psychedelics systematically become anticapitalist, environmentalist, pacifist, humanist hippies? Far from it.

While some studies observe lasting changes towards greater openness or connectedness, these effects depend on context, expectations and pre-existing values. Someone already concerned with animal welfare may find that sensitivity reinforced after a psychedelic experience and become vegetarian; conversely, research also documents far-right appropriations of psychedelic imaginaries.

The psychedelic ecosystem is also shot through with extractivist, entrepreneurial and neocolonial logics. The risk of private capture (mass patent filings, therapeutic retreats reserved for elites, supervision pared to a minimum to cut the cost of care) is real and well documented in the US case.




Read more:
When capitalism takes an interest in psychedelics: for better and for worse


In Silicon Valley, the use of psychedelics to optimise performance at work illustrates these excesses. Legal scholars and anthropologists are also sounding the alarm about biopiracy and the need to protect Indigenous traditions linked to substances such as the peyote cactus in Mexico and the iboga shrub in Gabon, both severely threatened by overexploitation; these problems are compounded by the broader issues of the ongoing expropriation of Indigenous lands, deforestation and climate change.

Others point out that traditional uses can also be part of warlike or malevolent logics. Contrary to received ideas widely shared in some psychedelic circles, nothing therefore allows these substances to be held up as universal catalysts of positive transformation.

Against a moral hierarchy of psychoactive drugs

Because of their low toxicity and the rarity of documented cases of addiction to these substances, psychedelics are sometimes set apart from other psychoactive drugs. The American neuroscientist Carl Hart criticises this distinction between "good drugs" (psychedelics) and "bad drugs" (opioids, stimulants), which feeds a stigmatising rhetoric legitimising discriminatory repressive policies that particularly affect racialised and precarious populations.

Granting preferential treatment to psychedelics alone in drug-reform debates, on the basis of this supposed exceptionalism, would be deeply damaging. It is worth recalling that, when used with adequate knowledge, and despite an illegal market without quality control, the majority of drug use, heroin and crack included, is considered "non-problematic", that is, without major harm to health, social relationships or work. According to the UN Office on Drugs and Crime's global overview of drug demand and supply (UNODC, 2017), roughly 11% of people worldwide who use drugs are estimated to suffer from use-related disorders.

It should also be remembered that recreational or experiential use of psychedelics carries risks: psychiatric exacerbations in vulnerable individuals, the re-experiencing of trauma, very intense experiences devoid of meaning, depersonalisation, as well as dangerous interactions with other medicines; hence the importance of "set and setting" techniques designed to make the experience safer (set referring to attending to the person's psychological wellbeing, setting to the environment in which the experience takes place).

The point is to make these risks known to users so that they can consume more safely, in line with harm-reduction principles. Presenting these substances as different from other psychoactive drugs can only be harmful, by diminishing the perception of those risks.

The need for a slow and rigorous evaluation of psychedelics

The wider adoption of psychedelics in therapy can only come at the price of a slow and rigorous evaluation of their benefits, placing patients' wellbeing and long-term support at the heart of practice. Such a model presupposes political decisions to fund and reimburse this kind of provision, a move in the opposite direction to what we are currently seeing in the health field.

Psychedelics are not exceptional products. They must be part of a broader, more nuanced reflection on the evolution of our representations of all psychoactive drugs, from medical uses to exploratory or pleasure-seeking ones, informed by science rather than by moral panics, political agendas or fashions.

The Conversation

Zoë Dubus is a founding member of the Société psychédélique française.

ref. LSD, champignons hallucinogènes, ayahuasca… contre la tentation d’un « exceptionnalisme psychédélique » – https://theconversation.com/lsd-champignons-hallucinogenes-ayahuasca-contre-la-tentation-dun-exceptionnalisme-psychedelique-271779