Is it illegal to make online videos of someone without their consent? The law on covert filming

Source: The Conversation – UK – By Subhajit Basu, Professor of Law and Technology, University of Leeds

Could those glasses be recording you? Lucky Business/Shutterstock

Imagine a stranger starts chatting with you on a train platform or in a shop. The exchange feels ordinary. Later, it appears online, edited as “dating advice” and framed to invite sexualised commentary. Your face, and an interaction you didn’t know was being recorded, is pushed into feeds where strangers can identify, contact and harass you.

This is a reality for many people, though the most shocking examples mainly affect women. A BBC investigation recently found that men based outside the UK have been profiting from covertly filming women on nights out in London and Manchester and posting the videos on social media.

In the UK, filming someone in public – even covertly – is not automatically unlawful. Sometimes, it is socially valuable (think of people recording violence or police misconduct).

But once a person is identifiable and the clip is uploaded for views or profit, it can become unlawful under data protection law and, in more intrusive cases, privacy or harassment law. The problem here is what the filming is for, how it is done and what the platforms do with it.

UK law is cautious about a general claim to “privacy in public”. There is a key distinction in case law between being seen in a public place and being recorded for redistribution.

Courts have accepted that privacy can apply even in public, depending on circumstances. In the case of Campbell v MGN (2004), the House of Lords ruled that the Daily Mirror had breached model Naomi Campbell’s privacy by publishing photos that, while taken in public, exposed her private medical information.

The rise of smartphones and now wearable cameras has made covert capture cheaper, more discreet and more accessible. With smart glasses, recording can look like eye contact.

Capture is frictionless: the file is ready to upload before the person filmed even knows it exists. And manufacturer safeguards such as recording lights are already reportedly being bypassed by users.

Once it has been uploaded, modern social media platforms make this content easily scalable, searchable and profitable.

Context is what shifts the stakes. Covert filming, an intrusive focus on the body and publication at scale can turn an everyday moment into exposure that invites harassment.

Privacy in public

Public life has always involved being seen. The harm is being made findable and targetable, at scale. This is why the most practical legal tool is data protection. Under the UK General Data Protection Regulation (GDPR), when people are identifiable in a video, recording and uploading it is considered processing of personal data.

The uploader and platform must therefore comply with GDPR rules, which in this case would (usually) mean not posting identifiable footage of a stranger in the first place, or removing the details that identify them and taking the clip down quickly if the person objects.

UK GDPR does not apply to purely personal or household activity, with no professional or commercial connection. This is a narrow exemption – “pickup artist” channels and monetised social media posts are unlikely to fall within it.

Harassment law may apply where the filming and posting is followed by repeated contact, threats or encouraging others to target the person filmed, which causes them alarm or distress.

Lagging enforcement

Harm spreads faster than the law can respond. A clip can be uploaded, shared and monetised within seconds. Enforcement of privacy and data protection law is split between the Information Commissioner’s Office, Ofcom, police and courts.

Victims are left to rely on platform reporting tools, and duplicates often continue to spread even after posts are taken down. Arguably, prevention would be more effective than after-the-fact removal.

The temptation is to call for a new offence of “filming in public”. In my view, this risks being either too broad (chilling legitimate recording) or too narrow (missing the combination of factors – covert filming, identifiability, platform amplification and monetisation – that makes this a problem).

A better approach would be twofold. First, treating wearable recording devices as higher-risk consumer tech, and requiring safeguards that work in practice. For example: conspicuous, genuinely tamper-resistant recording indicators; privacy-by-default settings; and audit logs so misuse is traceable. The law could build in clear public-interest exemptions (journalism, documenting wrongdoing) so rules do not become a backdoor ban on recording.

There are precedents for regulating consumer tech in this way. For example, the UK has strict security requirements for connectable devices like smart TVs to prevent cyberattacks.

View through augmented reality smart glasses
Wearable cameras and AI-enabled tech are making covert filming easier than ever.
Kaspars Grinvalds/Shutterstock

Second, platforms need a clear requirement to reduce the harm caused by covert filming. In practice, that means spotting and obscuring identifiers such as phone numbers and workplace details, warning users when a stranger is identifiable, fast-tracking complaints from the person filmed, blocking re-uploads, and removing monetisation from this content.

The Online Safety Act provides a framework for addressing this problem, but it is not a neat checklist for prevention. Where it clearly applies is when the content itself, or the response it triggers, amounts to illegal harassment or stalking. Those are priority offences in the act, so platforms are expected to assess and mitigate those risks.

The awkward truth is that some covert, degrading clips may be harmful without being obviously illegal at the point of upload, until threats, doxxing or stalking follow.

Privacy in public will not be protected by slogans or a tiny recording light. It will be protected when existing legal principles are applied robustly, and when enforcement is designed for the speed, incentives and business models that shape what people see and share online.

The Conversation

Subhajit Basu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Is it illegal to make online videos of someone without their consent? The law on covert filming – https://theconversation.com/is-it-illegal-to-make-online-videos-of-someone-without-their-consent-the-law-on-covert-filming-274885

Why the idea of an ‘ideal worker’ can be so harmful for people with mental health conditions

Source: The Conversation – UK – By Hadar Elraz, Senior Lecturer in Human Resource Management and Organisational Behaviour, Swansea University

PeopleImages/Shutterstock

In the modern world of work, the “ideal worker” is a dominant yet dangerous concept that can dictate workplace norms and expectations. This archetype describes an employee who is boundlessly productive, constantly available and emotionally stable at all times.

What makes this trope so flawed is that it assumes workers have no caring responsibilities outside work and credits them with unrealistic physical and psychological capabilities. It’s intended to drive efficiency, but in fact it is a standard that very few people can reach. It marginalises people who deviate from these rigid standards, including workers managing mental health conditions.

We are researchers in management and health, and our recent paper found that this “ideal worker” is a means of creating stigma. This stigma is embedded in processes and policies, creating a yardstick against which all employees are measured.

The study is based on in-depth interviews with a diverse group of employees with mental health conditions (including depression, bipolar disorder, anxiety and OCD). They worked across the private, public and third sectors in various jobs, including accounting, engineering, teaching and senior management.

For workers with mental health conditions, the expectation of emotional steadiness creates a conflict with the often fluctuating nature of their conditions.

When organisations are seen to value the ideal worker archetype, they can end up creating barriers to meaningful inclusion. In our paper we understand these as both “barriers to doing” and “barriers to being”.

What this means is that workplaces end up with rigid workloads and inflexible expectations (“barriers to doing”). As such, they fail to accommodate people with invisible or fluctuating symptoms. They can also undermine a worker’s identity and self-worth (“barriers to being”), framing them as unreliable or incompetent simply because they do not meet the standards of the ideal worker.

Because employees with mental health conditions often fear being perceived as weak, a burden or fragile, they frequently work excessively hard to prove their value. This means that these employees might compromise their resting and unwinding time in order to live up to workplace expectations.

But of course, these efforts create strain at the personal level. These workers can end up putting themselves at greater risk of relapse or ill health. Our research found that overworking to mask mental health symptoms (working unpaid hours to make up for times when they are unwell, for example) can suggest an organisational culture that may not be inclusive enough.

What’s really happening

HR practices may assume that mental health conditions should be managed by employees alone, rather than with support from the organisation. At the same time, this constant pressure to over-perform can exacerbate mental health conditions, leading to a vicious cycle of stress, exhaustion and even more stigma.

The ideal worker norm forces many employees into keeping their mental health conditions to themselves. They may see hiding their struggles as a tactical way of protecting their professional identity.

In an environment that rewards constant productivity, disclosing a condition that might require reasonable adjustments could be seen as a professional risk. In other words, stigma may compromise career chances.

Participants in our research reported lying on health questionnaires or hiding symptoms because the climate in their workplace signalled that mental health conditions were poorly understood. But this secrecy creates a massive emotional burden, as workers felt pressure to constantly monitor their health, mask their condition and schedule medical appointments in secret.

Paradoxically, while this approach allows people to remain employed, it reinforces the structures that demand their silence. And it ensures that workplace support remains invisible or inaccessible.

Lone woman working at a desk in an office at night.
The research found that some workers put in extra unpaid hours to try to achieve ‘ideal’ levels of productivity.
Gorodenkoff/Shutterstock

Our analysis showed a stark contrast between perceptions of support for people with physical impairments and that for employees with mental health conditions. While physical aids like ramps are often visible and accepted, workers setting out their mental health needs frequently faced the risk of stigma, ignorance or disbelief.

By holding on to the ideal worker archetype, organisations are not only failing to fulfil their duty of care. They may also be undermining their own long-term sustainability if they lose skilled labour. Then there are the costs of constant recruitment and retraining.

Managing stigma is a workplace burden that can lead to burnout or divert energy away from a worker’s core tasks. We suggest a fundamental shift for employers: moving away from chasing the “ideal worker” towards creating “ideal workplaces” instead. This means challenging the assumption that productivity must be uninterrupted and that emotional stability is a prerequisite for professional value.

It also means focusing on the quality of an employee’s contribution rather than judging their constant availability or productivity. And it means designing work environments from the ground up to support diverse needs, so that mental health conditions are normalised. This would reduce the need for employees to keep conditions secret.

Ultimately, the problem with the ideal worker archetype is that it is a persistent myth that ignores the reality of human diversity. True equity requires organisations to stop trying to shape individuals to fit the mould and instead rethink work norms to support all employees so that everyone can play a part in enhancing the business.

The Conversation

Hadar Elraz disclosed that this study was supported by the UK Economic and Social Research Council. She disclosed no relevant affiliations beyond her academic appointment.

Jen Remnant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the idea of an ‘ideal worker’ can be so harmful for people with mental health conditions – https://theconversation.com/why-the-idea-of-an-ideal-worker-can-be-so-harmful-for-people-with-mental-health-conditions-274350

The mental edge that separates elite athletes from the rest

Source: The Conversation – Canada – By Mallory Terry, Postdoctoral Fellow, Faculty of Science, McMaster University

Elite sport often looks like a test of speed, strength and technical skill. Yet some of the most decisive moments in high-level competition unfold too quickly to be explained by physical ability alone.

Consider Canadian hockey superstar Connor McDavid’s overtime goal at the 4 Nations Face-Off against the United States last February. The puck was on his stick for only a fraction of a second, the other team’s defenders were closing in and he still somehow found the one opening no one else saw.

As professional hockey players return to the ice at the Milan-Cortina Olympics, Canadians can expect more moments like this. Increasingly, research suggests these moments are better understood not as just physical feats, but also as cognitive ones.

A growing body of research suggests a group of abilities known as perceptual-cognitive skills are key differentiators. This is the mental capacity to turn a blur of sights, sounds and movements into split-second decisions.

These skills allow elite athletes to scan a chaotic scene, pick out the right cues and act before anyone else sees the opportunity. In short, they don’t just move faster, but they also see smarter.

Connor McDavid Wins 4 Nations Face-Off For Canada In Overtime (Sportsnet)

How athletes manage visual chaos

One way researchers study these abilities is through a task known as multiple-object tracking, which involves keeping tabs on a handful of moving dots on a screen while ignoring the rest. Multiple-object tracking is a core method I use in my own research on visual attention and visual-motor co-ordination.

Multiple-object tracking taxes attention, working memory and the ability to suppress distractions. These are the same cognitive processes athletes rely on to read plays and anticipate movement in real time.
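
To make the task concrete, here is a minimal Python sketch of one multiple-object tracking trial. It assumes simple random-walk dot motion and a noisy observer; the dot counts, trial length and noise level are illustrative values, not parameters from any published study.

import random

def run_mot_trial(n_dots=8, n_targets=4, n_steps=200, noise=0.5, arena=100.0):
    """Simulate one trial: dots drift randomly while an observer keeps
    noisy position estimates of the designated targets."""
    positions = [[random.uniform(0, arena), random.uniform(0, arena)]
                 for _ in range(n_dots)]
    targets = random.sample(range(n_dots), n_targets)
    estimates = {i: list(positions[i]) for i in targets}  # starts accurate
    for _ in range(n_steps):
        for pos in positions:  # every dot, target or not, takes a random step
            pos[0] += random.gauss(0, 1.0)
            pos[1] += random.gauss(0, 1.0)
        for i, est in estimates.items():  # observer re-fixates each target, imperfectly
            est[0] += (positions[i][0] - est[0]) * 0.8 + random.gauss(0, noise)
            est[1] += (positions[i][1] - est[1]) * 0.8 + random.gauss(0, noise)
    # Score: an estimate counts as "kept" if the dot nearest to it really is a target.
    kept = 0
    for est in estimates.values():
        nearest = min(range(n_dots), key=lambda i: (positions[i][0] - est[0]) ** 2
                      + (positions[i][1] - est[1]) ** 2)
        kept += nearest in targets
    return kept / n_targets

print(f"proportion of targets kept: {run_mot_trial():.2f}")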

Unsurprisingly, elite athletes reliably outperform non-athletes on this task. After all, reading plays, tracking players and anticipating movement all depend on managing visual chaos.

There is, however, an important caveat. Excelling at multiple-object tracking will not suddenly enable someone to anticipate a play like McDavid or burst past a defender like Marie-Philip Poulin, captain of the Canadian women’s hockey team. Mastering one narrow skill doesn’t always transfer to real-world performance. Researchers often describe this limitation as the “curse of specificity.”

This limitation raises a deeper question about where athletes’ mental edge actually comes from. Are people with exceptional perceptual-cognitive abilities drawn to fast-paced sports, or do years of experience sharpen these abilities over time?

Evidence suggests the answer is likely both.

Born with it or trained over time?

Elite athletes, radar operators and even action video game players — all groups that routinely track dynamic, rapidly changing scenes — consistently outperform novices on perceptual-cognitive tasks.

At the same time, they also tend to learn these tasks faster, pointing to the potential role of experience in refining these abilities.

What seems to distinguish elite performers is not necessarily that they take in more information, but that they extract the most relevant information faster. This efficiency may ease their mental load, allowing them to make smarter, faster decisions under pressure.

My research at McMaster University seeks to solve this puzzle by understanding the perceptual-cognitive skills that are key differentiators in sport, and how to best enhance them.

This uncertainty around how to best improve perceptual-cognitive skills is also why we should be cautious about so-called “brain training” programs that promise to boost focus, awareness or reaction time.

The marketing is often compelling, but the evidence for broad, real-world benefits is far less clear. The value of perceptual-cognitive training hasn’t been disproven, but it hasn’t been tested rigorously enough in real athletic settings to provide compelling evidence. To date, though, tasks that include a perceptual element such as multiple-object tracking show the most promise.

Training perceptual-cognitive skills

Researchers and practitioners still lack clear answers about the best ways to train perceptual-cognitive skills, or how to ensure that gains in one context carry over to another. This doesn’t mean cognitive training is futile, but it does mean we need to be precise and evidence-driven about how we approach it.

Research does, however, point to several factors that increase the likelihood of real-world transfer.

Training is more effective when it combines high cognitive and motor demands, requiring rapid decisions under physical pressure, rather than isolated mental drills. Exposure to diverse stimuli matters as well, as it results in a brain that can adapt, not just repeat. Finally, training environments that closely resemble the game itself are more likely to produce skills that persist beyond the training session.

The challenge now is translating these insights from the laboratory into practical training environments. Before investing heavily in new perceptual-cognitive training tools, coaches and athletes need to understand what’s genuinely effective and what’s just a high-tech placebo.

For now, this means treating perceptual-cognitive training as a complement to sport-specific training, not as a substitute. Insights will also come from closer collaborations between researchers, athletes and coaches.

There is, however, support for incorporating perceptual-cognitive tasks as an assessment of “game sense” to inform scouting decisions.

The real secret to seeing the game differently, then, is not just bigger muscles or faster reflexes. It’s a sharper mind, and understanding how it works could change how we think about performance, both on and off the ice.

The Conversation

Mallory Terry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The mental edge that separates elite athletes from the rest – https://theconversation.com/the-mental-edge-that-separates-elite-athletes-from-the-rest-273758

Why Canada must step up to protect children in a period of global turmoil

Source: The Conversation – Canada – By Catherine Baillie Abidi, Associate Professor, Child & Youth Study, Mount Saint Vincent University

Over half a billion children are now living in conflict zones, according to a 2025 Save the Children report, and the world is turning its back on them.

At a time of unprecedented global insecurity, funding and resources to care for, protect and engage with children affected by armed violence continue to decline.

The Donald Trump administration’s recent announcement of unprecedented American cuts to funding for international organizations — including reductions to the United Nations Offices of the Special Representatives of the Secretary-General for Children in Armed Conflict and on Violence Against Children — further undermines an already fragile system.

Cuts like these can have a devastating effect on some of the world’s most vulnerable populations, undermining important work to identify and prevent violations against children, and to assist children in rebuilding their lives in the aftermath of violence. Canada cannot sit on the sidelines.

Preventing violence against children

Violence against children is a global crisis. Without a seismic shift in how states take action to prevent such violence, the costs will continue to be felt around the world.

As a global community, we have a collective responsibility to build communities where children are not only safe and thriving, but where their capacity and agency as future peace-builders, leaders and decision-makers in their families, schools and communities are built upon and nurtured in wartime and post-conflict societies. These are core responsibilities that the global community is failing at miserably.

As many as 520 million war-affected children deserve better.

Canada has a long history of serving as a champion of children’s rights in armed conflict. Canadians have led global initiatives, including convening the first International Conference on War-affected Children, championing the Ottawa Treaty to ban landmines and developing the Vancouver Principles on Peacekeeping and Preventing the Recruitment and Use of Child Soldiers.

Canada is also the founder and chair of the Group of Friends of Children and Armed Conflict, an informal but vital UN network focused on child protection.

Now more than ever — amid American economic and political disengagement from core child protection priorities — there is both an opportunity and an imperative for Canada to demonstrate active leadership in the promotion of children’s rights and enhanced safety for children impacted by the devastation of armed conflict.

Complacency threatens to perpetuate generational impacts of violence.

Global leadership required

The Canadian government must once again stand up and provide global leadership on children and armed conflict by bolstering strategic alliances and funding efforts to protect and engage children impacted by armed conflict.

As a community of Canadian scholars dedicated to studying children, organized violence and armed conflict, we are deeply concerned about the growing vulnerability of children worldwide.

We see an opportunity for Canada to reclaim its role as a global leader in advancing and protecting children’s rights, especially in a time of political upheaval and heightened global insecurity. Canada can reassert itself and live up to its global reputation as a force for good in the world. It can stand on the global stage and draw attention to a crisis with generational impacts.

Children need protection from the effects of war, but they also need to be seen as active agents of peace who understand their needs and can help secure better futures.

Investments of attention and funding today can make significant differences in the emotional and social development of children who are navigating post-conflict life.




Read more: The lasting scars of war: How conflict shapes children’s lives long after the fighting ends


Canada must take the lead

These investments are critical to the social structures of peaceful communities. Canada is well positioned to take on this role, not only because of the country’s history and reputation, but because Canadian scholars are at the forefront, are organized around this issue and can be leveraged for maximum impact.

Prime Minister Mark Carney’s recent, celebrated speech at the World Economic Forum’s annual meeting in Davos signalled a possible and important shift in alliances, priorities and global moral leadership for Canada.

Canadian foreign policy can build upon this. Making the vulnerability of children affected by armed conflict and the capacity of children to be agents of peace a key foreign policy issue would positively affect the lives of millions of children globally. It would also signal to the world that Canada is ready to take on the significant global human rights challenges it once did.


The following scholars, members of The Canadian Community of Practice on Children and Organized Violence & Armed Conflict, contributed to this article: Maham Afzaal, PhD Student, Queen’s University; Dr. Marshall Beier, McMaster University; Sophie Greco, PhD Candidate, Wilfrid Laurier University; Ethan Kelloway, Honours Student, Mount Saint Vincent University; Dr. Marion Laurence, Dalhousie University; Dr. Kate Swanson, Dalhousie University; Orinari Wokoma, MA student, Mount Saint Vincent University.

The Conversation

Catherine Baillie Abidi receives funding from the Social Sciences and Humanities Research Council of Canada.

Izabela Steflja receives funding from the Social Sciences and Humanities Research Council of Canada.

Kirsten J. Fisher receives funding from the Social Sciences and Humanities Research Council of Canada.

Myriam Denov receives funding from the Social Sciences and Humanities Research Council of Canada, and the Canada Research Chair Program.

ref. Why Canada must step up to protect children in a period of global turmoil – https://theconversation.com/why-canada-must-step-up-to-protect-children-in-a-period-of-global-turmoil-274398

Lessons from the sea: Nature shows us how to get ‘forever chemicals’ out of batteries

Source: The Conversation – Canada – By Alicia M. Battaglia, Postdoctoral Researcher, Department of Mechanical & Industrial Engineering, University of Toronto

As the world races to electrify everything from cars to cities, the demand for high-performance, long-lasting batteries is soaring. But the uncomfortable truth is this: many of the batteries powering our “green” technologies aren’t as green as we might think.

Most commercial batteries rely on fluorinated polymer binders to hold them together, such as polyvinylidene fluoride. These materials perform well — they’re chemically stable, resistant to heat and very durable. But they come with a hidden environmental price.

Fluorinated polymers are derived from fluorine-containing chemicals that don’t easily degrade, releasing persistent pollutants called PFAS (per- and polyfluoroalkyl substances) during their production and disposal. Once they enter the environment, PFAS can remain in water, soil and even human tissue for hundreds of years, earning them the nickname “forever chemicals.”

We’ve justified their use because they increase the lifespan and performance of batteries. But if the clean energy transition relies on materials that pollute, degrade ecosystems and persist in the environment for years, is it really sustainable?

As a graduate student, I spent years thinking about how to make batteries cleaner — not just in how they operate, but in how they’re made. That search led me somewhere unexpected: the ocean.




Read more: Living with PFAS ‘forever chemicals’ can be distressing. Not knowing if they’re making you sick is just the start


Why binders are important

An electric car plugged in to charge.
Most commercial batteries rely on fluorinated polymer binders to hold them together. These materials perform well but come with an environmental cost.
(Unsplash/CHUTTERSNAP)

Every rechargeable battery has three essential components: two electrodes separated by a liquid electrolyte that allows charged atoms (ions) to flow between them. When you charge a battery, the ions move from one electrode to the other, storing energy.

When you use the battery, the charged atoms flow back to their original side, releasing that stored energy to power your phone, car or the grid.

Each electrode is a mixture of three parts: an active material that stores and releases energy, a conductive additive that helps electrons move and a binder that holds everything together.

The binder acts like glue, keeping particles in place and preventing them from dissolving during use. Without it, a battery would be unable to hold a charge after only a few uses.
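
To see in rough terms why that glue matters, here is a toy Python calculation. It assumes that when a binder is weak, a small fraction of the active material detaches and loses electrical contact on every cycle; the detachment rates are invented for illustration, not measurements from this or any other study.

def capacity_after(cycles, detach_rate):
    """Fraction of original capacity left if `detach_rate` of the active
    material loses electrical contact on every charge-discharge cycle."""
    return (1 - detach_rate) ** cycles

# Hypothetical per-cycle loss rates for a strong versus a failing binder.
for label, rate in [("strong binder", 0.00002), ("weak binder", 0.002)]:
    print(f"{label}: {capacity_after(1000, rate):.0%} capacity after 1,000 cycles")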

Lessons from the sea

Many marine organisms have evolved in remarkable ways to attach themselves to wet, slippery surfaces. Mussels, barnacles, sandcastle worms and octopuses produce natural adhesives to stick to rocks, ship hulls and coral in turbulent water — conditions that would defeat most synthetic glues.

For mussels, the secret lies in molecules called catechols. Their sticky proteins contain an unusual catechol-bearing amino acid that forms strong bonds with surfaces and hardens almost instantly when exposed to oxygen. This chemistry has already inspired synthetic adhesives used to seal wounds, repair tendons and create coatings that stick to metal or glass underwater.

Building on this idea, I began exploring a related molecule called gallol. Like catechol in mussels, gallol is used by marine plants and algae to cling to wet surfaces. Its chemical structure is very similar to catechol, but it contains an extra functional group that makes it even more adhesive and versatile. It can form multiple types of strong, durable and reversible bonds — properties that make it an excellent battery binder.

A group of mussels stuck to a rock.
Mussels use molecules called catechols to stick to surfaces.
(Unsplash/Manu Mateo)

A greener solution

Working with Prof. Dwight S. Seferos at the University of Toronto, we developed a polymer binder based on gallol chemistry and paired it with zinc, a safer and more abundant metal than lithium. Unlike lithium, zinc is non-flammable and easier to source sustainably, making it ideal for large-scale applications.

The results were remarkable. Our gallol-based zinc batteries maintained 52 per cent higher energy efficiency after 8,000 charge-discharge cycles compared to conventional batteries that use fluorinated binders. In practical terms, that means longer-lasting devices, fewer replacements and a smaller environmental footprint.

Our findings are proof that performance and sustainability can go hand-in-hand. Many in industry might still view “green” and “effective” as competing priorities, with sustainability an afterthought. That logic is backwards.

We can’t build a truly clean energy future using polluting materials. For too long, the battery industry has focused on performance at any cost, even if that cost includes toxic waste, hard-to-recycle materials and unsustainable and unethical mining practices. The next generation of technologies must be sustainable by design, built from sources that are renewable, biodegradable and circular.

Nature has been running efficient, self-renewing systems for billions of years. Mussels, shellfish and seaweeds build materials that are strong, flexible and biodegradable. No waste and no forever chemicals. It’s time we started paying attention.

The ocean holds more than beauty and biodiversity; it may also hold the blueprint for the future of energy storage. But realizing that future requires a cultural shift in science, one that rewards innovation that heals, not just innovation that performs.

We don’t need to sacrifice progress to protect the planet. We just need to design with the planet in mind.

The Conversation

This research was supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, and the Ontario Research Fund. Alicia M. Battaglia received funding from the Ontario Graduate Scholarship Program.

ref. Lessons from the sea: Nature shows us how to get ‘forever chemicals’ out of batteries – https://theconversation.com/lessons-from-the-sea-nature-shows-us-how-to-get-forever-chemicals-out-of-batteries-273098

AI is coming to Olympic judging: what makes it a game changer?

Source: The Conversation – France – By Willem Standaert, Associate Professor, Université de Liège

As the International Olympic Committee (IOC) embraces AI-assisted judging, this technology promises greater consistency and improved transparency. Yet research suggests that trust, legitimacy, and cultural values may matter just as much as technical accuracy.

The Olympic AI agenda

In 2024, the IOC unveiled its Olympic AI Agenda, positioning artificial intelligence as a central pillar of future Olympic Games. This vision was reinforced at the very first Olympic AI Forum, held in November 2025, where athletes, federations, technology partners, and policymakers discussed how AI could support judging, athlete preparation, and the fan experience.

At the 2026 Winter Olympics in Milano-Cortina, the IOC is considering using AI to support judging in figure skating (men’s and women’s singles and pairs), helping judges precisely identify the number of rotations completed during a jump. Its use will also extend to disciplines such as big air, halfpipe, and ski jumping (ski and snowboard events where athletes link jumps and aerial tricks), where automated systems could measure jump height and take-off angles. As these systems move from experimentation to operational use, it becomes essential to examine what could go right… or wrong.

Judged sports and human error

In Olympic sports such as gymnastics and figure skating, which rely on panels of human judges, AI is increasingly presented by international federations and sports governing bodies as a solution to problems of bias, inconsistency, and lack of transparency. Judging officials must assess complex movements performed in a fraction of a second, often from limited viewing angles, for several hours in a row. Post-competition reviews show that unintentional errors and discrepancies between judges are not exceptions.

This became tangible again in 2024, when a judging error involving US gymnast Jordan Chiles at the Paris Olympics sparked major controversy. In the floor final, Chiles initially received a score that placed her fourth. Her coach then filed an inquiry, arguing that a technical element had not been properly credited in the difficulty score. After review, her score was increased by 0.1 points, temporarily placing her in the bronze medal position. However, the Romanian delegation contested the decision, arguing that the US inquiry had been submitted too late – exceeding the one-minute window by four seconds. The episode highlighted the complexity of the rules, how difficult it can be for the public to follow the logic of judging decisions, and the fragility of trust in panels of human judges.

Moreover, fraud has also been observed: many still remember the figure skating judging scandal at the 2002 Salt Lake City Winter Olympics. After the pairs event, allegations emerged that a judge had favoured one duo in exchange for promised support in another competition – revealing vote-trading practices within the judging panel. It is precisely in response to such incidents that AI systems have been developed, notably by Fujitsu in collaboration with the International Gymnastics Federation.

What AI can (and cannot) fix in judging

Our research on AI-assisted judging in artistic gymnastics shows that the issue is not simply whether algorithms are more accurate than humans. Judging errors often stem from the limits of human perception, as well as the speed and complexity of elite performances – making AI appealing. However, our study involving judges, gymnasts, coaches, federations, technology providers, and fans highlights a series of tensions.

AI can be too exact, evaluating routines with a level of precision that exceeds what human bodies can realistically execute. For example, where a human judge visually assesses whether a position is properly held, an AI system can detect that a leg or arm angle deviates by just a few degrees from the ideal position, penalising an athlete for an imperfection invisible to the naked eye.
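
To make the idea concrete, here is a minimal Python sketch of how a judging system might turn pose-estimation keypoints into a deduction. The keypoints, the 180-degree “ideal” and the penalty thresholds are illustrative assumptions, not the actual rules of any federation or vendor system.

import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def deduction(angle, ideal=180.0):
    """Map the deviation from an ideal straight-leg angle to a penalty."""
    deviation = abs(ideal - angle)
    if deviation <= 5:
        return 0.0   # within tolerance: no penalty
    if deviation <= 15:
        return 0.1   # small execution error
    return 0.3       # large execution error

# Hip, knee and ankle keypoints (x, y) from a single video frame.
hip, knee, ankle = (0.0, 0.0), (1.0, -1.0), (2.05, -1.9)
angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {angle:.1f} degrees, deduction: {deduction(angle)}")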

While AI is often presented as objective, new biases can emerge through the design and implementation of these systems. For instance, an algorithm trained mainly on male performances or dominant styles may unintentionally penalise certain body types.

In addition, AI struggles to account for artistic expression and emotions – elements considered central in sports such as gymnastics and figure skating. Finally, while AI promises greater consistency, maintaining it requires ongoing human oversight to adapt rules and systems as disciplines evolve.

Action sports follow a different logic

Our research shows that these concerns are even more pronounced in action sports such as snowboarding and freestyle skiing. Many of these disciplines were added to the Olympic programme to modernise the Games and attract a younger audience. Yet researchers warn that Olympic inclusion can accelerate commercialisation and standardisation, at the expense of creativity and the identity of these sports.

A defining moment dates back to 2006, when US snowboarder Lindsey Jacobellis lost Olympic gold after performing an acrobatic move – grabbing her board mid-air during a jump – while leading the snowboard cross final. The gesture, celebrated within her sport’s culture, eventually cost her the gold medal at the Olympics. The episode illustrates the tension between the expressive ethos of action sports and institutionalised evaluation.

AI judging trials at the X Games

AI-assisted judging adds new layers to this tension. Earlier research on halfpipe snowboarding had already shown how judging criteria can subtly reshape performance styles over time. Unlike other judged sports, action sports place particular value on style, flow, and risk-taking – elements that are especially difficult to formalise algorithmically.

Yet AI was already tested at the 2025 X Games, notably during the snowboard SuperPipe competitions – a larger version of the halfpipe, with higher walls that enable bigger and more technical jumps. Video cameras tracked each athlete’s movements, while AI analysed the footage to generate an independent performance score. This system was tested alongside human judging, with judges continuing to award official results and medals. However, the trial did not affect official outcomes, and no public comparison has been released regarding how closely AI scores aligned with those of human judges.

Nonetheless, reactions were sharply divided: some welcomed greater consistency and transparency, while others warned that AI systems would not know what to do when an athlete introduces a new trick – something often highly valued by human judges and the crowd.

Beyond judging: training, performance and the fan experience

The influence of AI extends far beyond judging itself. In training, motion tracking and performance analytics increasingly shape technique development and injury prevention, influencing how athletes prepare for competition. At the same time, AI is transforming the fan experience through enhanced replays, biomechanical overlays, and real-time explanations of performances. These tools promise greater transparency, but they also frame how performances are understood – adding more “storytelling” around what can be measured, visualised, and compared.

At what cost?

The Olympic AI Agenda’s ambition is to make sport fairer, more transparent, and more engaging. Yet as AI becomes integrated into judging, training, and the fan experience, it also plays a quiet but powerful role in defining what counts as excellence. If elite judges are gradually replaced or sidelined, the effects could cascade downward – reshaping how lower-tier judges are trained, how athletes develop, and how sports evolve over time. The challenge facing Olympic sports is therefore not only technological; it is institutional and cultural: how can we prevent AI from hollowing out the values that give each sport its meaning?




The Conversation

Willem Standaert does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. AI is coming to Olympic judging: what makes it a game changer? – https://theconversation.com/ai-is-coming-to-olympic-judging-what-makes-it-a-game-changer-274313

We run writing workshops at a South African university: what we’ve learnt about how students are using AI, and how to help them

Source: The Conversation – Africa – By Peet van Aardt, Coordinator: Initiative for Creative African Narratives (iCAN) & Lecturer: Academic Literacy, University of the Free State

Much is being said about the wonders of artificial intelligence (AI) and how it is the new frontier. And while it provides amazing possibilities in fields like medicine, academics are debating its advantages for university students. Peet van Aardt researches student writing and presents academic writing workshops at the University of the Free State Writing Centre, helping students to build clear arguments, summarise essay structure and express their opinions in their own voice. He also spearheads the Initiative for Creative African Narratives (iCAN), a project that assists students in getting their original stories published. Here he shares his experiences and thoughts on the use of generative AI at university.

What are your biggest concerns about the growth of AI-generated material from students?

The use of generative AI to compose assignments and write essays is widely reported, and its potentially detrimental effects on critical thinking and research are clear.

My biggest concern is that it takes away academic agency from students. By that I mean it takes the proverbial pen out of our students’ hands. If they over-rely on it (which we see they tend to do), they no longer think critically and no longer express their own voices.

Young man with a microphone
Student voice might be lost when AI does the writing.
Clout, Unsplash, CC BY

This is particularly important in African universities, where student voice and the intellectual contribution of students to society are drivers of social change and decolonisation.

How can you tell if a text is written by a student or is AI generated?

Flawless grammar and clichés are the first two signs. Generic, shallow reasoning is another. Finally, generative AI answers tend not to engage well with topics set in a local context.

If I take student short stories that have been submitted to our iCAN project as an example, I see more and more tales set in some unnamed place (previously, students’ stories often took place in their own towns) or adventures experienced by characters named Stacey, Rick, Damian or other American-sounding people.

Another example: third-year students studying Geography were asked to write a ten-page essay on the history and future of sustainability and how it applied to Africa. To guide them, the students were referred to a report that addresses challenges in sustainability. What we saw during our consultations in the writing centre were texts that discussed this report, as well as relevant topics such as “global inequality and environmental justice” and “linking human rights, sustainability and peace” – but nowhere was South Africa even mentioned. The students had clearly prompted their generative AI tool to produce an essay on only the first part of the assignment instructions.

Also, it’s quite easy to determine whether somebody did their own research and created their own arguments when they have to reflect on it.

When students don’t understand the text of their essay, it’s a sign that they didn’t produce it. As academics and writing coaches we increasingly encounter students who, instead of requiring help with their own essay or assignment, need assistance with their AI-produced text. Students ask questions about the meaning and relevance of the text.

Writing centre consultations have always relied on asking the students questions about their writing in an attempt to guide them on their academic exploration. But recently more time needs to be spent on reading what the students present as their writing, and then asking them what it means. Therefore, instead of specifics, we now need to take a step back and look at the bigger picture.

Not all students use generative AI poorly. That is why I still believe in using AI detection tools as a first “flag” in the process: it provides a place to start.

What interventions do you propose?

Students should be asked questions about the text, like:

  • Does what it is saying make sense?

  • Does this statement sound true?

  • Does it answer the lecturer’s question?

In some instances, teaching and learning are moving back to paper-based assignments, which I support. If possible, we should let students write with pens in controlled environments.

It’s also becoming more important to reignite the skill of academic reading so that students can understand what their AI assistant is producing. This points to the importance of reading for understanding, being able to question what was read, and being able to remember what one has read.

Generative AI is quite western and northern-centric. I believe we in academia have an opportunity to focus, where possible, on indigenous knowledge. Students should be encouraged to reflect on indigenous knowledge more often.

Lastly, academics should not over-rely on generative AI themselves if they don’t want their students to do so. As student enrolment numbers rise, time is becoming a rare luxury for academics, but we cannot expect students to take responsibility for their learning when we want to take shortcuts in our facilitation.

Have you changed your approach given these insights?

We have been revisiting our workshop materials to include more theory and practice on reading. Well-known strategies like the SQ3R method (to survey, question, read, recite and review a text) and the PIE approach (understanding that paragraphs Point to a main idea, support this by Illustration and Explain how and why the writer supports the main idea) are woven in, along with various activities to ensure students apply these strategies.

Our one-on-one consultations between students and trained, qualified academic writing experts continue to be integral.

If we as academics want to continue facilitating the learning process in students – and truly put them at the centre of education – we have to empower them to think critically and express themselves in their own voices.

The Conversation

Peet van Aardt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. We run writing workshops at a South African university: what we’ve learnt about how students are using AI, and how to help them – https://theconversation.com/we-run-writing-workshops-at-a-south-african-university-what-weve-learnt-about-how-students-are-using-ai-and-how-to-help-them-273286

NASA’s Artemis II plans to send a crew around the Moon to test equipment and lay the groundwork for a future landing

Source: The Conversation – USA – By Margaret Landis, Assistant Professor of Earth and Space Exploration, Arizona State University

A banner signed by NASA employees and contractors outside Launch Complex 39B, where NASA’s Artemis II rocket is visible in the background. NASA/Joel Kowsky, CC BY-NC-ND

Almost as tall as a football field is long, NASA’s Space Launch System rocket and capsule stack traveled slowly – just under 1 mile per hour – out to the Artemis II launchpad, its temporary home at the Kennedy Space Center in Florida, on Jan. 17, 2026. That slow crawl is in stark contrast to the peak velocity it will reach on launch day, over 22,000 miles per hour, when it will send a crew of four on a journey around the Moon.

While its first launch opportunity is on Feb. 8, a rocket launch is always at the mercy of a variety of factors outside of the launch team’s control – from the literal position of the planets down to flocks of birds or rogue boats near the launchpad. Artemis II may not be able to launch on Feb. 8, but it has backup launch windows available in March and April. In fact, Feb. 8 already represents a small schedule change from the initially estimated Feb. 6 launch opportunity opening.

Artemis II’s goal is to send people to pass by the Moon and be sure all engineering systems are tested in space before Artemis III, which will land astronauts near the lunar south pole.

If Artemis II is successful, it will be the first time any person has been back to the Moon since 1972, when Apollo 17 left to return to Earth. The Artemis II astronauts will fly by the far side of the Moon before returning home. While they won’t land on the surface, they will provide the first human eyes on the lunar far side since the 20th century.

To put this in perspective, no one under the age of about 54 has yet lived in a world where humans were that far away from Earth. The four astronauts will loop around the Moon on a 10-day voyage and return through a splashdown in the Pacific Ocean. As a planetary geologist, I’m excited for the prospect of people eventually returning to the Moon to do fieldwork on the first stepping stone away from Earth’s orbit.

A walkthrough of the Artemis II mission, which plans to take a crew around the Moon.

Why won’t Artemis II land on the Moon?

If you wanted to summit Mount Everest, you would first test out your equipment and check to make sure everything works before heading up the mountain. A lunar landing is similar. Testing all the components of the launch system and crew vehicle is a critical part of returning people safely to the surface of the Moon and then flying them back to Earth.

And compared to the lunar surface, Everest is a tropical paradise.

NASA has accomplished lunar landings before, but the 54-year hiatus means that most of the engineers who worked on Apollo have retired. Only four of the 12 astronauts who have walked on the Moon are still alive.

Technology now is also vastly different. The Apollo lunar landing module’s computer only had about 4 kilobytes of RAM. A single typical iPhone photo is a few megabytes in size, over 1,000 times larger than the Apollo lunar landing module’s memory.
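
For readers who want the arithmetic, here is a quick Python check; the 4 MB photo size is an assumed “few megabytes” value rather than a fixed specification.

apollo_ram_bytes = 4 * 1024      # roughly 4 KB of RAM on the Apollo computer
photo_bytes = 4 * 1024 * 1024    # an assumed 4 MB smartphone photo
print(f"one photo is ~{photo_bytes / apollo_ram_bytes:,.0f}x the Apollo computer's RAM")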

The two components of the Artemis II project are the rocket (the Space Launch System) and the crew capsule. Both have had a long road to the launchpad.

The Orion capsule was developed as part of the Constellation program, announced in 2005 and concluded in 2010. This program was a President George W. Bush-era attempt to move people beyond the space shuttle and International Space Station.

The Space Launch System started development in the early 2010s as a replacement vehicle for the Ares rocket, which was meant to be used with the Orion capsule in the Constellation program. The SLS rocket was used in 2022 for the Artemis I launch, which flew around the Moon without a crew. Boeing is the main contractor tasked with building the SLS, though over 1,000 separate vendors have been involved in the rocket’s fabrication.

The Apollo program, too, first sent a crewed capsule around the Moon without landing. Apollo 8, the first crewed spacecraft to leave Earth orbit, launched and returned home in December 1968. William Anders, one of the astronauts on board tasked with testing the components of the Apollo lunar spacecraft, captured the iconic “Earthrise” image during the mission.

The white and blue cloudy Earth is visible above a gray edge of the Moon's surface
The Apollo 8 ‘Earthrise’ image, showing the Earth over the horizon from the Moon. This image, acquired by William Anders, became famous for its portrayal of the Earth in its planetary context.
NASA

“Earthrise” was the first time people were able to look back at the Earth as part of a spacefaring species. The Earthrise image has been reproduced in a variety of contexts, including on a U.S. postage stamp. It fundamentally reshaped how people thought of their environment. Earth is still far and beyond the most habitable location in the solar system for life as we know it.

Unique Artemis II science

The Artemis II astronauts will be the first to see the lunar far side since the final Apollo astronauts left over 50 years ago. From the window of the Orion capsule, the Moon will appear at its largest to be about the size of a beach ball held at arm’s length.

Over the past decades, scientists have used orbiting satellites to image much of the lunar surface. Much of this imaging, especially at high spatial resolution, has been accomplished by the Lunar Reconnaissance Orbiter Camera, LROC.

LROC is made up of a few different cameras. The LROC’s wide angle and narrow angle cameras have both captured images of more than 90% of the lunar surface. The LROC Wide Angle Camera has a resolution on the lunar surface of about 100 meters per pixel – with each pixel in the image being about the length of an American football field.

The LROC narrow angle camera provides about 0.5 to 2 meters per pixel resolution. This means the average person would fit within about the length of one pixel from the narrow angle camera’s orbital images. It can clearly see large rocks and the Apollo lunar landing sites.
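
As a rough back-of-the-envelope check, here is a short Python snippet showing how many pixels a person of an assumed 1.8-meter height would span at the resolutions quoted above.

person_m = 1.8  # assumed height of an average person, in meters
for camera, m_per_px in [("Wide Angle Camera", 100.0),
                         ("Narrow Angle Camera, best case", 0.5),
                         ("Narrow Angle Camera, worst case", 2.0)]:
    print(f"{camera}: a person spans {person_m / m_per_px:.2f} pixels")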

If the robotic LROC has covered most of the lunar surface, why should the human crew of Artemis II look at it, at lower resolution?

Most images from space are not what would be considered “true” color, as seen by the human eye. Just like how the photos you take of an aurora in the night sky with a cellphone camera appear more dramatic than with the naked eye, the image depends on the wavelengths the detection systems are sensitive to.

Human astronauts will see the lunar surface in different colors than LROC. And something that human astronauts have that an orbital camera system cannot have is geology training. The Artemis II astronauts will make observations of the lunar far side and almost instantly interpret and adjust their observations.

The following mission, Artemis III, which will land astronauts on the lunar surface, is currently scheduled to launch by 2028.

What’s next for Artemis II

The Artemis II crew capsule and SLS rocket are now waiting on the launchpad. Before launch, NASA still needs to complete several final checks, including testing systems while the rocket is fueled. These include the emergency exit for the astronauts in case something goes wrong, as well as systems for safely moving fuel made of hydrazine – a molecule of nitrogen and hydrogen that is incredibly energy-dense.

Completing these checks follows the old aerospace adage of “test like you fly.” They will ensure that the Artemis II astronauts have everything working on the ground before departing for the Moon.

The Conversation

Margaret Landis receives research funding from NASA. She has been a member of the Planetary Society for over 20 years.

ref. NASA’s Artemis II plans to send a crew around the Moon to test equipment and lay the groundwork for a future landing – https://theconversation.com/nasas-artemis-ii-plans-to-send-a-crew-around-the-moon-to-test-equipment-and-lay-the-groundwork-for-a-future-landing-273688

A human tendency to value expertise, not just sheer power, explains how some social hierarchies form

Source: The Conversation – USA – By Thomas Morgan, Associate Professor of Evolutionary Anthropology, Institute of Human Origins, Arizona State University

Leaders can seem to emerge from the group naturally, based on their skill and expertise. Hiraman/E+ via Getty Images

Born on the same day, Bill and Ben both grew up to have high status. But in every other way they were polar opposites.

As children, Bill was well-liked, with many friends, while Ben was a bully, picking on smaller kids. During adolescence, Bill earned a reputation for athleticism and intelligence. Ben, flanked by his henchmen, was seen as formidable and dangerous. In adulthood, Bill was admired for his decision-making and diplomacy, but Ben was feared for his aggression and intransigence.

People sought out Bill’s company and listened to his advice. Ben was avoided, but he got his way through force.

How did Ben get away with this? Well, there’s one more difference: Bill is a human, and Ben is a chimp.

This hypothetical story of Bill and Ben highlights a deep difference between human and animal social life. Many mammals exhibit dominance hierarchies: forms of inequality in which stronger individuals use strength, aggression and allies to get better access to food or mating opportunities.

Human societies are more peaceable but not necessarily more equal. We have hierarchies, too – leaders, captains and bosses. Does this mean we are no more than clothed apes, our domineering tendencies cloaked under superficial civility?

I’m an evolutionary anthropologist, part of a team of researchers who set out to come to grips with the evolutionary history of human social life and inequality.

Building on decades of discoveries, our work supports the idea that human societies are fundamentally different from those of other species. People can be coercive, but unlike other species, we also create hierarchies of prestige – voluntary arrangements that allocate labor and decision-making power according to expertise.

This tendency matters because it can inform how we, as a society, think about the kinds of social hierarchies that emerge in a workplace, on a sports team or across society more broadly. Prestige hierarchies can be steep, with clear differences between high and low status. But when they work well, they can form part of a healthy group life from which everyone benefits.

In other primates, leaders secure their dominant roles with physical strength and aggression.
Anup Shah/DigitalVision via Getty Images

Equal by nature?

Primate-style dominance hierarchies, along with the aggressive displays and fights that build them, are so alien to most humans that some researchers have concluded our species simply doesn’t “do” hierarchy. Add to this the limited archaeological evidence for wealth differences prior to farming, and a picture emerges of humans as a peaceful and egalitarian species, at least until agriculture upended things 12,000 years ago.

But new evidence tells a more interesting story. Even the most egalitarian groups, such as the Ju/‘hoansi and Hadza in Africa or Tsimané in South America, still show subtle inequalities in status, influence and power. And these differences matter: High-ranking men get their pick of partners, sometimes multiple partners, and go on to have more children. Archaeologists have also uncovered sites that display wealth differences even without agriculture.

So, are we more like other species than we might care to imagine, or is there still something different about human societies?

Dominance and prestige

One oddity is in how human hierarchies form. In other animals, fighting translates physical strength into dominance. In humans, however, people often happily defer to leaders, even seeking them out. This deference creates hierarchies of prestige, not dominance.

Why do people do this? One current hypothesis is that we, uniquely, live in a world that relies on complex technologies, teaching and cooperation. In this world, expertise matters. Some people know how to build a kayak; others don’t. Some people can organize a team to build a house; others need someone else to organize them. Some people are great hunters; others couldn’t catch a cold.

In a world like this, everyone keeps an eye out for who has the skills and knowledge they need. Adept individuals can translate their ability into power and status. But, crucially, this status benefits everyone, not just the person on top.

That’s the theory, but where’s the evidence?

People pay attention to those who are skilled.
Virojt Changyencham/Moment via Getty Images

There are plenty of anthropological accounts of skillful people earning social status and bullies being quickly cut down. Lab studies have also found that people do keep an eye on how well others are doing, what they’re good at, and even whom others are paying attention to, and they use this to guide their own information-seeking.

What my colleagues and I wanted to do was investigate how these everyday decisions might lead to larger-scale hierarchies of status and influence.

From theory to practice

In a perfect world, we’d monitor whole societies for decades, mapping individual decisions to social consequences. In reality, this kind of study is impossible, so my team turned to a classic tool in evolutionary research: computer models. In place of real-world populations, we can build digital ones and watch their history play out in milliseconds instead of years.

In these simulated worlds, virtual people copied each other, watched whom others were learning from and accrued prestige. The setup was simple, but a clear pattern emerged: The stronger the tendency to seek out prestigious people, the steeper social influence hierarchies became.

Each dot represents a simulated person, sized according to their social influence. When prestige psychology is weak, most dots are of medium size, corresponding to an egalitarian group. When prestige psychology is strong, a handful of extremely prominent leaders emerge, as shown by the very large dots. The color of the dots corresponds to the beliefs of the simulated people. In egalitarian groups, beliefs are fluid and spread across the group. With hierarchical groups, leaders end up surrounded by like-minded followers.

Below a threshold, societies stayed mostly egalitarian; above it, they were led by a powerful few. In other words, “prestige psychology” – the mental machinery that guides whom people learn from – creates a societal tipping point.
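The study's code isn't reproduced in this article, but the feedback loop it describes – agents preferring to learn from those who already command attention – can be sketched in a few lines of Python. The update rule and parameter values below are illustrative assumptions, not the researchers' actual model.

    import random

    def simulate(alpha, n_agents=100, rounds=500, seed=0):
        """Toy prestige dynamic: each round, every agent picks a demonstrator
        with probability proportional to prestige**alpha, and being chosen
        raises the demonstrator's prestige. alpha stands in for the strength
        of 'prestige psychology'."""
        rng = random.Random(seed)
        prestige = [1.0] * n_agents
        for _ in range(rounds):
            weights = [p ** alpha for p in prestige]  # attention paid to each agent
            for learner in range(n_agents):
                demonstrator = rng.choices(range(n_agents), weights=weights)[0]
                if demonstrator != learner:
                    prestige[demonstrator] += 1.0     # being copied builds prestige
        return max(prestige) / sum(prestige)          # top agent's share of influence

    for alpha in (0.0, 1.0, 2.0):
        print(f"alpha={alpha}: top individual's share = {simulate(alpha):.2f}")

With alpha near zero, influence stays spread roughly evenly across the group; as alpha grows, early random advantages snowball until a handful of agents hold most of the influence – the tipping point described above.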

The next step was to bring real humans into the lab and measure their tendency to follow prestigious leaders. This can tell us whether we, as a species, fall above or below the tipping point – that is, whether our psychology favors egalitarian or hierarchical groups.

To do this, my colleagues and I put participants into small groups and gave them problems to solve. We recorded whom participants listened to, let them know whom their group mates were learning from, and used this information to estimate the strength of the human “hierarchy-forming” tendency. It was high – well above the tipping point for hierarchies to emerge – and our experimental groups ended up with clear leaders.

One doubt lingered: Our volunteers were from the modern United States. Can they really tell us about the whole human species?

Rather than repeat the study across dozens of cultures, we returned to modeling. This time, we let prestige psychology evolve. Each simulated person had their own tendency for how much they deferred to prestige. It guided their actions, affected their fitness and was passed on to their children with minor mutations.

Over thousands of generations, natural selection settled on the most successful psychology: a sensitivity to prestige nearly identical to the one we measured in real humans – and strong enough to produce the same sharp hierarchies.
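As a similarly hedged sketch, the mutation-selection step can be mimicked by giving each simulated agent its own heritable deference trait, paying agents according to the skill of whomever they copy, and letting higher-payoff agents leave more offspring. Note one deliberate simplification: this version attaches no cost to deference, so the trait simply drifts upward under selection, whereas the researchers' model evidently includes pressures that settle it at an intermediate, human-like value.

    import random

    def evolve(generations=200, n=100, mu=0.05, seed=1):
        """Toy evolutionary loop: 'alphas' holds each agent's heritable
        deference trait. Payoff equals the skill of the demonstrator the
        agent copies, and reproduction is fitness-proportional with small
        Gaussian mutations. All parameters are illustrative."""
        rng = random.Random(seed)
        alphas = [rng.uniform(0.0, 2.0) for _ in range(n)]
        for _ in range(generations):
            skills = [rng.random() for _ in range(n)]    # fresh skills each generation
            payoffs = []
            for a in alphas:
                weights = [s ** a for s in skills]       # higher a favors high skill
                demonstrator = rng.choices(range(n), weights=weights)[0]
                payoffs.append(skills[demonstrator])
            parents = rng.choices(range(n), weights=payoffs, k=n)
            alphas = [max(0.0, alphas[p] + rng.gauss(0.0, mu)) for p in parents]
        return sum(alphas) / n

    print(f"mean deference trait after 200 generations: {evolve():.2f}")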

Inequality for everyone?

In other primates, being at the bottom of the social ladder can be brutal, with routine harassment and bullying by group mates. Thankfully, human prestige hierarchies look nothing like this. Even without any coercion, people often choose to follow skilled or respected individuals because good leadership makes life easier for everyone. Natural selection, it seems, has favored the psychology that makes this possible.

Of course, reality is messier than any model or lab experiment. Our simulations and experiment didn’t allow for coercion or bullying, and so they give an optimistic view of how human societies might work – not how they do.

In the real world, leaders can selfishly abuse their authority or simply fail to deliver collective benefits. Even in our experiment, some groups rallied around below-average teammates, the snowballing tendency of prestige swamping signs of their poor ability. Leaders should always be held to account for the outcomes of their choices, and an evolutionary basis to prestige does not justify the oppression of the powerless by the powerful.

So hierarchies remain a double-edged sword. Human societies are unique in the benefits that hierarchies can bring to followers, but the old forces of dominance and exploitation have not disappeared. Still, the fact that natural selection favored a psychology that drives voluntary deference and powerful leaders suggests that, most of the time, prestige hierarchies are worth the risks. When they work well, we all reap the rewards.

The Conversation

Thomas Morgan has received research funding from DARPA, the NSF and the Templeton World Charity Foundation.

ref. A human tendency to value expertise, not just sheer power, explains how some social hierarchies form – https://theconversation.com/a-human-tendency-to-value-expertise-not-just-sheer-power-explains-how-some-social-hierarchies-form-271711

Certain brain injuries may be linked to violent crime – identifying them could help reveal how people make moral choices

Source: The Conversation – USA – By Christopher M. Filley, Professor Emeritus of Neurology, University of Colorado Anschutz Medical Campus

Neurological evidence is widely used in murder trials, but it’s often unclear how to interpret it. gorodenkoff/iStock via Getty Images Plus

On Oct. 25, 2023, a 40-year-old man named Robert Card opened fire with a semi-automatic rifle at a bowling alley and nearby bar in Lewiston, Maine, killing 18 people and wounding 13 others. Card was found dead by suicide two days later. His autopsy revealed extensive damage to the white matter of his brain, thought to be related to a traumatic brain injury, which some neurologists proposed may have played a role in his murderous actions.

Neurological evidence such as magnetic resonance imaging, or MRI, is widely used in court to show whether and to what extent brain damage induced a person to commit a violent act. That type of evidence was introduced in 12% of all murder trials and 25% of death penalty trials between 2014 and 2024. But it’s often unclear how such evidence should be interpreted because there’s no agreement on what specific brain injuries could trigger behavioral shifts that might make someone more likely to commit crimes.

We are two behavioral neurologists and a philosopher of neuroscience who have been collaborating over the past six years to investigate whether damage to specific regions of the brain might be somehow contributing to people’s decision to commit seemingly random acts of violence – as Card did.

With new technologies that go beyond simply visualizing the brain to analyze how different brain regions are connected, neuroscientists can now examine specific brain regions involved in decision-making and how brain damage may predispose a person to criminal conduct. This work may in turn shed light on how exactly the brain plays a role in people’s capacity to make moral choices.

Linking brain and behavior

The observation that brain damage can cause changes to behavior stretches back hundreds of years. In the 1860s, the French physician Paul Broca was one of the first in the history of modern neurology to link a mental capacity to a specific brain region. Examining the autopsied brain of a man who had lost the ability to speak after a stroke, Broca found damage to an area roughly beneath the left temple.

Broca could study his patients’ brains only at autopsy. So he concluded that damage to this single area caused the patient’s speech loss – and therefore that this area governs people’s ability to produce speech. The idea that cognitive functions were localized to specific brain areas persisted for well over a century, but researchers today know the picture is more complicated.

Researchers use powerful brain imaging technologies to identify how specific brain areas are involved in a variety of behaviors.

As brain imaging tools such as MRI have improved since the early 2000s, it has become increasingly possible to safely visualize people’s brains in stunning detail while they are alive. Meanwhile, other techniques for mapping connections between brain regions have helped reveal coordinated patterns of activity across networks of brain areas related to certain mental tasks.

With these tools, investigators can detect areas that have been damaged by brain disorders, such as strokes, and test whether that damage can be linked to specific changes in behavior. Then they can explore how that brain region interacts with others in the same network to get a more nuanced view of how the brain regulates those behaviors.

This approach can be applied to any behavior, including crime and immorality.

White matter and criminality

Complex human behaviors emerge from interacting networks that are made up of two types of brain tissue: gray matter and white matter.

Gray matter consists of regions of nerve cell bodies and branching nerve fibers called dendrites, as well as points of connection between nerve cells. It’s in these areas that the brain’s heavy computational work is done. White matter, so named because of a pale, fatty substance called myelin that wraps the bundles of nerves, carries information between gray matter areas like highways in the brain.

Brain imaging studies of criminality going back to 2009 have suggested that damage to a swath of white matter called the right uncinate fasciculus is somehow involved when people commit violent acts. This tract connects the right amygdala, an almond-shaped structure deep in the brain involved in emotional processing, with the right orbitofrontal cortex, a region in the front of the brain involved in complex decision-making. However, it wasn’t clear from these studies whether damage to this tract caused people to commit crimes or was just a coincidence.

In a 2025 study, we analyzed 17 cases from the medical literature in which people with no criminal history committed crimes such as murder, assault and rape after experiencing brain damage from a stroke, tumor or traumatic brain injury. We first mapped the location of damage in their brains using an atlas of brain circuitry derived from people whose brains were uninjured. Then we compared imaging of the damage with brain imaging from more than 700 people who had not committed crimes but who had a brain injury causing a different symptom, such as memory loss or depression.
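Real lesion network mapping relies on specialized neuroimaging pipelines, but the comparison at its heart – asking whether lesions in one group overlap a given tract more often than lesions in another – reduces to counting intersections of binary masks. Here is a toy illustration in Python; the grid size, lesion shapes and hit probabilities are all invented for demonstration, with made-up numpy arrays standing in for real scans.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny 3D grid standing in for a registered brain volume.
    shape = (10, 10, 10)
    tract_mask = np.zeros(shape, dtype=bool)   # hypothetical tract atlas region
    tract_mask[4:7, 4:7, 4:7] = True

    def random_lesions(n, bias_toward_tract):
        """Generate n cube-shaped binary lesion masks, optionally biased
        so that most of them land on the tract."""
        lesions = []
        for _ in range(n):
            mask = np.zeros(shape, dtype=bool)
            if bias_toward_tract and rng.random() < 0.8:
                x, y, z = rng.integers(4, 7, size=3)   # inside the tract block
            else:
                x, y, z = rng.integers(0, 10, size=3)  # anywhere in the volume
            mask[max(x-1, 0):x+2, max(y-1, 0):y+2, max(z-1, 0):z+2] = True
            lesions.append(mask)
        return lesions

    cases = random_lesions(17, bias_toward_tract=True)      # crimes after injury
    controls = random_lesions(700, bias_toward_tract=False) # other symptoms

    def tract_hit_rate(lesions):
        return np.mean([np.any(m & tract_mask) for m in lesions])

    print(f"tract hit rate, cases:    {tract_hit_rate(cases):.2f}")
    print(f"tract hit rate, controls: {tract_hit_rate(controls):.2f}")

In actual studies, the lesion masks are registered to a common brain template and the group difference is tested with proper statistics, but the underlying counting logic is the same.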

Brain injuries that may play a role in violent criminal behavior damage white matter connections in the brain, shown here in orange and yellow, especially a specific tract called the right uncinate fasciculus.
Isaiah Kletenik, CC BY-NC-ND

In the people who committed crimes, we found the brain region that popped up the most often was the right uncinate fasciculus. Our study aligns with past research in linking criminal behavior to this brain area, but the way we conducted it makes our findings more definitive: These people committed their crimes only after they sustained their brain injuries, which suggests that damage to the right uncinate fasciculus played a role in triggering their criminal behavior.

These findings have an intriguing connection to research on morality. Other studies have found a link between strokes that damage the right uncinate fasciculus and loss of empathy, suggesting this tract somehow regulates emotions that affect moral conduct. Meanwhile, other work has shown that people with psychopathy, a condition often associated with immoral behavior, have abnormalities in the amygdala and orbitofrontal cortex – the very regions directly connected by the uncinate fasciculus.

Neuroscientists are now testing whether the right uncinate fasciculus may be synthesizing information within a network of brain regions dedicated to moral values.

Making sense of it all

As intriguing as these findings are, it is important to note that many people with damage to their right uncinate fasciculus do not commit violent crimes. Similarly, most people who commit crimes do not have damage to this tract. This means that even if damage to this area can contribute to criminality, it’s only one of many possible factors underlying it.

Still, knowing that neurological damage to a specific brain structure can increase a person’s risk of committing a violent crime can be helpful in various contexts. For example, it can help the legal system assess neurological evidence when judging criminal responsibility. Similarly, doctors may be able to use this knowledge to develop specific interventions for people with brain disorders or injuries.

More broadly, understanding the neurological roots of morality and moral decision-making provides a bridge between science and society, revealing constraints that define how and why people make choices.

The Conversation

Isaiah Kletenik receives funding from the NIH.

Nothing to disclose.

Christopher M. Filley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Certain brain injuries may be linked to violent crime – identifying them could help reveal how people make moral choices – https://theconversation.com/certain-brain-injuries-may-be-linked-to-violent-crime-identifying-them-could-help-reveal-how-people-make-moral-choices-262034