One Battle After Another: Sean Penn, Leonardo DiCaprio and Benicio Del Toro explore three visions of fatherhood

Source: The Conversation – UK – By Mark Gatto, Assistant Professor in Critical Organisation Studies, Northumbria University, Newcastle

Warning: this article contains spoilers.

In One Battle After Another, three characters (Bob Ferguson, Colonel Steven Lockjaw and Sergio St Carlos) represent three different models of fatherhood.

Fatherhood is a timely theme. The place of men in society is being debated and challenged by polarising figures from both sides of the political spectrum.

One side promotes a regressive vision of the patriarchal man harking back to ideals of fathers as dominant breadwinners and protectors. The other side argues for caring masculinity, involved fatherhood and men taking responsibility in their communities to break the cycle of intergenerational gender inequity.

This is a battle for hearts and minds, and such battles are rarely won with stats and figures. As the success of TV shows like Adolescence has demonstrated, there is nothing like a great story to cut through political stagnation and reach a wider audience.

One Battle After Another offers another opportunity to reflect on the past, present and future of fatherhood. This is established territory for director Paul Thomas Anderson, whose masterpiece There Will Be Blood (2007) depicts the complex and dysfunctional relationship between Daniel Plainview (Daniel Day-Lewis) and his adopted son, H.W. (Dillon Freasier). The gut-wrenching scenes of paternal abandonment in that film offer an enduring example of the all-too-familiar “absent father”.

The trailer for One Battle After Another.

Lockjaw: the absent father

The absent father is a culturally embedded version of masculinity, present in many popular films, that generations of children have experienced. TV series like Mad Men (2007) have explored a simultaneously utopian and dystopian version of 1960s fathers as emotionally absent.

In One Battle After Another, actor Sean Penn’s visceral depiction of the aptly named Colonel Steven Lockjaw provides an extreme example of patriarchal fatherhood: absent yet casting a dreadful shadow over a family. Lockjaw is driven to bloody revenge in pursuit of his biological daughter, a daughter he has had no hand in raising.

We know from studies on absent fathers that such absence can have a lifelong effect on children. Lockjaw, with his bizarre behaviours and fawning pursuit of neo-Nazi recognition, offers an allegory for the current rise of alt-right masculinity as jarringly jingoistic and egoistic.

Such satire is valuable but also aligns with existing critiques of the manosphere. We need only look to Elon Musk’s infamous hand gesture at the second inauguration of Donald Trump, and his later appearance with his son in the Oval Office, to conjure similarly disturbing visuals of fatherhood. This film breaks newer ground with its depiction of flawed father involvement and the less-researched territory of community leadership.

Bob Ferguson: the involved father

Involved fatherhood has been researched for many decades. The core criteria form a triad: a dad’s interaction with, availability to, and responsibility for their children.

With Leonardo DiCaprio’s Bob Ferguson, we are introduced to a relatable, “good enough” involved father. He is the product of state hostility to father involvement. Research has shown that the intent of fathers to be involved is often stifled by patriarchal gender norms and workplace stigma.

As an involved father, single dad Bob comfortably meets two of the three criteria – he is physically and emotionally engaged with his daughter, Willa (Chase Infiniti). His enduring presence is partial evidence of responsibility. However, we also see the deleterious impact his drug and alcohol abuse has had on his role as responsible caregiver. The roles have reversed for him and 16-year-old Willa. Bob’s version of involvement is symbolic of the father that cares and stays, but is flawed and unsupported.

Sergio St Carlos: the caring father

Finally, we come to Benicio del Toro’s Sergio St Carlos, a karate sensei, Willa’s teacher and a father to the community. Offering a counternarrative to bombastic male leaders, Sergio calmly resists tyranny. As a leader, he might be interpreted as emblematic of the much-vaunted male role model, yet Sergio is also flawed. He drinks and drives, leaves much domestic care to his family and revels in his role as antagonist to the law. Yet, such flaws allow this caring father to feel recognisable, relatable and attainable.

Researchers have been writing about caring masculinities for years. Central to understanding this idea is the prioritisation of caring values of positive emotion, interdependence and relationality, and the rejection of domination.

In Sergio, we find a father who cares for his family and his community. Through him, we see a new depiction of fatherhood as the role of a caregiver and care receiver in harmony with his wider community.

Such admirable qualities may seem utopian and fantastical, yet these dads exist. Close to where I live, North East Young Dads and Lads offers a community lifeline to young dads: many later become support workers. One Battle After Another reminds us that community fathers can make a real difference.




The Conversation

Mark Gatto received funding from BA Leverhulme from 2022-2024.
Mark Gatto is an Academic Board member for Working Families.

ref. One Battle After Another: Sean Penn, Leonardo DiCaprio and Benicio Del Toro explore three visions of fatherhood – https://theconversation.com/one-battle-after-another-sean-penn-leonardo-dicaprio-and-benicio-del-toro-explore-three-visions-of-fatherhood-266858

Rift Valley fever is raging in Senegal: what you need to know about this disease

Source: The Conversation – in French – By Marc Souris, researcher, Institut de recherche pour le développement (IRD)

Rift Valley fever is currently raging in northern Senegal. Since the outbreak began on September 21, the toll has kept rising. The government’s latest bulletin reports 78 confirmed cases and 11 deaths. A viral disease transmitted by mosquitoes and mainly affecting livestock, it can also infect humans. While most human cases remain mild, the disease causes heavy economic and health losses in herds and poses a risk to exposed populations.

As a researcher, I have contributed to several studies on this mosquito-borne viral zoonosis, which mainly affects ruminants and humans. Here I explain what Rift Valley fever is, how it is treated and how it can be controlled.

Understanding Rift Valley fever

Rift Valley fever (RVF) is a zoonosis (a disease affecting animals that can also be transmitted to humans) caused by the RVF virus, a phlebovirus of the family Phenuiviridae (order Bunyavirales). The disease particularly affects certain domestic animals, mainly cattle, sheep and goats, but also camelids and other small ruminants. It can also occasionally infect humans.

In animals, the disease causes significant morbidity: reduced milk production, high neonatal mortality, mass abortions among pregnant females, and death in 10-20% of cases. It thus causes considerable economic losses for livestock farmers.

In humans, most infections are asymptomatic or mild (a simple flu-like illness), but the infection can cause severe symptoms in a small percentage of cases: eye disease, encephalitis, haemorrhagic fever. The death rate among infected people is around 1%.

How is it transmitted?

In animals, the disease is mainly transmitted through the bites of infected mosquitoes (at least 50 mosquito species are vectors of the RVF virus, notably Aedes, but also Culex, Anopheles and Mansonia, as well as other insects). Mosquitoes become infected by biting infected animals in the viraemic phase (when viral particles are present in the blood), and then transmit the virus as they feed on other animals. In Aedes mosquitoes, vertical transmission (eggs infected with the virus) is also possible, which could contribute to the virus’s survival in the environment.

In humans, infection in most cases results from contact with the blood or organs of contaminated animals, notably during veterinary procedures or the butchering of infected animal carcasses. Direct infections through the bites of mosquitoes or flies carrying the RVF virus have also been reported, but no human-to-human transmission has been observed to date.

The origin of the disease

The disease was first identified in 1931 in the Rift Valley in Kenya, during a human epidemic of 200 cases. The virus responsible was isolated in 1944 in Uganda.

Since then, numerous outbreaks of the disease have been reported across Africa: in Egypt (1977), Madagascar (1990, 2021), Kenya (1997, 1998), Somalia (1998) and Tanzania (1998), with a spread in 2000 to Yemen and Saudi Arabia, then the Comoros (2007-2008) and Mayotte (2018-2019). In West Africa, the main epidemics have hit Mauritania (1987, 1993, 1998, 2003, 2010, 2012), Senegal (1987, 2013-2014) and Niger (2016).

The disease is thus historically present in East Africa, but it has gradually spread to the Sahel and West Africa through animal movements and under favourable environmental conditions, notably during exceptional rainfall. To date, some 30 countries have reported animal and/or human cases in the form of localised outbreaks or wider epidemics.

Why and how the disease re-emerges

The disease regularly re-emerges in Africa every five to 15 years. In East Africa, outbreaks appear and the disease spreads when climatic conditions are favourable, notably during heavy rainfall or flooding in naturally arid areas. For example, in 1998-1999 the appearance of numerous outbreaks in East Africa coincided with the heavy rains linked to the El Niño phenomenon.

In the Sahel, the correlation between rainfall and epidemics is less systematic. The emergence of outbreaks in poorly monitored areas shows that RVF can appear in unexpected places. Analyses of viral strains in Mauritania have also suggested genetic introductions or interregional viral movements.

The virus persists in the environment through a process that is not exactly understood. It is maintained by a heterogeneous wild animal reservoir (antelopes, deer, reptiles and so on) that has not yet been fully identified.

As with many viral diseases, epidemics spread from a primary focus to secondary ones. This spread is aided by the movement of herds (particularly in the pastoral zones of the Sahel), by the accidental dispersal of infected mosquitoes, and by environmental conditions favourable to the development of the vectors.

Clinical symptoms and treatments

In animals, the appearance of numerous abortions and very high mortality among young animals is characteristic of the infection. Adult animals are also affected: cattle and sheep may suffer nasal discharge, hypersalivation, anorexia, weakness or diarrhoea. Mortality is around 20% in sheep and 10% in cattle. Pregnant females abort in 80-100% of cases.

In humans, after an incubation period of two to six days, most infections are asymptomatic or mild (a simple flu-like illness), with symptoms generally lasting four to seven days. People who have contracted RVF acquire natural immunity.

In a small percentage of cases, the disease is much more severe:

• Eye lesions (up to 10% of symptomatic cases), generally appearing one to three weeks after the first symptoms. These may resolve spontaneously, but can also lead to permanent blindness.

• Meningoencephalitis (inflammation of the brain and meninges, 2-4% of symptomatic cases), appearing one to four weeks after the first symptoms. The death rate is low, but neurological sequelae are common.

• Haemorrhagic fever (less than 1% of symptomatic cases), occurring two to four days after symptoms appear. The death rate is around 50%. Death generally occurs three to six days after the onset of symptoms.

There is no specific treatment for severe cases of Rift Valley fever in humans.

Surveillance, prevention and control

Veterinary surveillance allowing immediate notification when the disease is detected, together with monitoring of infection in animal populations, is essential to controlling the disease. In the event of an epizootic, sanitary culling and controls on the movement of farmed animals are the most effective means of slowing the spread of the virus.

As with all arboviruses (viral diseases transmitted by arthropods), controlling the vector population (mosquitoes, in the case of RVF) is an effective preventive measure, but one that is difficult to implement, particularly in rural areas.

To prevent the emergence of epizootic outbreaks, animals in endemic areas can be vaccinated preventively. A modified live-virus vaccine exists that requires only a single dose and confers long-term immunity, but it is not recommended for pregnant females because of the risk of abortion it carries. There is also an inactivated-virus vaccine without these side effects, but it requires several doses to achieve immune protection.

Threats, vulnerabilities and health risks

Among humans, livestock farmers, agricultural workers, abattoir employees and vets are the occupational groups most exposed to the risk of infection. An inactivated vaccine for human use has been developed, but it is not licensed and has only been used experimentally.

Raising awareness of the risk factors when the virus re-emerges is the only way to reduce the risk of infection in humans. The main risk factors for animal-to-human transmission are:

• livestock farming and slaughter practices, notably the handling of sick animals or their tissues, and the slaughter itself;

• the consumption of fresh blood, raw milk or meat;

• mosquito bites.

Observing a few hygiene rules is therefore essential when Rift Valley fever emerges: hand hygiene, appropriate protective equipment when handling animals and during slaughter, thorough cooking of all products of animal origin (blood, meat and milk), and the systematic use of mosquito nets or insect repellents.

The Conversation

Marc Souris receives funding from government research agencies.

ref. La fièvre de la vallée du Rift sévit au Sénégal : ce qu’il faut savoir sur cette maladie – https://theconversation.com/la-fievre-de-la-vallee-du-rift-sevit-au-senegal-ce-quil-faut-savoir-sur-cette-maladie-266724

Half the UK’s fish stocks are overfished – but the evidence shows how they can be revived

Source: The Conversation – UK – By Callum Roberts, Professor of Marine Conservation, University of Exeter

North-east Atlantic mackerel are being fished beyond sustainable limits. shocky/Shutterstock

Most of the UK’s commercial fish stocks are not in a healthy state, according to a new landmark report.

Marine conservation charity Oceana UK’s Deep Decline report – one of the most comprehensive analyses of fish stocks since Brexit – finds that half of the UK’s top ten commercial fish stocks are either critically low, overexploited, or both. These include icons of our seas such as North Sea cod, North Sea herring and north-east Atlantic mackerel.

Only 41% of the UK’s commercial fish populations have been found to be healthy. A quarter are being fished beyond sustainable limits. And one in six are both critically low and yet still being overfished, placing them on a course to collapse. Many others, like skates, have been so historically depleted that they have all but disappeared and no longer even appear in statistics.

This disaster was entirely predictable and avoidable. Nearly five years on from the UK’s historic rupture from Europe, most people struggle to name any benefits from Brexit. One of the few benefits I can think of is the power to manage our fisheries without being beholden to the annual horse trading for fishing quotas in Brussels.

Freed at last from the constraints of collective bargaining, the UK could make rational decisions to deliver healthier seas and prosperity for the fishing industry.

Management under the EU’s common fisheries policy was famously flawed. Rather than confront difficult decisions about how to share limited resources, politicians routinely set quotas far above scientific recommendations for sustainable fishing – exceeding them on average by a third over more than 20 years.

If a farmer took more sheep to market every year than they produced, they would soon be out of business. Fisheries ministers failed to apply the same logic, so fish stocks dwindled and fishermen lost their jobs.

But for UK fisheries ministers, it seems that bad habits are hard to unlearn, and they continue to ignore expert advice. Rather than enjoying a rebound, our seas remain in deep decline.

Orange and green fishing nets lying in a pile on land. Andrew Chisholm/Shutterstock

Take the humble cod. Once the cornerstone of the UK’s national dish, fish and chips, North Sea cod is now at such low levels that, in September 2025, the international body providing scientific evidence for fish catch regulation – the International Council for the Exploration of the Sea (Ices) – advised a zero catch quota to safeguard the future of the cod fishery. Yet North Sea cod is still being overexploited. Ignoring science risks a future where the nation’s favourite dish is no longer affordable or even available.

The Irish Sea is the worst affected region of the UK, with four in ten of its stocks overfished according to Oceana’s report – up from a quarter just five years ago. In the Celtic Sea, quotas for cod in 2024 were set higher than the estimated number of adult fish left.

This is political negligence, not ignorance. The UK has world class fisheries science, yet ministers repeatedly ignore their own experts.

Ocean health underpins the UK’s blue economy, from fishing to tourism. Fishing alone supports tens of thousands of jobs, particularly in communities with few alternatives. When stocks collapse, boats tie up, processors shut down and skills honed over generations are lost.

Who is to blame for this avoidable calamity? Ministers, obviously – but who are they listening to, if not their scientific advisers? Paradoxically, for over 50 years, the large corporates of the fishing industry have been tireless cheerleaders for their own demise, urging ministers to let them catch more fish, putting short-term benefit over long-term sustainability and job security.

Those with small, local boats – those not raking in the big money – are left trying to eke out a living from a depleted ecosystem. The fact is, if you keep within nature’s limits, you can fish forever.




Read more: The secret to healthy and sustainable fish fingers – an expert explains


Ocean ally

There is more at stake than just emptied seas and ailing fisheries. The sea is one of our strongest allies in fighting climate change. Seagrass meadows, kelp forests and seabed sediments capture and store carbon, acting as natural defences. Overfishing and destructive bottom trawling damage these habitats and release carbon dioxide into the sea and atmosphere, stripping away our climate resilience just when we need it most.

Imagine instead a future of abundance. Herring shoals flashing silver. Puffins plunging into dense fish swarms. Porpoises chasing mackerel. Fin whales blowing once more in our waters. This vision is not fanciful.




Read more: Mussel power: how an offshore shellfish farm is boosting marine life


The good news is that recovery is possible. We need only look at the healthiest stocks in Oceana’s report, such as west of Scotland haddock, western Channel sole and North Sea plaice. What did they all have in common? Catch limits set in line with scientific advice.

The truth is that strong nature protection is the friend of fishing, not the enemy it is often painted as. Globally, when areas of the sea are genuinely protected and destructive fishing methods are banned, nature rebounds at speed. Fish populations multiply, wildlife flourishes and coastal communities gain a secure future. Protected areas rebuild fish stocks and feed productive fisheries in surrounding waters.

With the right choices, the UK could have more abundant seas that provide both food and jobs while restoring the wonders of marine life. Overfishing is a political choice.

For too long, governments have chosen short-term quotas over long-term security. Recovery is also a choice: the UK should set a new course that gives both ocean life and fishing communities a fair deal and a prosperous future.




The Conversation

Callum Roberts receives funding from Convex Insurance Group and EU Synergy, and UK Natural Environment Research Council. He is a board member of Nekton and Maldives Coral Institute.

ref. Half the UK’s fish stocks are overfished – but the evidence shows how they can be revived – https://theconversation.com/half-the-uks-fish-stocks-are-overfished-but-the-evidence-shows-how-they-can-be-revived-266285

Will Rachel Reeves’ youth unemployment scheme force her to bend her own rules?

Source: The Conversation – UK – By Maha Rafi Atal, Adam Smith Senior Lecturer in Political Economy, School of Social and Political Sciences, University of Glasgow

Jacob Lund/Shutterstock

UK chancellor Rachel Reeves has set out a “youth guarantee” aimed at ending long-term unemployment among young people. Under the plan, a young person who has been out of work for 18 months would be offered a temporary job, apprenticeship or college place.

The UK has just under a million young people who are not in employment, education or training (Neet) – thought to be around 13% of the country’s 16- to 24-year-olds.

Under Reeves’ plans, those who refuse the offer could face benefit sanctions. The scheme is being positioned as a way to boost growth while keeping to Labour’s fiscal rules ahead of November’s budget.

The idea has some logic. Long-term youth unemployment has consequences that reach far beyond the individual. Research from the Organisation for Economic Co-operation and Development (OECD) and the Institute for Fiscal Studies shows that young people who are out of work for extended periods often face lower earnings for decades afterwards, as well as poorer health and social outcomes.

Economists sometimes describe this as “scarring” – that is, lasting negative economic effects. By contrast, job losses that come mid-career tend to have less lasting economic impact because these workers have more experience or skills that they can use to get their next job.

So the argument that tackling youth unemployment offers particularly high returns is, in theory, credible.

Long-term future

The difficulty is whether the guarantee, as outlined by Reeves, can deliver anything more than temporary relief. It is not yet clear where the promised jobs will come from.

If the government pays firms to create placements, they will have been specially created for the scheme, rather than representing real gaps that the firms need to fill to grow their business. When the government subsidy ends, the firms may have no reason to keep the young person on. And a short placement may not provide enough skills development to allow the young person to get a job elsewhere.

What’s more, the government is not proposing to pay the full cost of these placements. If the onus falls on businesses to absorb additional young workers in newly created roles at their own expense, the effect may be negligible. This is because Labour’s wider programme – from higher employer national insurance contributions to new employment rights – already imposes extra costs on employers.

That tension points to a broader issue in Reeves’ strategy. She has pledged not to increase headline tax rates. Instead she is seeking to expand the overall tax base by growing employment and productivity.

Yet that kind of growth usually requires sustained public investment in skills, infrastructure and industrial policy. A scheme that subsidises wages for 12 months may help individuals back into work, but it is unlikely to shift the productivity dial or generate lasting fiscal dividends without a wide programme of investment.

For Reeves, the challenge is that the guarantee must be large enough to create real career pathways and business growth. But to do so requires precisely the kind of government expenditure that is made difficult by her own “non-negotiable” fiscal rules.

Instead of being a way to grow within the rules, then, the youth guarantee may be added to the list of promises the government cannot fulfil without bending them.

The Conversation

Maha Rafi Atal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Will Rachel Reeves’ youth unemployment scheme force her to bend her own rules? – https://theconversation.com/will-rachel-reeves-youth-unemployment-scheme-force-her-to-bend-her-own-rules-266716

Would you watch a film with an AI actor? What Tilly Norwood tells us about art – and labour rights

Source: The Conversation – Global Perspectives – By Amy Hume, Lecturer In Theatre (Voice), Victorian College of the Arts, The University of Melbourne

Particle6 Productions

Tilly Norwood officially launched her acting career this month at the Zurich Film Festival.

She first appeared in the short film AI Commissioner, released in July. Her producer, Eline Van der Velden, claims Norwood has already attracted the attention of multiple agents.

But Norwood was generated with artificial intelligence (AI). The AI “actor” has been created by Xicoia, the AI branch of the production company Particle6, founded by the Dutch actor-turned-producer Van der Velden. And AI Commissioner is an AI-generated short film, written by ChatGPT.

A post about the film’s launch on Norwood’s Facebook page read:

I may be AI generated, but I’m feeling very real emotions right now. I am so excited for what’s coming next!

The reception from the industry has been far from warm. Actors – and audiences – have come out in force against Norwood.

So, is this the future of film, or is it a gimmick?

‘Tilly Norwood is not an actor’

Norwood’s existence introduces a new type of technology to Hollywood. Unlike CGI (computer generated imagery), where a performer’s movements are captured and transformed into a digital character, or an animation which is voiced by a human actor, Norwood has no human behind her performance. Every expression and line delivery is generated by AI.

Norwood has been trained on the performances of hundreds of actors, without any payment or consent, and draws on the information from all those performances in every expression and line delivery.

Her arrival comes less than two years after the artist strikes that brought Hollywood to a standstill, with AI a central issue in the disputes. The strikes ended with a historic agreement placing limitations on digital replicas of actors’ faces and voices, but did not completely ban “synthetic fakes”.

SAG-AFTRA, the union representing actors in the United States, has said:

To be clear, ‘Tilly Norwood’ is not an actor; it’s a character generated by a computer program that was trained on the work of countless professional performers – without permission or compensation.

Additionally, real actors can set boundaries and are protected by agents, unions and intimacy coordinators who negotiate what is shown on screen.

Norwood can be made to perform anything in any context – becoming a vessel for whatever creators or producers choose to depict.

This absence of consent or control opens a dangerous pathway to how the (digitally reproduced) female body may be represented on screen, both in mainstream cinema, and in pornography.

Is it art?

We consider creativity to be a human quality. Art is generally understood as an expression of human experience. Norwood’s performances do not come from such creativity or human experience, but from a database of pre-existing performances.

All artists borrow from and are influenced by predecessors and contemporaries. But that human influence is limited by time, informed by our own experiences and shaped by our unique perspective.

AI has no such limits: just look at Google’s chess-playing program AlphaZero, which learnt by playing millions of games of chess, more than any human can play in a lifetime.

Norwood stands with a clapboard. Particle6 Productions

Norwood’s training can absorb hundreds of performances in a way no single actor could. How can that be compared to an actor’s performance – a craft they have developed throughout their training and career?

Van der Velden argues Norwood is “a new tool” for creators. Previous tools, such as the paintbrush or the typewriter, helped facilitate or extend the creativity of painting or writing.

Here, Norwood as the tool performs the creative act itself. The AI is the tool and the artist.

Will audiences accept AI actors?

Norwood’s survival depends not on industry hype but on audience reception.

So far, humans show a negative bias against AI-generated art. Studies across art forms have shown people prefer works when told they were created by humans, even if the output is identical.

We don’t know yet if that bias could fade. A younger generation raised on streaming may be less concerned with whether an actor is “real” and more with immediate access, affordability or how quickly they can consume the content.

If audiences do accept AI actors, the consequences go beyond taste. There would be profound effects on labour. Entry- and mid-level acting jobs could vanish. AI actors could shrink the demand for whole creative teams – from make-up and costume to lighting and set design – since their presence reduces the need for on-set artistry.

Economics could prove decisive. For studios, AI actors are cheaper, more controllable and free from human needs or unions. Even if audiences are ambivalent, financial pressures could steer production companies towards AI.

The bigger picture

Tilly Norwood is not a question of the future of Hollywood. She is a cultural stress-test – a case study in how much we value human creativity.

What do we want art to be? Is it about efficiency, or human expression? If we accept synthetic actors, what stops us from replacing other creative labour – writers, musicians, designers – with AI trained on their work, but with no consent or remuneration?

We are at a crossroads. Do we regulate the use of AI in the arts, resist it, or embrace it?

Resistance may not be realistic. AI is here, and some audiences will accept it. The risk is that in choosing imitation over human artistry, we reshape culture in ways that cannot be easily reversed.

The Conversation

Amy Hume does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Would you watch a film with an AI actor? What Tilly Norwood tells us about art – and labour rights – https://theconversation.com/would-you-watch-a-film-with-an-ai-actor-what-tilly-norwood-tells-us-about-art-and-labour-rights-266476

The world’s most sensitive computer code is vulnerable to attack. A new encryption method can help

Source: The Conversation – Global Perspectives – By Qiang Tang, Associate Professor, Computer Science, University of Sydney

Joan Gammell/Unsplash

Nowadays data breaches aren’t rare shocks – they’re a weekly drumbeat. From leaked customer records to stolen source code, our digital lives keep spilling into the open.

Git services are especially vulnerable to cybersecurity threats. These are online hosting platforms that are widely used in the IT industry to collaboratively develop software, and are home to most of the world’s computer code.

Just last week, hackers reportedly stole about 570 gigabytes of data from a git service called GitLab. The stolen data was associated with major companies such as IBM and Siemens, as well as United States government organisations.

In December 2022, hackers stole source code from IT company Okta which was stored in repositories on GitHub.

Cyberattackers can also quietly insert malicious code into existing projects without a developer’s knowledge. These so-called “software supply-chain” attacks have turned development tools and update channels on git services into high-value targets.

As we explain in a new conference paper, our team has developed a new way to make git services more secure, with very little impact on performance.

The gold standard

We already know how to keep conversations private: secure messenger services such as Signal and WhatsApp use end-to-end encryption, which locks messages on your device and only unlocks them on the recipient’s device. This protects the data even if the service platform is hacked, which is why it’s considered the gold standard to protect data.

But git services, which are widely used by major tech companies and startups, currently don’t use end-to-end encryption. The same is true for most of the other tools we use to work together, such as shared documents.

Because git services allow a huge number of collaborators to work on the same project at the same time, the software code they host is constantly written and updated at a very rapid rate. This makes using standard encryption impractical: even a one-word change would mean encrypting and transmitting all of the data again, taking up too much bandwidth and making the services very inefficient.

But our new encryption method overcomes this challenge.

Striking an important balance

The method we have developed uses what’s known as “character-level encryption”. This means only the edits to code stored on the git service are treated as new data to be encrypted – rather than the entire codebase.

Think of it as encrypting the tracked changes in a word document, instead of a new version every time.

This method strikes an important balance. It keeps the updated code private and secure while reducing the amount of communication between user and git services, as well as the amount of storage required.
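
To make the idea concrete, here is a minimal Python sketch of edit-level encryption, under assumptions of our own: the `encrypt_edits` helper and its delta format are invented for illustration (using Python’s difflib and an off-the-shelf AES-GCM cipher), and our actual construction is more involved and integrates with git itself.

```python
# Minimal sketch of "character-level" encryption: encrypt only the edits,
# not the whole file. Illustrative only -- not the researchers' actual tool.
import difflib
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography


def encrypt_edits(old: str, new: str, key: bytes) -> bytes:
    """Diff two versions of a file, then encrypt only the changed spans."""
    ops = difflib.SequenceMatcher(a=old, b=new).get_opcodes()
    # Keep only real edits; 'equal' spans are already in the previous
    # version held by each client, so they are never resent.
    delta = [(tag, i1, i2, new[j1:j2])
             for tag, i1, i2, j1, j2 in ops if tag != "equal"]
    nonce = os.urandom(12)  # 96-bit nonce, the standard size for AES-GCM
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(delta).encode(), None)
    return nonce + ciphertext  # payload grows with the edit, not the file


key = AESGCM.generate_key(bit_length=256)
blob = encrypt_edits("fix the bug", "fix the big bug", key)
print(len(blob))  # a few dozen bytes for a one-word change
```

A client holding the same key would decrypt the delta and replay the edits against its local copy, which is why the bandwidth cost scales with the size of the change rather than the size of the file.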

Importantly, this new method is also compatible with existing git services, making it easy for people to adopt. It also doesn’t interfere with other functions of git servers, such as hosting, saving bandwidth and indexing, so people can keep using these servers as they normally would – just with the added benefit of extra security.

A broader end-to-end encrypted internet

This new tool is currently free and open-source for all users. It can be installed easily like a patch when using git services, and will run in the background as users access git services just like before.

But this is just the starting point for a broader shift towards online collaboration that is secured by end-to-end encryption.

Extending the same guarantees to shared documents, spreadsheets and design files is possible, but will require sustained research and investment.

One complication in ensuring security is managing the encryption keys or credentials that users need to decrypt encrypted data. Fortunately, our previous research shows how to create a secure cloud storage system that allows users to safely store their credentials.

Just as importantly, we must balance security with compliance and accountability. Universities, hospitals and government agencies are required to retain and, in some cases, provide lawful access to certain data. Meeting these obligations, without weakening end-to-end encryption, pushes us to research new techniques.

The goal is not secrecy at all costs, but verifiable controls that respect both privacy and the rule of law.

We don’t need a brand new internet to get there. We need pragmatic upgrades that fit the tools people already use – paired with clear, provable guarantees.

Messaging proved that end-to-end encryption can scale to billions. Code and cloud files are next, and with continued research and targeted investment, the rest of our everyday collaboration can follow.

So before too long, you will hopefully be able to work on a shared document with colleagues with the peace of mind that it, too, has gold standard security.

The Conversation

Qiang Tang receives funding from Google via Digital Future Initiative to support the research on this project.

Moti Yung works for Google as a distinguished research scientist.

Yanan Li is supported by the funding from Google via Digital Future Initiative for doing this research at the University of Sydney.

ref. The world’s most sensitive computer code is vulnerable to attack. A new encryption method can help – https://theconversation.com/the-worlds-most-sensitive-computer-code-is-vulnerable-to-attack-a-new-encryption-method-can-help-266236

Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago

Source: The Conversation – Global Perspectives – By Cameron Shackell, Sessional Academic, School of Information Systems, Queensland University of Technology

A crowd gathers outside the New York Stock Exchange following the ‘Great Crash’ of October 1929. New York World-Telegram and the Sun Newspaper Photograph Collection, US Library of Congress

The electrification boom of the 1920s set the United States up for a century of industrial dominance and powered a global economic revolution.

But before electricity faded from a red-hot tech sector into invisible infrastructure, the world went through profound social change, a speculative bubble, a stock market crash, mass unemployment and a decade of global turmoil.

Understanding this history matters now. Artificial intelligence (AI) is a similar general purpose technology and looks set to reshape every aspect of the economy. But it’s already showing some of the hallmarks of electricity’s rise, peak and bust in the decade known as the Roaring Twenties.

The reckoning that followed could be about to repeat.

First came the electricity boom

A century ago, when people at the New York Stock Exchange talked about the latest “high tech” investments, they were talking about electricity.

Investors poured money into suppliers such as Electric Bond & Share and Commonwealth Edison, as well as companies using electricity in new ways, such as General Electric (for appliances), AT&T (telecommunications) and RCA (radio).

It wasn’t a hard sell. Electricity brought modern movies, new magazines from faster printing presses, and evenings by the radio.

It was also an obvious economic game changer, promising automation, higher productivity, and a future full of leisure and consumption. In 1920, even Soviet revolutionary leader Vladimir Lenin declared: “Communism is Soviet power plus the electrification of the whole country.”

Today, a similar global urgency grips both communist and capitalist countries about AI, not least because of military applications.

A cover story of the New York Times Magazine in October 1927.
The New York Times

Then came the peak

Like AI stocks now, electricity stocks “became favorites in the boom even though their fundamentals were difficult to assess”.

Market power was concentrated. Big players used complex holding structures to dodge rules and sell shares in basically the same companies to the public under different names.

US finance professor Harold Bierman, who argued that attempts to regulate overpriced utility stocks were a direct trigger for the crash, estimated that utilities made up 18% of the New York Stock Exchange in September 1929. Within electricity supply, 80% of the market was owned by just a handful of holding firms.

But that’s just the utilities. As today with AI, there was a much larger ecosystem.

Almost every 1920s “megacap” (the largest companies at the time) owed something to electrification. General Motors, for example, had overtaken Ford using new electric production techniques.

Essentially, electricity became the backdrop to the market in the same way AI is doing, as businesses work to become “AI-enabled”.

No wonder that today tech giants command over a third of the S&P 500 index and nearly three-quarters of the NASDAQ. Transformative technology drives not only economic growth, but also extreme market concentration.

In 1929, to reflect the new sector’s importance, Dow Jones launched the last of its three great stock averages: the electricity-heavy Dow Jones Utilities Average.

But then came the bust

The Dow Jones Utilities Average went as high as 144 in 1929. But by 1934, it had collapsed to just 17.

No single cause explains the New York Stock Exchange’s unprecedented “Great Crash”, which began on October 24 1929 and preceded the worldwide Great Depression.

That crash triggered a banking crisis, credit collapse, business failures, and a drastic fall in production. Unemployment soared from just 3% to 25% of US workers by 1933 and stayed in double figures until the US entered the second world war in 1941.

Lithograph of Wall Street, New York City, after the 1929 stock market crash. James Rosenberg, Ben and Beatrice Goldstein Foundation collection, US Library of Congress

The ripple effects were global, with most countries seeing a rise in unemployment, especially in countries reliant on international trade, such as Chile, Australia and Canada, as well as Germany.

The promised age of shorter hours and electric leisure turned into soup kitchens and bread lines.

The collapse exposed fraud and excess. Electricity entrepreneur Samuel Insull, once Thomas Edison’s protégé and builder of Chicago’s Commonwealth Edison, was at one point worth US$150 million – an even more staggering amount at the time.

But after Insull’s empire went bankrupt in 1932, he was indicted for embezzlement and larceny. He fled overseas, was brought back, and eventually acquitted – but 600,000 shareholders and 500,000 bondholders lost everything.

However, to some Insull seemed less a criminal mastermind than a scapegoat for a system whose flaws ran far deeper.

Reforms unthinkable during the boom years followed.

The Public Utility Holding Company Act of 1935 broke up the huge holding company structures and imposed regional separation. Once exciting electricity darlings became boring regulated infrastructure: a fact reflected in the humble “Electric Company” square on the original 1935 Monopoly board.

Lessons from the 1920s for today

AI is rolling out faster than even those seeking to use it for business or government policy can properly manage.

Like electricity a century ago, a few interconnected firms are building today’s AI infrastructure.

And like a century ago, investors are piling in – though many don’t know the extent of their exposure through their superannuation funds or exchange traded funds (ETFs).

Just as in the late 1920s, today’s regulation of AI is still loose in many parts of the world – though the European Union is taking a tougher approach with its world-first AI law.

US President Donald Trump has taken the opposite approach, actively cutting “onerous regulation” of AI. Some US states have responded by taking action themselves. The courts, when consulted, are hamstrung by laws and definitions written for a different era.

Can we transition to AI becoming invisible infrastructure like electricity without another bust, followed only then by reform?

If the parallels to the electrification boom remain unnoticed, the chances are slim.

The Conversation

Cameron Shackell works primarily as a Sessional Academic at the QUT School of Information Systems. He also works one day a week as CEO of Equate IT Consulting, a firm using AI to analyse brands and trademarks.

ref. Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago – https://theconversation.com/todays-ai-hype-has-echoes-of-a-devastating-technology-boom-and-bust-100-years-ago-265492

These 4 aeroplane failures are more common than you think – and not as scary as they sound

Source: The Conversation – Global Perspectives – By Guido Carim Junior, Senior Lecturer in Aviation, Griffith University

redcharlie/Unsplash

“It is the closest all of us passengers ever want to come to a plane crash,” a passenger on Qantas flight QF1889 said after the plane suddenly descended about 20,000 feet on Monday September 22 and diverted back to Darwin.

The Embraer 190’s crew received a pressurisation warning, followed the procedures, and landed normally – but in the cabin, that rapid drop felt anything but normal.

The truth is, in-flight technical problems such as this one are part of flying. Pilots train extensively for them. Checklists contain detailed instructions on how to deal with each issue. Aircraft are built with layers of redundancy, and warning systems alert pilots to problems. It is because of these safety systems that the vast majority of flights that experience technical issues end with a safe arrival rather than tragic headlines.

Here are four scary-sounding failures you might hear about (or even experience) and how they are actually dealt with in the air.

1. Air-conditioning and pressurisation hiccups

What it is

At cruising altitudes (normally around 36,000 feet), aeroplane cabins are kept at a comfortable “cabin altitude” of 8,000 feet using air from the engines that is cooled by the air-conditioning system.

This artificial air pressure allows us to survive while the atmosphere outside the plane is highly hostile to human life, with temperatures around -55°C and no breathable air. However, if the system misbehaves or the cabin altitude starts to rise for whatever reason, crews treat it as a potential pressurisation problem and initiate the preventive procedures immediately.

What you might feel/see

A quick, controlled descent (it can feel dramatic), ears popping, and sometimes oxygen masks – these typically drop automatically only if the cabin altitude exceeds roughly 14,000 feet. Similar to QF1889, a rapid descent without masks being deployed is the most common outcome.

What pilots do

As soon as they notice a problem with the cabin pressurisation, the pilots put on their own oxygen masks, declare an emergency, and follow the emergency descent checklist, bringing the aircraft as quickly as possible to about 10,000 feet. This is usually followed by a diversion or return to the departure airport.

2. Most feared: engine failures

What it is

Twin-engine airliners are certified to fly safely on one engine. Yet single-engine failures are treated seriously and thoroughly rehearsed in flight simulators at least annually.

Dual failures, however, are exceptionally rare. The 2009 “Miracle on the Hudson”, for example, was a once-in-a-generation bird strike event that led to both engines stopping. The plane safely landed on the Hudson River in New York with no casualties.

US Airways Flight 1549 after crashing into the Hudson River, January 15 2009.
Wikimedia Commons, CC BY

What you might feel/see

A loud bang, vibration, sparks coming out of the engine, a smell of burning or a sudden quietening. This may result in a turn-back and an emergency services welcome. Recent headlines on engine failures – from a 737 in Sydney to a multiple bird-strike-related return in the United States – ended with safe landings.

What pilots do

After being alerted by the warning system, pilots identify the affected engine and follow the checklist. The checklist typically requires them to shut down the problematic engine, descend to an appropriate altitude and divert if in cruise, or return to the departure airport if the failure occurs after takeoff.

Even when an engine failure damages other systems, crews are trained to manage cascades of warnings – as Qantas A380 flight QF32’s crew did in 2010, returning safely to Singapore.

3. Hydraulic trouble and flight controls

What it is

An aeroplane’s many flight controls are moved by multiple hydraulic or electric systems. If one system misbehaves – for example, the left wing aileron, which is used to turn the aircraft, won’t move – redundancy keeps the aeroplane flyable, because the right wing aileron will still work.

Crews use specific checklists and adjust speeds, distances and landing configurations to ensure a safe return to the ground.

Ailerons are the hinged parts you can see at the end of the aeroplane wing.
Stephan Hinni/Unsplash

What you might feel/see

A longer hold while the crew troubleshoots, a return to the departure airport or a faster-than-normal landing. In July, a regional Qantas flight to Melbourne made an emergency landing at Mildura after a hydraulics issue.

What pilots do

After the warning system’s detection, pilots run through a checklist, decide on the landing configuration, request the longest suitable runway and emergency services just in case.

All these resources are available because lessons learned from extreme events – such as United 232’s 1989 loss of all hydraulic systems – were brought into the design of modern aeroplanes and training programs.

4. Landing gear and brake system drama

What it is

Airliners have retractable landing gear that remains inside a compartment for most of the flight. These are the wheels that come out of the aeroplane’s belly before landing. The brakes are assembled in the wheels; they slow the aircraft after touchdown, much as in a car.

With so many moving parts, sometimes the landing gear doesn’t extend or retract properly, or the braking system loses some effectiveness – for example, through the loss of a hydraulic system.

What you might feel/see

A precautionary return, cabin preparation for a potential forced landing, or a “brace for impact” instruction from the cabin crew right before landing can all happen.

While scary, these are preventive measures if something doesn’t go as planned. Earlier this year, a Qantas flight returned to Brisbane after experiencing a problem with its landing gear; passengers were told to keep “heads down” while the aircraft landed safely.

What pilots do

They’ll use long checklists and eventually contact maintenance engineers to troubleshoot the problem. There are also redundancies available to lower the landing gear and to deploy the brakes.

In extreme cases, they may be required to land at the longest runway available (in case of brake problems) or land on the belly (if the landing gear can’t be lowered).

The big picture

Most in-flight failures trigger a chain of defences aimed at keeping the flight safe. Checklists, extensive training and decades of expertise are backed by multiple redundancies and robust design. And these flights typically end like QF1889 did: safely on the ground, with passengers a little shaken.

A dramatic descent or an urgent landing doesn’t mean disaster. It usually means the safety system (aircraft + crew + checklist + training + redundancy) is doing exactly what it’s supposed to do.

The Conversation

Guido Carim Junior does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. These 4 aeroplane failures are more common than you think – and not as scary as they sound – https://theconversation.com/these-4-aeroplane-failures-are-more-common-than-you-think-and-not-as-scary-as-they-sound-265866

People trust podcasts more than social media. But is the trust warranted?

Source: The Conversation – Global Perspectives – By Jason Weismueller, Lecturer, UWA Business School, The University of Western Australia

Medy Siregar/Unsplash

There’s been a striking decline in public confidence in social media platforms, according to the 2025 Ethics Index published by the Governance Institute of Australia. One in four Australians now rate social media as “very unethical”.

This is consistent with other reports on Australian attitudes towards social media. For example, the Digital News Report 2025 similarly identified widespread concern about misinformation and distrust in news shared on social media.

And such distrust isn’t limited to Australia. The sentiment is evident worldwide. The 2025 Edelman Trust Barometer, based on an annual global survey of more than 30,000 people across 28 countries, reports a decline in trust in social media companies.

So where does this negativity come from? And are other ways of consuming information online, such as podcasts, any better? Podcasts are booming in Australia and around the world, and are often perceived much more positively than social media.

Let’s look at what the evidence says about the impacts of social media, what it does and doesn’t yet tell us about podcasts, and what this reveals about the need for accountability across digital platforms.

Where does this distrust stem from?

While social media has enabled connection, creativity and civic participation, research also highlights its downsides.

Studies have shown that, on certain social media platforms, false and sensational information can often spread faster than truth. Such information can also fuel negativity and political polarisation.

Beyond civic harms, heavy social media use has also been linked to mental health challenges. The causes are difficult to establish, but studies report associations between social media use and higher levels of depression, anxiety and psychological distress, particularly among adolescents and young adults.

In 2021, Frances Haugen, a former Facebook product manager, made public thousands of internal documents that revealed Instagram’s negative impact on teen mental health. The revelations triggered global scrutiny and intensified debate about social media accountability.

Whistleblowers such as Haugen suggest social media companies are aware of potential harms, but don’t always act.




Read more: Facebook data reveal the devastating real-world harms caused by the spread of misinformation


Podcasts have a much better reputation

In contrast to social media, podcasts appear to enjoy a very different reputation. Not only do Australians view them far more positively, but podcast consumption has significantly increased over the years.

More than half of Australians over the age of ten engage with audio or video podcasts on a monthly basis. It’s not surprising that the 2025 Australian election saw political leaders feature on podcasts as part of their campaign strategy.

YouTube, traditionally a video sharing platform, has a large section dedicated to podcasts on its home page.
YouTube

Why are podcasts so popular and trusted? Several features may help explain this.

Consumption is often more deliberate. Listeners choose specific shows and episodes instead of scrolling through endless feeds. Podcasts typically provide longer and more nuanced discussions compared with the short snippets served by social media algorithms.

Given these features, research suggests podcasts foster a sense of intimacy and authenticity. Listeners develop ongoing “relationships” with hosts and view them as credible, authentic and trustworthy.

Yet this trust can be misplaced. A Brookings Institution study analysing more than 36,000 political podcast episodes found nearly 70% contained at least one unverified or false claim. Research also shows political podcasts often rely on toxic or hostile language.

This shows that podcasts, while often perceived as more “ethical” than social media, are not automatically safer or more trustworthy spaces.

Rethinking trust in a complex media environment

What’s clear is that we shouldn’t blindly trust or dismiss any online platform, whether it’s a social media feed or a podcast. We must think critically about all the information we encounter.

We all need better tools to navigate a complex media environment. Digital literacy efforts must expand beyond social media to help people assess any information, from a TikTok clip to a long-form podcast episode.

Read more: Critical thinking is more important than ever. How can I improve my skills?

To regain public trust, social media platforms will have to behave more ethically. They should be transparent about advertising, sponsorships and moderation policies, and should make clear how content is recommended.

This expectation should also apply to podcasts, streaming services and other digital media, which can all be misused by people who want to mislead or harm others.

Governments can reinforce accountability through fair oversight, but rules will only work if they are paired with platforms acting responsibly.

Earlier this year, the Australian government released a report arguing that social media platforms have a “duty of care” towards their users – for example, that they should proactively limit the spread of harmful content.

A healthier information environment depends on sceptical but engaged citizens, stronger ethical standards across platforms, and systems of accountability that reward transparency and reliability.

The lesson is straightforward: trust or distrust alone doesn’t change whether the information you receive is actually truthful – particularly in an online environment where anyone can say anything. It’s best to keep that in mind.

The Conversation

Jason Weismueller does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. People trust podcasts more than social media. But is the trust warranted? – https://theconversation.com/people-trust-podcasts-more-than-social-media-but-is-the-trust-warranted-266791

Nobel Prize in Physics awarded for the pioneering experiments that paved the way for quantum computers

Source: The Conversation – (in Spanish) – By Rob Morris, Professor of Physics, School of Science and Technology, Nottingham Trent University

The 2025 Nobel Prize in Physics has been awarded to three scientists for the discovery of an effect with applications in medical devices and quantum computing.

John Clarke, Michel Devoret and John Martinis carried out a series of experiments around 40 years ago that ended up shaping our understanding of the strange properties of the quantum world. It is a timely prize, as 2025 marks the centenary of the formulation of quantum mechanics.

In the microscopic world, a particle can sometimes pass through a barrier and appear on the other side. This phenomenon is called quantum tunnelling. The laureates’ experiments demonstrated tunnelling in the macroscopic world – the world visible to the naked eye – and confirmed that it could be observed in an experimental electrical circuit.

Quantum tunnelling has potential future applications in improving mobile phone memory, and it has been important for the development of qubits, which store and process information in quantum computers. It also has applications in superconducting devices, which can conduct electricity with very little resistance.

British-born John Clarke is a professor of physics at the University of California, Berkeley. Michel Devoret was born in Paris and is the F. W. Beinecke Professor of Applied Physics at Yale University. John Martinis is a professor of physics at the University of California, Santa Barbara.

What is quantum tunnelling?

Quantum tunnelling is a counterintuitive phenomenon in which the tiny particles that make up everything we can see and touch can appear on the other side of a solid barrier that would otherwise be expected to stop them.

Since it was first proposed in 1927, tunnelling has been observed in very small particles, and it underpins our explanation of how large atoms radioactively decay into smaller atoms and something dubbed an alpha particle. However, it was also predicted that we might see the same behaviour in larger things: this is known as macroscopic quantum tunnelling.
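
As a rough quantitative sketch (a standard textbook result, not spelled out in the article), the probability of a particle of mass m and energy E tunnelling through a barrier of height V and width L falls off exponentially:

$$ P \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V - E)}}{\hbar} $$

Because the probability shrinks exponentially with both the particle’s mass and the barrier’s width, tunnelling is routine for electrons but vanishingly rare for anything larger – which is why demonstrating it at the macroscopic scale was so remarkable.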

How can we see quantum tunnelling?

The key to observing this macroscopic tunnelling is something called a Josephson junction, which is essentially a sophisticated kind of broken wire. The wire is not a typical wire like the one you use to charge your phone; it is made of a special type of material known as a superconductor. A superconductor has no resistance, which means current can flow through it indefinitely without losing energy. Superconductors are used, for example, to create the very strong magnetic fields in magnetic resonance imaging (MRI) scanners.

How does this help explain the strange behaviour of quantum tunnelling? If we place two superconducting wires side by side, separated by an insulator, we create our Josephson junction. It is normally fabricated as a single device which, on a basic understanding of electricity, should not conduct. Thanks to quantum tunnelling, however, current can be seen to flow through the junction.
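
For readers who want the underlying relations (standard physics, not given in the article), the behaviour of a Josephson junction is governed by the phase difference φ between the superconducting wavefunctions on either side of the insulator:

$$ I = I_c \sin\varphi, \qquad V = \frac{\hbar}{2e}\,\frac{d\varphi}{dt} $$

Here I_c is the junction’s critical current: a supercurrent of up to I_c can flow at zero voltage, carried entirely by electron pairs tunnelling through the insulating barrier.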

The three laureates demonstrated quantum tunnelling in a paper published in 1985 (it is common for this much time to pass before a Nobel Prize is awarded). It had previously been suggested that the apparent tunnelling was caused by a breakdown in the insulator. The researchers began by cooling their experimental apparatus to within a fraction of a degree of absolute zero, the coldest temperature that can be reached.

Heat can give the electrons in a conductor enough energy to get over the barrier, so it would make sense that the more the device was cooled, the fewer electrons would escape. If quantum tunnelling occurs, however, there should be a temperature below which the number of escaping electrons no longer decreases. That is exactly what the three laureates found.
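
A minimal sketch of that argument, using the standard escape-rate formulas for a current-biased junction (our addition, not the article’s): thermal activation over the energy barrier ΔU dies away as the temperature T falls, while quantum tunnelling sets a temperature-independent floor,

$$ \Gamma_{\text{thermal}} \sim \frac{\omega_p}{2\pi}\, e^{-\Delta U / k_B T}, \qquad \Gamma_{\text{quantum}} \sim \frac{\omega_p}{2\pi}\, e^{-7.2\,\Delta U / \hbar \omega_p}, $$

where ω_p is the junction’s plasma frequency. Below a crossover temperature of roughly ℏω_p / (2π k_B), the measured escape rate stops decreasing and flattens out – exactly the signature described above.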

Why is it important?

At the time, the three scientists were trying to demonstrate experimentally this developing theory of macroscopic quantum tunnelling. Even during the 2025 prize announcement, Clarke played down the discovery, despite it having been fundamental to many of the advances at the cutting edge of quantum physics today.

Quantum computing remains one of the most exciting opportunities on the near horizon and is the subject of major investment around the world. With it comes much speculation about the risks it poses to our encryption technologies.

It will also ultimately solve problems that are beyond the reach of even today’s largest supercomputers. The few quantum computers that exist today build on the work of the three 2025 physics laureates, and they will no doubt be the subject of another Nobel Prize in Physics in the coming decades.

We are already exploiting these effects in other devices, such as superconducting quantum interference devices (SQUIDs), which are used to measure tiny variations in the Earth’s magnetic field, allowing us to find minerals beneath the surface. SQUIDs also have medical uses: they can detect the extremely weak magnetic fields given off by the brain. This technique, known as magnetoencephalography or MEG, can be used, for example, to locate the specific area of the brain from which epileptic seizures emanate.

We cannot predict whether – or when – we will have quantum computers in our homes, or even in our hands. One thing is certain, though: the speed at which this new technology is developing owes a great deal to the 2025 Nobel Physics laureates, who demonstrated macroscopic quantum tunnelling in electrical circuits.

The Conversation

Rob Morris does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Nobel Prize in Physics awarded for the pioneering experiments that paved the way for quantum computers – https://theconversation.com/premio-nobel-de-fisica-a-los-experimentos-pioneros-que-allanaron-el-camino-para-las-computadoras-cuanticas-266986