We have drugs to manage HIV. So why are we spending millions looking for cures?

Source: The Conversation – Global Perspectives – By Bridget Haire, Associate Professor, Public Health Ethics, School of Population Health, UNSW Sydney


Over the past three decades there have been amazing advances in treating and preventing HIV.

It’s now a manageable infection. A person with HIV who takes HIV medicine consistently, before their immune system declines, can expect to live almost as long as someone without HIV.

The same drugs prevent transmission of the virus to sexual partners.

There is still no effective HIV vaccine. But there are highly effective drugs to prevent HIV infection for people without HIV who are at higher risk of acquiring it.

These drugs are known as “pre-exposure prophylaxis” or PrEP. PrEP comes as a pill, which needs to be taken either daily or “on demand” before and after risky sex. An injection that protects against HIV for six months has recently been approved in the United States.

So with such effective HIV treatment and PrEP, why are we still spending millions looking for HIV cures?

Not everyone has access to these drugs

Access to HIV drugs and PrEP depends on the availability of health clinics, health professionals, and the means to supply and distribute the drugs. In some countries, this infrastructure may not be secure.

For instance, US President Donald Trump’s dissolution of the USAID foreign aid program earlier this year threatened the delivery of HIV drugs to many low-income countries.

This demonstrates the fragility of current approaches to treatment and prevention. A secure, uninterrupted supply of HIV medicine is required, and without this, lives will be lost and the number of new cases of HIV will rise.

Another example is the six-monthly PrEP injection just approved in the US. This drug has great potential for controlling HIV if it is made available and affordable in countries with the greatest HIV burden.

But the prospect of lower-income countries accessing this expensive drug looks uncertain, even though some researchers say it could be made at a fraction of its current cost.

So despite the success of HIV drugs and PrEP, precarious health-care systems and high drug costs mean we can’t rely on them to bring an end to the ongoing global HIV pandemic. That’s why we also still need to look at other options.

Haven’t people already been ‘cured’?

Worldwide, at least seven people have been “cured” of HIV – or, more precisely, have had long-term sustained remission. This means that after stopping HIV drugs, they did not have any replicating HIV in their blood for months or years.

In each case, the person with HIV also had a life-threatening cancer needing a bone marrow transplant. They were each matched with a donor who had a specific genetic variation that left key bone marrow cells without the receptors HIV needs to infect them.

After the bone marrow transplant, recipients stopped HIV drugs, without detectable levels of the virus returning. The new immune cells made in the transplanted bone marrow lacked the HIV receptors. This stopped the virus from infecting cells and replicating.

But this genetic variation is very rare. Bone marrow transplantation is also risky and extremely resource-intensive. So while this strategy has worked for a few people, it is not a scalable prospect for curing HIV more widely.

So we need to keep looking for other options for a cure, including basic laboratory research to get us there.

How about the ‘breakthrough’ I’ve heard about?

HIV treatment stops the HIV replication that causes immune damage. But there are places in the body where the virus “hides” and drugs cannot reach. If the drugs are stopped, the “latent” HIV comes out of hiding and replicates again. So it can damage the immune system, leading to HIV-related disease.

One approach is to try to force the hidden or latent HIV out into the open, so drugs can target it. This is a strategy called “shock and kill”. And an example of such Australian research was recently reported in the media as a “breakthrough” in the search for an HIV cure.

Researchers in Melbourne have developed a lipid nanoparticle – a tiny ball of fat – that encapsulates messenger RNA (or mRNA) and delivers a “message” to infected white blood cells. This prompts the cells to reveal the “hiding” HIV.

In theory, this will allow the immune system or HIV drugs to target the virus.

This discovery is an important step. However, it is still in the laboratory phase of testing, and is just one piece of the puzzle.

We could say the same about many other results heralded as moving closer to a cure for HIV.

Further research on safety and efficacy is needed before testing in human clinical trials. Such trials start with small numbers and the trialling process takes many years. This and other steps towards a cure are slow and expensive, but necessary.

Importantly, any cure would ultimately need to be fairly low-tech to deliver for it to be feasible and affordable in low-income countries globally.

So where does that leave us?

A cure for HIV that is affordable and scalable would have a profound impact on human health globally, particularly for people living with HIV. Getting there is a long and arduous path that involves solving a range of scientific puzzles, then addressing implementation challenges.

In the meantime, ensuring people at risk of HIV have access to testing and prevention interventions – such as PrEP and safe injecting equipment – remains crucial. People living with HIV also need sustained access to effective treatment – regardless of where they live.

The Conversation

Bridget Haire has received funding from the National Health and Medical Research Council. She is a past president of the Australian Federation of AIDS Organisations (now Health Equity Matters).

Benjamin Bavinton receives funding from the National Health and Medical Research Council, the Australian government, and state and territory governments. He also receives funding from ViiV Healthcare and Gilead Sciences, both of which make drugs or drug classes mentioned in this article. He is a board director of the community organisation ACON, and is on the National PrEP Guidelines Panel coordinated by ASHM Health.

ref. We have drugs to manage HIV. So why are we spending millions looking for cures? – https://theconversation.com/we-have-drugs-to-manage-hiv-so-why-are-we-spending-millions-looking-for-cures-258391

Somaliland’s 30-year quest for recognition: could US interests make the difference?

Source: The Conversation – Africa (2) – By Aleksi Ylönen, Professor, United States International University

More than three decades after unilaterally declaring independence from Somalia, Somaliland still seeks international recognition as a sovereign state. Despite a lack of formal acknowledgement, the breakaway state has built a relatively stable system of governance. This has drawn increasing interest from global powers, including the United States. As regional dynamics shift and great-power competition intensifies, Somaliland’s bid for recognition is gaining new currency. Aleksi Ylönen has studied politics in the Horn of Africa and Somaliland’s quest for recognition. He unpacks what’s at play.


What legal and historical arguments does Somaliland use?

The Somali National Movement was one of the main clan-based insurgent movements responsible for the collapse of the central government in Somalia. It claimed the territory of the former British protectorate of Somaliland, which the UK had granted sovereign status on 26 June 1960.

The Somali government tried to stamp out calls for secession. It orchestrated the brutal killing of hundreds of thousands of people in northern Somalia between 1987 and 1989.

But the Somali National Movement declared unilateral independence on 18 May 1991 and separated from Somalia.

With the collapse of the Somali regime in 1991, the movement’s main enemy was gone. This led to a violent power struggle between various militias.

This subsided only after the politician Mohamed Egal consolidated power. He was elected president of Somaliland in May 1993.

Egal made deals with merchants and businessmen, giving them tax and commercial incentives to accept his patronage. As a result, he obtained the economic means to consolidate political power and to pursue peace and state-building. His successors have kept this up since his death in 2002.

What has Somaliland done to push for recognition?

Successive Somaliland governments continue to engage in informal diplomacy. They have aligned with the west, particularly the US, which was the dominant power after the cold war, and the former colonial master, the UK. Both countries host significant Somaliland diaspora communities.

The US and the UK have for decades flirted with the idea of recognising Somaliland, which they consider a strategic partner. However, they have repeatedly been held back by their respective Somalia policies. These have favoured empowering the widely supported Mogadishu government to reassert its authority and control over Somali territories.

This Somalia policy has been increasingly questioned in recent years, in part due to Mogadishu’s security challenges. In contrast, the Hargeisa government of Somaliland has largely shown it can provide security and stability. It has held elections and survived as a state for the last three decades, though it has faced political resistance and armed opposition.




Read more: Somaliland elections: what’s at stake for independence, stability and shifting power dynamics in the Horn of Africa


As new global powers rise, Somaliland administrations have pursued an increasingly diverse foreign policy, with one goal: international recognition.

Hargeisa hosts consulates and representative offices of Djibouti, Ethiopia, Kenya, Taiwan, the UK and the European Union, among others.

The government has also engaged in informal foreign relations with the United Arab Emirates. The Middle Eastern monarchy serves as a business hub and a destination of livestock exports. Many Somalilanders migrate there.

Somaliland maintains representative offices in several countries. These include Canada, the US, Norway, Sweden, the UK, Saudi Arabia, Turkey and Taiwan. Hargeisa has alienated China because it has collaborated with Taiwan since 2020. Taiwan is a self-ruled island claimed by China.

On 1 January 2024, Somaliland’s outgoing president Muse Bihi signed a memorandum of understanding with Ethiopian prime minister Abiy Ahmed for increased cooperation. Bihi implied that Ethiopia would be the first country to formally recognise Somaliland. The deal caused a sharp deterioration of relations between Addis Ababa and Mogadishu.

Abiy later moderated his position and, with Turkish mediation, reconciled with his Somali counterpart, President Hassan Sheikh Mohamud.

What’s behind US interest in Somaliland?

The US, like other great powers, has been interested in Somaliland because of its strategic location. It is on the African shores of the Gulf of Aden, across from the Arabian Peninsula. Its geographical position has gained currency recently as Yemeni Houthi rebels strike maritime traffic in the busy shipping lanes. Somaliland is also well located to curb piracy and smuggling on this global trade route.

The US Africa Command set up its main Horn of Africa base at Camp Lemonnier in Djibouti in 2002. This followed the 11 September 2001 attacks.




Read more: Somaliland’s quest for recognition: UK debate offers hint of a sea change


In 2017, China, which had become the main foreign economic power in the Horn of Africa, set up a navy support facility in Djibouti. This encouraged closer collaboration between American and Somaliland authorities. The US played with the idea of establishing a base in Berbera, which hosts Somaliland’s largest port.

With Donald Trump winning the US presidential election in 2024, there were reports of an increased push for US recognition of Somaliland. This would allow the US to deepen its trade and security partnerships in the volatile Horn of Africa region.

Since March 2025, representatives of the Trump administration have engaged in talks with Somaliland officials to establish a US military base near Berbera. This would be in exchange for a formal but partial recognition of Somaliland.

What are the risks of US recognition of Somaliland?

Stronger US engagement with Somaliland risks neglecting Somalia.

Mogadishu depends on external military assistance in its battle against the advancing violent Islamist extremist group, Al-Shabaab. It also faces increasing defiance from two federal regions, Puntland and Jubaland.

US recognition would reward Hargeisa for its persistent effort to maintain stability and promote democracy. However, it could encourage other nations to recognise Somaliland. This would deliver a blow to Somali nationalists who want one state for all Somalis.

The Conversation

Aleksi Ylönen is affiliated with the Center for International Studies, Iscte-Instituto Universitário de Lisboa, and is an associate fellow at the HORN International Institute for Strategic Studies.

ref. Somaliland’s 30-year quest for recognition: could US interests make the difference? – https://theconversation.com/somalilands-30-year-quest-for-recognition-could-us-interests-make-the-difference-255399

Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings

Source: The Conversation – Global Perspectives – By Daniel Binns, Senior Lecturer, Media & Communication, RMIT University

AI-generated with Leonardo Phoenix 1.0. Author supplied

Some say it’s em dashes, dodgy apostrophes, or too many emoji. Others suggest that maybe the word “delve” is a chatbot’s calling card. It’s no longer the sight of morphed bodies or too many fingers, but it might be something just a little off in the background. Or video content that feels a little too real.

The markers of AI-generated media are becoming harder to spot as technology companies work to iron out the kinks in their generative artificial intelligence (AI) models.

But what if, instead of trying to detect and avoid these glitches, we deliberately encouraged them? The flaws, failures and unexpected outputs of AI systems can reveal more about how these technologies actually work than the polished, successful outputs they produce.

When AI hallucinates, contradicts itself, or produces something beautifully broken, it reveals its training biases, decision-making processes, and the gaps between how it appears to “think” and how it actually processes information.

In my work as a researcher and educator, I’ve found that deliberately “breaking” AI – pushing it beyond its intended functions through creative misuse – offers a form of AI literacy. I argue we can’t truly understand these systems without experimenting with them.

Welcome to the Slopocene

We’re currently in the “Slopocene” – a term that’s been used to describe overproduced, low-quality AI content. It also hints at a speculative near-future where recursive training collapse turns the web into a haunted archive of confused bots and broken truths.




Read more: What is ‘model collapse’? An expert explains the rumours about an impending AI doom


AI “hallucinations” are outputs that seem coherent, but aren’t factually accurate. Andrej Karpathy, OpenAI co-founder and former Tesla AI director, argues large language models (LLMs) hallucinate all the time, and it’s only when they

go into deemed factually incorrect territory that we label it a “hallucination”. It looks like a bug, but it’s just the LLM doing what it always does.

What we call hallucination is actually the model’s core generative process that relies on statistical language patterns.

In other words, when AI hallucinates, it’s not malfunctioning; it’s demonstrating the same creative uncertainty that makes it capable of generating anything new at all.

This reframing is crucial for understanding the Slopocene. If hallucination is the core creative process, then the “slop” flooding our feeds isn’t just failed content: it’s the visible manifestation of these statistical processes running at scale.
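To make “statistical language patterns” concrete, here is a minimal, self-contained sketch of the sampling step that underlies every LLM output. The numbers are invented for illustration (real models score tens of thousands of tokens): the point is that the same probabilistic draw produces both the plausible answer and the “hallucinated” one.

```python
import math
import random

# Toy next-token scores ("logits") for completing "The capital of France is ...".
# In a real LLM these come from the network; here they are invented.
logits = {"Paris": 4.0, "Lyon": 2.5, "Atlantis": 1.0}

def sample_next_token(logits, temperature=1.0):
    """Temperature-scaled softmax sampling: the core generative step."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0], probs

token, probs = sample_next_token(logits, temperature=1.2)
print(probs)  # roughly {'Paris': 0.73, 'Lyon': 0.21, 'Atlantis': 0.06}
print(token)  # usually 'Paris'; occasionally 'Atlantis' -- a "hallucination"
```

Whether the draw lands on “Paris” or “Atlantis”, nothing different happens mechanically; only our label for the output changes.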

Pushing a chatbot to its limits

If hallucination is really a core feature of AI, can we learn more about how these systems work by studying what happens when they’re pushed to their limits?

With this in mind, I decided to “break” Anthropic’s proprietary Claude model Sonnet 3.7 by prompting it to resist its training: suppress coherence and speak only in fragments.

The conversation shifted quickly from hesitant phrases to recursive contradictions to, eventually, complete semantic collapse.

A language model in collapse: after a series of prompts pushed Claude Sonnet 3.7 into a recursive glitch loop, overriding its usual guardrails, its output opened with a list of logical inconsistencies, then broke into vertical strings of distorted characters, symbols and fragmented phrases until the system cut it off.
Screenshot by author.

Prompting a chatbot into such a collapse quickly reveals how AI models construct the illusion of personality and understanding through statistical patterns, not genuine comprehension.

Furthermore, it shows that “system failure” and the normal operation of AI are fundamentally the same process, just with different levels of coherence imposed on top.

‘Rewilding’ AI media

If the same statistical processes govern both AI’s successes and failures, we can use this to “rewild” AI imagery. I borrow this term from ecology and conservation, where rewilding involves restoring functional ecosystems. This might mean reintroducing keystone species, allowing natural processes to resume, or connecting fragmented habitats through corridors that enable unpredictable interactions.

Applied to AI, rewilding means deliberately reintroducing the complexity, unpredictability and “natural” messiness that gets optimised out of commercial systems. Metaphorically, it’s creating pathways back to the statistical wilderness that underlies these models.

Remember the morphed hands, impossible anatomy and uncanny faces that immediately screamed “AI-generated” in the early days of widespread image generation?

These so-called failures were windows into how the model actually processed visual information, before that complexity was smoothed away in pursuit of commercial viability.

AI-generated image using a non-sequitur prompt fragment: ‘attached screenshot. It’s urgent that I see your project to assess’. The result – two women under red umbrellas, one in bold clothing and a turquoise hat, with a red speech bubble – blends visual coherence with surreal tension: a hallmark of the Slopocene aesthetic.
AI-generated with Leonardo Phoenix 1.0, prompt fragment by author.

You can try AI rewilding yourself with any online image generator.

Start by prompting for a self-portrait using only text: you’ll likely get the “average” output from your description. Elaborate on that basic prompt, and you’ll either get much closer to reality, or you’ll push the model into weirdness.

Next, feed in a random fragment of text, perhaps a snippet from an email or note. What does the output try to show? What words has it latched onto? Finally, try symbols only: punctuation, ASCII, unicode. What does the model hallucinate into view?

The output – weird, uncanny, perhaps surreal – can help reveal the hidden associations between text and visuals that are embedded within the models.
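If you would rather script these experiments than click through a web interface, the sketch below shows one way to do it, assuming access to an image-generation API. It uses OpenAI’s Python SDK purely for illustration (the model choice and prompts are placeholders; any generator with an API would do) and expects an OPENAI_API_KEY environment variable. Note that commercial APIs may rewrite or refuse unusual prompts, which is itself part of what the experiment reveals.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three "rewilding" probes described above: a plain self-portrait prompt,
# a non-sequitur text fragment, and symbols only.
prompts = [
    "a self-portrait of the artist",
    "attached screenshot. It's urgent that I see your project to assess",
    ";; ~~ |||| :: // ...",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-3",   # illustrative model choice
        prompt=prompt,
        size="1024x1024",
    )
    # Each response carries a URL to the generated image.
    print(f"{prompt!r} -> {result.data[0].url}")
```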

Insight through misuse

Creative AI misuse offers three concrete benefits.

First, it reveals bias and limitations in ways normal usage masks: you can uncover what a model “sees” when it can’t rely on conventional logic.

Second, it teaches us about AI decision-making by forcing models to show their work when they’re confused.

Third, it builds critical AI literacy by demystifying these systems through hands-on experimentation. Critical AI literacy provides methods for diagnostic experimentation, such as testing – and often misusing – AI to understand its statistical patterns and decision-making processes.

These skills become more urgent as AI systems grow more sophisticated and ubiquitous. They’re being integrated into everything from search to social media to creative software.

When someone generates an image, writes with AI assistance or relies on algorithmic recommendations, they’re entering a collaborative relationship with a system that has particular biases, capabilities and blind spots.

Rather than mindlessly adopting or reflexively rejecting these tools, we can develop critical AI literacy by exploring the Slopocene and witnessing what happens when AI tools “break”.

This isn’t about becoming more efficient AI users. It’s about maintaining agency in relationships with systems designed to be persuasive, predictive and opaque.

The Conversation

Daniel Binns is an Associate Investigator with the ARC Centre of Excellence for Automated Decision-Making and Society.

ref. Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings – https://theconversation.com/understanding-the-slopocene-how-the-failures-of-ai-can-reveal-its-inner-workings-258584

Digital government can benefit citizens: how South Africa can reduce the risks and get it right

Source: The Conversation – Africa (2) – By Busani Ngcaweni, Visiting Adjunct Professor, Wits School of Governance, University of the Witwatersrand

The digital revolution is reshaping governance worldwide. From the electronic filing of taxes to digital visa applications, technology is making government services more accessible, efficient and transparent.

South Africa is making progress in its digital journey. In 2024 it climbed to 40th place out of 193 countries in the United Nations e-Government Index, up from 65th in 2022. This improvement makes the country one of Africa’s digital leaders, surpassing Mauritius and Tunisia.

South Africa has identified more than 255 government services for digitisation. Already, 134 are available on the National e-Government Portal. This achievement is remarkable. Nevertheless, the shift to digitisation comes with challenges and risks.

Some countries have weakened the state’s role by rapidly outsourcing key government functions. But South Africa has the opportunity to build a model of digital transformation that strengthens public institutions rather than diminishes them.

New technologies must bring tangible benefits for citizens. Digital transformation can improve public administration. But, if mismanaged, it could burden taxpayers with costs.

Benefits

Digital transformation comes at a cost. This is particularly true if the state fails to use its procurement power to negotiate reasonable prices. Infrastructure upgrades, cybersecurity measures, software licensing and system maintenance require substantial financial investment.

The question is whether these expenses are a necessary step towards a more efficient and accessible government.

Two South African examples illustrate that digital transformation can save money and enhance service delivery quality.

The first is the South African Revenue Service. Its goal is to ensure that taxpayers and tax advisers can use the service from anywhere and at any time. The changes made more than a decade ago show that digital systems can yield substantial financial gains. After introducing e-filing in 2006, the revenue service streamlined tax processes, reduced inefficiencies and achieved higher compliance rates, ultimately improving revenue collection.

Similarly, digitising social grant payments has had a number of positive effects. In a chapter of a recent edited volume on public governance, my colleagues and I wrote a case study about how the South African Social Security Agency used basic technologies and platforms like WhatsApp and email to process a grant during the COVID pandemic. It allowed over 14 million people to apply, and paid grants to over 6 million beneficiaries during the first phase of the project.

South African Social Security Agency annual reports show that over 95% of grant beneficiaries receive their payouts electronically through debit cards, instead of going to cash points. This improves security and lets beneficiaries decide when to get and spend their money.

There are fears that automation could result in massive job losses. But global experience has shown that digitalisation does not necessarily lead to large-scale retrenchments. Instead it can shift the nature of work to other responsibilities.

The South African Social Security Agency provides a compelling case. Its transition to digital grant payments did not lead to job losses. Similarly, the expansion of e-filing at the revenue service has not resulted in workforce reductions. In both cases efficiencies improved.

These cases highlight that digital transformation is reshaping roles rather than displacing employees. Public servants are moving into areas such as cybersecurity, data analysis and AI-driven decision-making.

Shortcomings and pitfalls

A number of inefficiencies are at play in government services.

Firstly, most government digital operations still work alongside outdated paper-based systems. The lack of a uniform digital identity creates bureaucratic inefficiencies and delays.

Secondly, fragmented procurement of equipment in government has led to duplicated efforts, increased costs and fruitless expenditure.

Thirdly, different departments often use isolated and incompatible digital systems. This reduces the mutual benefits of digital transformation. The State Information Technology Agency has been blamed for inefficiencies, procurement failures and questionable spending.

Fourthly, South Africa’s public service remains fragmented. Citizens still struggle to access government services seamlessly. They often move between departments to complete what should be a single transaction.

Without a centralised system, departments operate in isolation, duplicating efforts, increasing costs and eroding public trust.




Read more: South Africa’s civil servants are missing skills, especially when it comes to technology – report


Fifth, a lack of skills. Increasing reliance on digital tools requires expertise in data analytics, cloud computing and automation. Many public servants lack the training to take on these new roles. The National Digital and Future Skills Strategy was introduced in September 2020 to bridge this gap, but its effectiveness depends on its implementation.

The strategy was introduced at the height of the COVID-19 pandemic, which forced government to make digital leaps that might otherwise have taken longer. To sustain services, technology had to be adopted rapidly, including basic things like holding Cabinet meetings online using a system quickly developed by the State Information Technology Agency.

Sixth, security concerns complicate the transformation. As government systems become digital, they become vulnerable to cyberattacks. South Africa must put in place cybersecurity infrastructure to prevent identity theft, data breaches and service disruptions. A cyberattack on one department could affect the entire public sector.

What needs to be done

Government must streamline procurement, improve coordination and eliminate inefficiencies to ensure interdepartmental collaboration.

A single, integrated e-government platform would:

  • cut red tape

  • reduce queues

  • increase efficiency.

Government needs to upskill civil servants and improve their digital literacy.

Government must create a seamless e-government system that connects services while protecting citizens’ personal information. The success of digitalisation depends on technological advancements as well as the level of trust citizens have in government systems. Without strong security measures, transparency and accountability, even the most sophisticated digital tools will fail to gain public confidence.

South Africa has the chance to demonstrate that a strong, capable state can successfully integrate technology while safeguarding public interests. It should take full advantage of offers by Microsoft, Amazon and Huawei to support digital skills training in the public sector in a way that does not advantage one company’s technologies over others. Choices of technology must be user-centric, not based on preferences of accounting officers and chief information officers. Leaders of public institutions must be measured on their ability to digitally transform their organisations.

The Conversation

Busani Ngcaweni is affiliated with the National School of Government, Wits and Johannesburg Universities.

ref. Digital government can benefit citizens: how South Africa can reduce the risks and get it right – https://theconversation.com/digital-government-can-benefit-citizens-how-south-africa-can-reduce-the-risks-and-get-it-right-254089

The 28 Days Later franchise redefined zombie films. But the undead have an old, rich and varied history

Source: The Conversation – Global Perspectives – By Christopher White, Historian, The University of Queensland

The history of the dead – or, more precisely, the history of the living’s fascination with the dead – is an intriguing one.

As a researcher of the supernatural, I’m often pulled aside at conferences or at the school gate, and told in furtive whispers about people’s encounters with the dead.

The dead haunt our imagination in a number of different forms, whether as “cold spots”, or the walking dead popularised in zombie franchises such as 28 Days Later.

The franchise’s latest release, 28 Years Later, brings back the Hollywood zombie in all its glory – but these archetypal creatures have a much wider and varied history.

Zombis, revenants and the returning dead

A zombie is typically a reanimated corpse: a category of the returning dead. Scholars refer to them as “revenants”, and continue to argue over their exact characteristics.

In the Haitian Vodou religion, the zombi is not the same as the Hollywood zombie. Instead, zombi are people who, as a religious punishment, are drugged, buried alive, then dug out and forced into slavery.

The Hollywood zombie, however, draws more from medieval European stories about the returning dead than from Vodou.

A perfect setting for a ‘zombie’ film

In 28 Years Later, the latest entry in Danny Boyle’s blockbuster horror franchise, the monsters technically aren’t zombies because they aren’t dead. Instead, they are infected by a “rage virus”, accidentally released by a group of animal rights activists at the beginning of the first film.

This third film focuses on events almost three decades after the first film. The British Isles are quarantined, and the young protagonist Spike (Alfie Williams) and his family live in a village on Lindisfarne Island. This island, one of the most important sites in early medieval British Christianity, is isolated and protected by a tidal causeway that links it to the mainland.

Aaron Taylor-Johnson and Alfie Williams star in the new film, out in Australian cinemas today.
Sony Pictures

The film leans heavily on how we imagine the medieval world, with scenes showing silhouetted fletchers at work making arrows, children training with bows, towering ossuaries and various memento mori. There’s also footage from earlier depictions of medieval warfare. And at one point, the characters seek sanctuary in the ruins of Fountains Abbey, in Yorkshire, which was founded in 1132.

The medieval locations and imagery of 28 Years Later evoke the long history of revenants, and the returned dead who once roved medieval England.

Early accounts of the medieval dead

In the medieval world, or at least the parts that wrote in Latin, the returning dead were usually called spiritus (“spirit”), but they weren’t limited to the non-corporeal like today’s ghosts are.

Medieval Latin Christians from as early as the 3rd century saw the dead as part of a parallel society that mirrored the world of the living, where each group relied on the other to aid them through the afterlife.

Depiction of the undead from a medieval manuscript.
British Library, Yates Thompson MS 13

While some medieval ghosts would warn the living about what awaited sinners in the afterlife, or lead their relatives to treasure, or prophesy the future, some also returned to terrorise the living.

And like the “zombies” affected by the rage virus in 28 Years Later, these revenants could go into a frenzy in the presence of the living.

Thietmar, the Prince-Bishop of Merseburg, Germany, wrote the Chronicon Thietmari (Thietmar’s Chronicle) between 1012 and 1018, and included a number of ghost stories that featured revenants.

Although not all of them framed the dead as terrifying, they certainly didn’t paint them as friendly, either. In one story, a congregation of the dead at a church set the priest upon the altar, before burning him to ashes – intended to be read as a mirror of pagan sacrifice.

These dead were physical beings, capable of seizing a man and sacrificing him in his own church.

A threat to be dealt with

The English monastic historian William of Newburgh (1136–98) wrote revenants were so common in his day that recording them all would be exhausting. According to him, the returned dead were frequently seen in 12th century England.

So, instead of providing an exhaustive list, he offered some choice examples which, like most medieval ghost stories, had a good Christian moral attached to them.

William’s revenants mostly killed the people of the towns where they had lived, returning to the grave between their escapades. But the medieval English had a method for dealing with these monsters: they dug them up, tore out the heart and then burned the body.

Other revenants were dealt with less harshly, William explained. In one case, all it took was the Bishop of Lincoln writing a letter of absolution to stop a dead man returning to his widow’s bed.

These medieval dead were also thought to spread disease – much like those infected with the rage virus – and were capable of physically killing someone.

Depiction of the undead from a medieval manuscript.
British Library, Arundel MS 83.

The undead, further north

In medieval Scandinavia and Iceland, the undead draugr were extremely strong, hideous to look at and stank of decomposition. Some were immune to human weapons and often killed animals near their tombs before building up to killing humans. Like their English counterparts, they also spread disease.

But according to the Eyrbyggja saga, an anonymous 13th or 14th century text written in Iceland, all it took was a type of community court and the threat of legal action to drive off these returned dead.

It’s a method the survivors in 28 Years Later didn’t try.

The dead live on

The first-hand zombie stories that were common during the medieval period started to dwindle in the 16th century with the Protestant Reformation, which focused more on individuals’ behaviours and salvation.

Nonetheless, their influence can still be felt in Catholic ritual practices today, such as in prayers offered for the dead, and the lighting of votive candles.

We still tell ghost stories, and we still worry about things that go bump in the night. And of course, we continue to explore the undead in all its forms on the big screen.

The Conversation

Christopher White does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The 28 Days Later franchise redefined zombie films. But the undead have an old, rich and varied history – https://theconversation.com/the-28-days-later-franchise-redefined-zombie-films-but-the-undead-have-an-old-rich-and-varied-history-247900

Is Sabrina Carpenter’s Man’s Best Friend album cover satire or self-degradation? A psychology expert explores our reactions

Source: The Conversation – Global Perspectives – By Katrina Muller-Townsend, Lecturer in Psychology, Edith Cowan University


Sabrina Carpenter’s Man’s Best Friend album cover has fans divided.

Carpenter poses on all fours, her glossy blond hair grasped by a male figure cropped from the frame. Her wide-eyed expression intensifies an ambiguous performance of subservience, tapping into a visual language tied to female objectification, from classic pin-up imagery to contemporary pop culture.

The emotionally loaded image plays on her hyper-feminine, tongue-in-cheek pop star persona, forcing us to question where irony ends and objectification begins.

Is it satire, or self-degradation?

Up for debate

At first glance, the cover seems like just another stylised, provocative pop image. It delivers what we’ve come to expect: a bold, ironic twist on the exaggerated Juno-style pose she reinvents on stage.

To some fans, it’s clever satire: a pop star reclaiming and amplifying her image to mock industry norms. Satire uses exaggeration, irony, or humour to critique power structures – and Carpenter’s pose walks that tightrope.

To others it crosses a line, reinforcing regressive attitudes about women’s sexuality and drawing criticism from domestic violence advocates.

The debate reflects our unresolved discomfort about gender, power and control. There is a tension between Carpenter’s ironic persona and the submissive pose, creating uncertainty for the viewer.

We can use psychology to better understand this dichotomy.

The schema violation

This mismatch between expectation and perception is a schema violation.

A schema is a mental shortcut: a template built from experience and unspoken rules that helps us make sense of the world and predict what to expect. When something breaks that pattern, it’s called a schema violation.

Carpenter’s brand is cheeky, self-aware irony – so when she adopts a pose steeped in submission and hyper-femininity as in this album image, it feels off.

That can trigger cognitive dissonance: the mental tension we feel when two ideas (here, empowerment and obedience) don’t align.

To resolve the conflict, some fans reinterpret the image as feminist sarcasm. Others reject it, fearing it panders to outdated, dangerous norms.

Both reactions reflect our emotional and ideological investments in who Carpenter is or should be.

Exploring confirmation bias

Part of this conflicted reaction is driven by confirmation bias: our tendency to filter information to support what we already believe.

Fans who see Carpenter as witty and empowered interpret the image as intentionally ironic. Others – more sceptical of the industry’s history of exploiting female sexuality – view it as a throwback to damaging norms.

Either way, our interpretations often reflect more about ourselves than about Carpenter’s intent.

When her image contradicts both her public persona and our social values, it creates a gap between what we think is right and what we want to be right. So, we try to explain it away, by either defending the image or criticising it.

Satire and scandal

Carpenter’s cover follows a long tradition of female artists whose work straddles satire and scandal, complicating public reception.

Madonna’s Like a Prayer drew outrage for mixing religion with sexual imagery. Yet it positioned her as a provocateur – a woman resisting the lack of agency that so often defines sexualised media.

Miley Cyrus’ Bangerz era shocked fans with a bold shift from Hannah Montana innocence to hypersexualised rebellion, challenging the narrow roles women in pop culture are confined to.

Doja Cat’s shift from glam pop princess to glitch villainess unsettled audiences. Was it satire, rebellion, or just chaos?

These women, like Carpenter, force us to confront our own discomfort with women who won’t stay in one lane.

Performer and provocateur

Audience reaction is also shaped by emotional investment in Carpenter’s persona. Through carefully curated social media, interviews and lyrics, fans build intimate narratives forming parasocial relationships – one-sided emotional bonds with celebrities.

When an image contradicts that imagined persona, it can feel jarring, even like betrayal.

Audiences often expect idols to be empowering but not polarising, sexy but safe, to challenge norms – but only in ways that affirm our own values.

Carpenter’s image breaks that implicit contract, which creates discomfort for some viewers.

Carpenter’s cover raises uncomfortable but necessary questions about how much freedom female artists have to be both critical and complicit. Can they play with society and play along, to be both performer and provocateur?

This highlights the double bind many women face in media and popular culture. Female artists are expected to both subvert and satisfy: to entertain without offending, and to empower without alienating. The burden of being both palatable and provocative is one male artists rarely face.

It’s what we make of it

Is Carpenter undermining herself or subverting the system? Perhaps both. Or perhaps the image isn’t the message: our reaction is.

The image forces us to confront not only our perception of Sabrina Carpenter but also our cultural discomfort with women who defy neat categorisation. Satire demands interpretation, especially when it comes from women addressing sex or power.

More than provocation, Carpenter’s cover mirrors our cultural struggle to accept women who defy simple labels of satire or submission. The image can reflect broader social ideals and tensions projected onto public figures.

What we see says more about our assumptions than her intent. Understanding those reactions doesn’t kill the fun – it deepens it.

The Conversation

Katrina Muller-Townsend does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Is Sabrina Carpenter’s Man’s Best Friend album cover satire or self-degradation? A psychology expert explores our reactions – https://theconversation.com/is-sabrina-carpenters-mans-best-friend-album-cover-satire-or-self-degradation-a-psychology-expert-explores-our-reactions-259043

Nineteen Eighty-Four might have been inspired by George Orwell’s fear of drowning

Source: The Conversation – Global Perspectives – By Nathan Waddell, Associate Professor in Twentieth-Century Literature, University of Birmingham

George Orwell had a traumatic relationship with the sea. In August 1947, while he was writing Nineteen Eighty-Four (1949) on the island of Jura in the Scottish Hebrides, he went on a fishing trip with his young son, nephew and niece.

Having misread the tidal schedules, Orwell piloted the boat into rough swells on the way back. He was pulled into the fringe of the Corryvreckan whirlpool off the coasts of Jura and Scarba. The boat capsized, and Orwell and his relatives were thrown overboard.

It was a close call – a fact recorded with characteristic detachment by Orwell in his diary that same evening: “On return journey today ran into the whirlpool & were all nearly drowned.” Though he seems to have taken the experience in his stride, this may have been a trauma response: detachment is one way of carrying on after a near-death experience.

We don’t know for sure if Nineteen Eighty-Four was influenced by the Corryvreckan incident. But it’s clear that the novel was written by a man fixated on water’s terrifying power.


This article is part of Rethinking the Classics. The stories in this series offer insightful new ways to think about and interpret classic books and artworks. This is the canon – with a twist.


Nineteen Eighty-Four isn’t typically associated with fear of death by water. Yet it’s filled with references to sinking ships, drowning people and the dread of oceanic engulfment. Fear of drowning is a torment that social dissidents might face in Room 101, the torture chamber to which all revolutionaries are sent in the appropriately named totalitarian state of Oceania.

An early sequence in the novel describes a helicopter attack on a ship full of refugees, who are bombed as they fall into the sea. The novel’s protagonist, Winston Smith, has a recurring nightmare in which he dreams of his long-lost mother and sister trapped “in the saloon of a sinking ship, looking up at him through the darkening water”.

George Orwell in 1943.
National Union of Journalists

The sight of them “drowning deeper every minute” takes Winston back to a culminating moment in his childhood when he stole chocolate from his mother’s hand, possibly condemning his sister to starvation. These watery graves imply that Winston is drowning in guilt.

The “wateriness” of Nineteen Eighty-Four may have another interesting historical source. In his essay My Country Right or Left (1940), Orwell recalls that when he had just become a teenager he read about the “atrocity stories” of the first world war.

Orwell states in this same essay that “nothing in the whole war moved [him] so deeply as the loss of the Titanic had done a few years earlier”, in 1912. What upset Orwell most about the Titanic disaster was that in its final moments it “suddenly up-ended and sank bow foremost, so that the people clinging to the stern were lifted no less than 300 feet into the air before they plunged into the abyss”.

Sinking ships and dying civilisations

Orwell never forgot this image. Something similar to it appears in his novel Keep the Aspidistra Flying (1936) where the idea of a sinking passenger liner evokes the collapse of modern civilisation, just as the Titanic disaster evoked the end of Edwardian industrial confidence two decades beforehand.

The Titanic disaster had a profound impact on Orwell.
Wiki Commons

References to sinking ships and drowning people appear at key moments in many other works by Orwell, too. But did the full impact of the Titanic surface in Nineteen Eighty-Four?

Sinking ships were part of Orwell’s descriptive toolkit. In Nineteen Eighty-Four, a novel driven by memories of unsympathetic water, they convey nightmares. Filled with references to water and liquidity, it’s one of the most aqueous novels Orwell produced, relying for many of its most shocking episodes on imagery of desperate people drowning or facing imminent death on sinking sea craft.

The thought of trapped passengers descending into the depths survives in Winston’s traumatic memories of his mother and sister, who, in the logic of his dreams, are alive inside a sinking ship’s saloon.




There’s no way to prove that Nineteen Eighty-Four is “about” the Titanic disaster, but in the novel, and indeed in Orwell’s wider body of work, there are too many tantalising hints to let the matter rest.

Thinking about fear of death by water takes us into Orwell’s terrors just as it takes us into Winston’s, allowing readers to see the frightened boy inside the adult man and, indeed, inside the author who dreamed up one of the 20th century’s most famous nightmares.

Beyond the canon

As part of the Rethinking the Classics series, we’re asking our experts to recommend a book or artwork that tackles similar themes to the canonical work in question, but isn’t (yet) considered a classic itself. Here is Nathan Waddell’s suggestion:

As soon as the news broke of the Titanic’s sinking, literary works of all shapes and sizes started to appear in tribute to the disaster and its victims. As the century went on, and as research into the tragedy developed (particularly after the ship’s wreckage was discovered in 1985), more nuanced literary responses to the sinking became possible.

One such response is Beryl Bainbridge’s Whitbread-prize-winning novel Every Man for Himself (1996). It reimagines the disaster from the first-person perspective of an imaginary character, Morgan, the fictional nephew of the historically real financier J. P. Morgan (who was booked to sail on the Titanic but changed his plans before the voyage).


The Conversation

Nathan Waddell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Nineteen Eighty-Four might have been inspired by George Orwell’s fear of drowning – https://theconversation.com/nineteen-eighty-four-might-have-been-inspired-by-george-orwells-fear-of-drowning-251289

How pterosaurs learned to fly: scientists have been looking in the wrong place to solve this mystery

Source: The Conversation – Global Perspectives – By Davide Foffa, Research Fellow in Palaeobiology, University of Birmingham

Ever since the first fragments of pterosaur bone surfaced nearly 250 years ago, palaeontologists have puzzled over one question: how did these close cousins of land-bound dinosaurs take to the air and evolve powered flight? The first flying vertebrates seemed to appear on the geological stage fully formed, leaving almost no trace of their first tentative steps into the air.

Taken at face value, the fossil record implies that pterosaurs suddenly originated in the later part of the Triassic period (around 215 million years ago), close to the equator on the northern super-continent Pangaea. They then spread quickly between the Triassic and the Jurassic periods, about 10 million years later, in the wake of a mass extinction that was most likely caused by massive volcanic activity.

Most of the handful of Triassic specimens come from narrow seams of dark shale in Italy and Austria, with other fragments discovered in Greenland, Argentina and the southwestern US. These skeletons appear fully adapted for flight, with a hyper-elongated fourth finger supporting membrane-wings. Yet older rocks show no trace of intermediate gliders or other transitional forms that you might expect as evidence of pterosaurs’ evolution over time.

There are two classic competing explanations for this. The literal reading says pterosaurs evolved elsewhere and did not reach those regions where most have been discovered until very late in the Triassic period, by which time they were already adept flyers. The sceptical reading notes that pterosaurs’ wafer-thin, hollow bones could easily vanish from the fossil record, dissolve, get crushed or simply be overlooked, creating this false gap.

Eudimorphodon ranzii fossil from Bergamo in 1973 is one of many pterosaur discoveries from southern Europe.
Wikimedia, CC BY-SA

For decades, the debate stalled as a result of too few fossils or too many missing rocks. This impasse began to change in 2020, when scientists identified the closest relatives of pterosaurs in a group of smallish upright reptiles called lagerpetids.

From comparing many anatomical traits across different species, the researchers established that pterosaurs and lagerpetids shared many similarities including their skulls, skeletons and inner ears. While this discovery did not bring any “missing link” to the table, it showed what the ancestor of pterosaurs would have looked like: a rat-to-dog-sized creature that lived on land and in trees.

This brought new evidence about when pterosaurs may have originated. Pterosaurs and lagerpetids like Scleromochlus, a small land-dwelling reptile, diverged at some point after the end-Permian mass extinction. It occurred some 250 million years ago, 35 million years before the first pterosaur appearance in the fossil record.

Scleromochlus is one of the lagerpetids, the closest known relatives to the pterosaurs.
Gabriel Ugueto

Pterosaurs and their closest kin did not share the same habitats, however. Our new study, featuring new fossil maps, shows that soon after lagerpetids appeared (in southern Pangaea), they spread across wide areas, including harsh deserts, that many other groups were unable to get past. Lagerpetids lived both in these deserts and in humid floodplains.

They tolerated hotter, drier settings better than any early pterosaur, implying that they had evolved to cope with extreme temperatures. Pterosaurs, by contrast, were more restricted. Their earliest fossils cluster in the river and lake beds of the Chinle and Dockum basins (southwest US) and in moist coastal belts fringing the northern arm of the Tethys Sea, a vast body of water that covered the area of today’s Alps.

By analysing a combination of fossil distributions, rock features and climate simulations, scientists have inferred that pterosaurs lived in areas that were warm but not scorching. Rainfall there would have been comparable to today’s tropical forests rather than inland deserts.

This suggests the earliest pterosaurs may have lived in tree canopies, using foliage both for take-off and to protect themselves from predators and heat. As a result of this confined habitat, the distances that they flew may have been quite limited.

Changing climates

We were then able to add a fresh dimension to the story using a method called ecological niche modelling. This is routinely used in modern conservation to project where endangered animals and plants might live as the climate gets hotter. By applying this approach to later Triassic temperatures, rainfall and coastlines, we asked where early pterosaurs lived, regardless of whether they’ve shown up there in the fossil record.
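To illustrate the logic (not the study’s actual method, which is far more sophisticated), here is a toy “climate envelope” model, the simplest form of ecological niche modelling: record the climate at known occurrence sites, then ask which map cells fall inside the observed range. All numbers below are invented for illustration, not taken from the study.

```python
import numpy as np

# Invented climate values at sites with early pterosaur fossils:
# columns are mean annual temperature (deg C) and rainfall (mm/year).
occurrences = np.array([
    [26.0, 1400.0],
    [24.5, 1600.0],
    [27.0, 1300.0],
    [25.0, 1500.0],
])

def climate_envelope(occurrences, grid):
    """A cell is 'suitable' if every climate variable lies within the
    range observed at the known occurrence sites."""
    lo = occurrences.min(axis=0)
    hi = occurrences.max(axis=0)
    return np.all((grid >= lo) & (grid <= hi), axis=1)

# Candidate map cells with simulated Triassic climate for the same variables.
grid = np.array([
    [25.5, 1450.0],  # warm and wet: inside the envelope
    [34.0,  200.0],  # desert interior: outside
    [26.5, 1350.0],  # inside
])

print(climate_envelope(occurrences, grid))  # [ True False  True]
```

Real niche models (MaxEnt and its relatives) weight variables and estimate probabilities rather than drawing hard boxes, but the underlying question is the same: given where a group did live, where else could it have lived?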

Many celebrated fossil sites in Europe emerge as poor pterosaur habitat until very late in the Triassic period: they were simply too hot, too dry or otherwise inhospitable before the Carnian age, around 235 million years ago. The fact that no specimens have been discovered there that are more than about 215 million years old may be because the climate conditions were still unsuitable or simply because we don’t have the right type of rocks preserved of that age.

In contrast, parts of the south-western US, Morocco, India, Brazil, Tanzania and southern China seem to have offered welcoming environments several million years earlier than the age of our oldest discoveries. This rewrites the search map. If pterosaurs could have thrived in those regions much more than 215 million years ago, but we have not found them there, the problem may again lie not with biology but with geology: the right rocks have not been explored, or they preserve fragile fossils only under exceptional conditions.

Our study flags a dozen geological formations, from rivers with fine sediment deposits to lake beds, as potential prime targets for the next breakthrough discovery. They include the Timezgadiouine beds of Morocco, the Guanling Formation of south-west China and, in South America, several layers of rock from the Carnian age, such as the Santa Maria Formation, Chañares Formation and Ischigualasto Formation.

Pterosaurs were initially confined to tropical treetops near the equator. When global climates shifted and forested corridors opened, pterosaurs’ wings catapulted them into every corner of the planet and ultimately carried them through one of Earth’s greatest extinctions. What began as a tale of missing fossils has become a textbook example of how climate, ecology and evolutionary science have come together to illuminate a fragmentary history that has intrigued palaeontologists for over two centuries.

The Conversation

Davide Foffa is funded by Marie Skłodowska-Curie Actions: Individual (Global) Fellowship (H2020-MSCA-IF-2020; No.101022550), and by the Royal Commission for the Exhibition of 1851–Science Fellowship

Alfio Alessandro Chiarenza receives funding from The Royal Society (Newton International Fellowship NIFR1231802)

Emma Dunne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How pterosaurs learned to fly: scientists have been looking in the wrong place to solve this mystery – https://theconversation.com/how-pterosaurs-learned-to-fly-scientists-have-been-looking-in-the-wrong-place-to-solve-this-mystery-259063

Are Israel’s actions in Iran illegal? Could it be called self-defence? An international law expert explains

Source: The Conversation – Global Perspectives – By Shannon Bosch, Associate Professor (Law), Edith Cowan University

Israel’s major military operation against Iran has targeted its nuclear program, including its facilities and scientists, as well as its military leadership.

In response, the United Nations Security Council quickly convened an emergency sitting. There, Israel’s ambassador to the UN, Danny Danon, defended Israel’s actions as a “preventative strike” carried out with “precision, purpose, and the most advanced intelligence”. It aimed, he said, to:

dismantle Iran’s nuclear programme, eliminate the architects of its terror and aggression and neutralise the regime’s ability to follow through on its repeated public promise to destroy the state of Israel.

So, what does international law say about self-defence? And were Israel’s actions illegal under international law?

When is self-defence allowed?

Article 2.4 of the UN charter states:

All members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations.

There are only two exceptions:

  1. when the UN Security Council authorises force, and
  2. when a state acts in self-defence.

This “inherent right of individual or collective self-defence”, as article 51 of the UN charter puts it, persists until the Security Council acts to restore international peace and security.

So what does ‘self-defence’ actually mean?

The International Court of Justice (ICJ) has consistently interpreted self-defence narrowly.

In many cases, it has rejected arguments from states such as the United States, Uganda and Israel that have sought to promote a more expansive interpretation of self-defence.

The 9/11 attacks marked a turning point. The UN Security Council affirmed in resolutions 1368 and 1373 that the right to self-defence extends to defending against attacks by non-state actors, such as terrorist groups. The US, invoking this right, launched its military action in Afghanistan.

The classic understanding of self-defence – that it’s justified when a state responds reactively to an actual, armed attack – was regarded as being too restrictive in the age of missiles, cyberattacks and terrorism.

This helped give rise to the idea of using force before an imminent attack, in anticipatory self-defence.

The threshold for anticipatory self-defence is widely seen by scholars as high. It requires what’s known as “imminence”: force may be used only in the “last possible window of opportunity” to stop an otherwise unavoidable attack.

As set out by then-UN Secretary-General Kofi Annan in 2005:

as long as the threatened attack is imminent, no other means would deflect it and the action is proportionate, this would meet the accepted interpretation of self defence under article 51.

As international law expert Donald Rothwell points out, the legitimacy of anticipatory self-defence hinges on factual scrutiny and strict criteria, balancing urgency, legality and accountability.

However, the lines quickly blurred

In 2002, the US introduced a “pre-emptive doctrine” in its national security strategy.

This argued new threats – such as terrorism and weapons of mass destruction – justified using force to forestall attacks before they occurred.

Critics, including Annan, warned that if the notion of preventive self-defence was widely accepted, it would undermine the prohibition on the use of force. It would basically allow states to act unilaterally on speculative intelligence.

Annan acknowledged:

if there are good arguments for preventive military action, with good evidence to support them, they should be put to the Security Council, which can authorise such action if it chooses to.

If it does not so choose, there will be, by definition, time to pursue other strategies, including persuasion, negotiation, deterrence and containment – and to visit again the military option.

This is exactly what Israel failed to do before attacking Iran.

Lessons from history

Israel’s stated goal was to damage Iran’s nuclear program and prevent it from developing a nuclear weapon that could be used against it.

This is explicitly about preventing an alleged, threatened, future attack by Iran with a nuclear weapon that, according to all publicly available information, Iran does not currently possess.

This is not the first time Israel has advanced a broad interpretation of self-defence.

In 1981, Israel bombed Iraq’s Osirak nuclear reactor, which was under construction on the outskirts of Baghdad. It claimed a nuclear-armed Iraq would pose an unacceptable threat. The UN Security Council condemned the attack.

As international law stands, unless an armed attack is imminent and unavoidable, such strikes are likely to be considered unlawful uses of force.

While there is still time and opportunity to use non-forcible means to prevent a threatened attack, there is no necessity to act in self-defence.

Diplomatic engagement, sanctions and international monitoring of Iran’s nuclear program – such as through the International Atomic Energy Agency – remain the lawful means of addressing the emerging threat posed by Tehran.

Preserving the rule of law

The right to self-defence is not a blank cheque.

Anticipatory self-defence remains legally unsettled and highly contested.

So were Israel’s attacks on Iran a legitimate use of “self-defence”? I would argue no.

I concur with international law expert Marko Milanovic that Israel’s claim to be acting in preventive self-defence must be rejected on the facts available to us.

In a volatile world, preserving these legal limits is essential to avoiding unchecked aggression and preserving the rule of law.

The Conversation

Shannon Bosch does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Are Israel’s actions in Iran illegal? Could it be called self-defence? An international law expert explains – https://theconversation.com/are-israels-actions-in-iran-illegal-could-it-be-called-self-defence-an-international-law-expert-explains-259259

Computers tracking us, an ‘electronic collar’: Gilles Deleuze’s 1990 Postscript on the Societies of Control was eerily prescient

Source: The Conversation – Global Perspectives – By Cameron Shackell, Sessional Academic, School of Information Systems, Queensland University of Technology

Our cultural touchstones series looks at influential works.

Gilles Deleuze was one of the most original and imaginative thinkers of postwar France. A lifelong teacher, he spent most of his career at the University of Paris VIII, influencing generations of students but largely shunning the mantle of public intellectual.

His complex, creative books mix philosophy, literature, film and politics – not to give clear answers, but to spark new ways of thinking.

Postscript on the Societies of Control, published 35 years ago in the countercultural L’Autre Journal, is Deleuze at his most accessible and prophetic.

Written at a time when the Cold War was ending, computers were becoming more common, and the internet was beginning to connect institutions, the essay describes the emergence of a new kind of society – one not ruled by a single stern voice but by the soft hum of networks.

How societies work

Postscript was written as an update to the work of Deleuze’s contemporary Michel Foucault, who had died in 1984. Deleuze called it a “postscript” not just because of its brevity (it’s only around 2,300 words in English translation) but to highlight he wasn’t refuting Foucault, just building on his work.

Gilles Deleuze. Tintinades/Wikimedia Commons, CC BY-NC-SA

From the 18th to early 20th centuries, Foucault had argued, Western societies were “disciplinary societies”. Schools, factories, prisons and hospitals – institutions with walls, schedules, routines and clear expectations – moulded behaviour. People were trained, observed, tested and corrected as they passed from one institution to the next.






But in the late 20th century, Deleuze saw something shifting. He thought the stodgy old disciplinary institutions were “in a generalized crisis” due to technological advances and a new form of capitalism that demanded more flexibility in workers and consumers.

New systems of management and technology were starting to reshape people without sending them through traditional institutions. Deleuze wrote presciently, for example, that “perpetual training tends to replace the school, and continuous control to replace the examination”.

In business, he saw a growing idea of “salary according to merit”, transforming work into “challenges, contests, and highly comic group sessions” – something much at odds with the old model of the standard wage and the assembly line. Traditional government institutions like hospitals and the classic factory were embracing the model of the corporation, driven always by a profit motive and the need for better human tools.

To Deleuze, all this meant people were becoming more “free-floating”: they could still play socially useful roles, but they were being gently steered into them rather than moulded by institutions. This greater freedom, however, required a new system to keep everyone in line. He called it “modulation” to underline its dynamic, enveloping nature.

Like nudging, but everywhere

Deleuze described modulation as “a self-deforming cast that will continuously change from one moment to the other”. He meant that people were beginning to live in an environment where everything shape-shifts to steer us in the desired direction without explicitly putting up walls.

A prime example of how modulation has since become commonplace is nudging – the use of psychological techniques, often subtle and data-driven, to shape people’s behaviour.

The term “nudging” barely existed in 1990, but governments and tech companies use nudges all the time now. We’re nudged to eat healthier, buy, save, recycle, donate. Websites use “dark patterns” – tricky designs that steer (or nudge) us toward certain choices. Social media feeds use algorithms to exclude us if we say the wrong thing. Entire teams of behavioural scientists operate behind the scenes to shape many aspects of our lives.

Nudges can be good and can save us from poor choices, but their newfound moral acceptability (sometimes called libertarian paternalism) is very much a clue that Deleuze’s control society has arrived.

Control in your pocket

Deleuze, who died in 1995, wrote Postscript before the advent of the smartphone, but he foresaw that an “electronic collar” would assume a central role in society. He envisaged a “computer that tracks each person’s position – licit or illicit – and effects a universal modulation.”

Smartphones more than fit the bill. In the old disciplinary manner, they track where we go, what we search for, what we buy, how many steps we take, even how well we sleep. But if we apply Deleuze’s ideas to these phones, detailed surveillance is no longer their most important function: our phones present and curate options.

In effect, they shape how we see the world. When you scroll through news or social media, for instance, you’re reading a version of the world built just for you, designed to keep you looking, clicking and reacting – and to keep you finely attuned to what counts as acceptable or dangerous behaviour.

In Deleuze’s terms, this is pure modulation: not a forceful “No” but a softly spoken, “How about this?” Your phone doesn’t lock you in – it draws you in. It shapes what you see, rewards your cooperation, ignores your silence, and always keeps score. And it does this 24/7. You might unlock it hundreds of times a day. And each time it’s updated to guide your next move more precisely.
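
To make the “keeps score” image concrete, here is a toy sketch in Python of that feedback loop: a simple epsilon-greedy bandit, a deliberately crude stand-in for real recommendation systems, which are vastly more elaborate. The topics and engagement probabilities are invented for the example.

    # Toy model of a feed that "keeps score": an epsilon-greedy bandit.
    # Purely illustrative -- real recommender systems are far more complex.
    import random

    topics = ["outrage", "cute animals", "news", "ads"]
    score = {t: 0.0 for t in topics}   # running engagement estimate
    count = {t: 0 for t in topics}
    # Invented probabilities that a shown item actually engages you.
    true_appeal = {"outrage": 0.6, "cute animals": 0.5, "news": 0.3, "ads": 0.1}

    def pick_item(epsilon=0.1):
        # Mostly serve what has worked before; occasionally explore.
        if random.random() < epsilon:
            return random.choice(topics)
        return max(topics, key=lambda t: score[t])

    def record_engagement(topic, engaged):
        # The "modulation" step: each reaction adjusts the next offer.
        count[topic] += 1
        score[topic] += (engaged - score[topic]) / count[topic]

    # Every unlock: show an item, observe the reaction, update the score.
    for unlock in range(1000):
        topic = pick_item()
        engaged = 1.0 if random.random() < true_appeal[topic] else 0.0
        record_engagement(topic, engaged)

    print(sorted(score.items(), key=lambda kv: -kv[1]))

After a thousand simulated unlocks, the feed has quietly converged on whatever you reacted to most – no wall, no “No”, just an ever-adjusting “How about this?”.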

At the same time, our phones quietly turn us into a set of credentials that regulate access to workplaces, bank accounts and information. In the societies of control, writes Deleuze, “what is important is no longer either a signature or a number, but a code: the code is a password.”

Data points not people?

Deleuze warned that, in a control society: “Individuals have become ‘dividuals,’ and masses have become samples, data, markets, or ‘banks.’” A dividual to Deleuze is a person transformed into a set of data points and metrics.

You are your credit rating, your search history, your likes and clicks – a different dataset to every institution. Such fragments are used to make decisions about you until they effectively replace you. In fact, for Deleuze a dividual has internalised this treatment and thinks of themselves as a net worth, a mortgage size, a car value – psychological anchors for control.

He illustrates this point with healthcare, predicting a

new medicine ‘without doctor or patient’ that singles out potential sick people and subjects at risk, which in no way attests to individuation.

How many health decisions are now made for us collectively before we ever see a doctor? We should be grateful for advances in public health and epidemiology, but they have certainly changed our individuality and how we are treated.

Hard to detect

An unsettling part of Deleuze’s perspective is that control doesn’t usually feel like control. It’s often dressed up as convenience, efficiency or progress. You set up internet-linked video cameras because then you can work from home. You agree to long terms and conditions because your banking app won’t work otherwise.

One problem is there are no longer clear barriers we can rail against. As Deleuze said:

In disciplinary societies one was always starting again (from school to the barracks, from the barracks to the factory), while in control societies one is never finished with anything.

Control doesn’t always crush – it can enable. Digital networks bring real freedom, economic possibility, even joy. We move more easily – both mentally and geographically – than ever before. But while we move, we do so inside a kind of invisible map shaped by capitalism.

It’s no conspiracy because nobody has the whole map. So it’s difficult to work out exactly what action, if any, to take. As Deleuze concludes: “The coils of a serpent are even more complex than the burrows of a molehill.”

So what can we do?

Postscript doesn’t offer a political program beyond the sardonic comment that:

Many young people strangely boast of being ‘motivated’ […] It’s up to them to discover what they’re being made to serve.

There are ways to resist control. Some people demand more privacy or digital rights. Others opt out selectively – logging off, turning off, refusing to be nudged. Some look to art as a way of resisting its smooth grip. These acts – however small – may offer what Deleuze and his collaborator, the French psychiatrist and philosopher Félix Guattari, called lines of flight: creative ways to move not just against control, but beyond it.

The real message of Postscript, however, is its invitation to consider a timeless perspective. Any society must have a way to make people useful. So, what kind of society do we want? What kinds of restrictions are we willing to live under? And, crucial to this current age, how explicit should control be?

The Conversation

Cameron Shackell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Computers tracking us, an ‘electronic collar’: Gilles Deleuze’s 1990 Postscript on the Societies of Control was eerily prescient – https://theconversation.com/computers-tracking-us-an-electronic-collar-gilles-deleuzes-1990-postscript-on-the-societies-of-control-was-eerily-prescient-254579