‘To my happy surprise, it grew beyond my imagination’: Robert Redford’s Sundance legacy

Source: The Conversation – Global Perspectives – By Jenny Cooney, Lecturer in Lifestyle Journalism, Monash University

Robert Redford at The Filmmakers’ Brunch during the 2005 Sundance Film Festival. George Pimentel/WireImage

When Robert Redford launched the Utah-based Sundance Institute in 1981, an independent support system for filmmakers named after his role in Butch Cassidy and the Sundance Kid (1969), he created what would transform Hollywood and become his biggest legacy.

Redford, who has died aged 89, was already a movie icon when he bought land and created a non-profit space with a mission statement “to foster independent voices, champion risky, original stories, and cultivate a community for artists to create and thrive globally”.

Starting with labs, fellowships, grants and mentoring programs for independent filmmakers, he went on to launch his own film festival in nearby Park City, Utah, in 1985.

“The labs were absolutely the most important part of Sundance and that is still the core of what we are and what we do today,” Redford reflected during my last sit-down with him, at the 2013 Toronto International Film Festival, where he was promoting his own indie, All Is Lost.

After the program had been running for five years, he told me

I realised we had succeeded in doing that much, but now there was nowhere for them to go. So, I thought, ‘well, what if we created a festival, where at least we can bring the filmmakers together to look at each other’s work and then we could create a community for them?’ And then, to my happy surprise, it grew beyond my imagination.

That’s putting it mildly. An astonishing list of filmmakers can thank Redford for their career breakthroughs. Alumni of the Sundance Institute include Bong Joon-ho (who workshopped early scripts at Sundance labs before Parasite), Chloé Zhao and Taika Waititi, who often returns as a mentor.

Three people on a stage
President and founder of Sundance Institute Robert Redford, executive director of Sundance Institute Keri Putnam and Sundance Film Festival director John Cooper during the 2018 festival.
Nicholas Hunt/Getty Images

Films that debuted at the festival include Quentin Tarantino’s Reservoir Dogs (1992), Steven Soderbergh’s Sex, Lies, and Videotape (1989), Richard Linklater’s Slacker (1991), Paul Thomas Anderson’s short film Cigarettes & Coffee (1993), Nicole Holofcener’s short film Angry (1991), Darren Aronofsky’s Pi (1998) and Damien Chazelle’s Whiplash (2014).

Australian films that recently made their Sundance debut include Noora Niasari’s Shayda (2023), Daina Reid’s Run Rabbit Run (2023) and Sophie Hyde’s Jimpa (2025).




Creating a haven

For anyone lucky enough to have attended Sundance in the early days, it was a haven for indie filmmakers. It was not uncommon to see “Bob”, as he was always known in person, walking down the main street on his way to a movie premiere or a dinner with young filmmakers eager for his advice.

Watching Redford portray Bob Woodward in the Watergate thriller All the President’s Men (1976) was one of my earliest inspirations for pursuing a career in journalism. And having nurtured a crush on him since The Sting (1973) and The Way We Were (1973), I found it hard not to be intimidated crossing paths with him in Park City.

Robert Redford and Andie MacDowell at the Sundance Film Festival in 2003.
Randall Michelson/WireImage

Bob, however, quickly made you forget the icon status. Soon, you’d just be chatting about a new filmmaker he was excited to support, or his environmental work (he served for five decades as a trustee of the non-profit Natural Resources Defense Council).

Everyone felt equal in that indie film world, and Redford was responsible for that atmosphere.

In 1994, I waited in a Main Street coffee shop for Elle Macpherson to ski off a mountain and do an interview promoting her acting role in the Australian film Sirens. Later that day, I commiserated over a hot chocolate with Hugh Grant as he complained about frostbitten toes, having worn the wrong shoes and trekked through a snowstorm to the first screening of Four Weddings and a Funeral.

In the early days, Sundance was a destination for film lovers, not hair and makeup people, inappropriately glamorous designer gowns or swag lounges.

The arrival of Hollywood

But eventually, there was no denying the clout of any film making it to Sundance, and Hollywood came knocking.

“In 1985, we only had one theatre and maybe there were four or five restaurants in town, so it was a much quieter, smaller place and over time it grew so incredibly the atmosphere changed,” Redford reflected during our interview.

Suddenly all these people came in to leverage off our festival and because we are a non-profit, we couldn’t do anything about it. We had what we called ‘ambush mongers’ coming in to sell their wares and give out swag and I’m sure there will always be those people, but we are strong enough to resist being overtaken by it.

The festival resisted but the infrastructure gave in. In 2027, the Sundance Film Festival will finally relocate to Boulder, Colorado, after a careful selection process aimed at ensuring the spirit of Sundance remains.

Redford stepped back from being the public face of the festival in 2019, dedicating himself instead to spending more time with filmmakers and their projects. But he supported the move to Colorado, saying in a statement at the time of the announcement

Words cannot express the sincere gratitude I have for Park City, the state of Utah, and all those in the Utah community that have helped to build the organization.

The spirit of Sundance lives on, but it just won’t be the same without Bob on the streets or in the movie theatres.

The Conversation

Jenny Cooney does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. ‘To my happy surprise, it grew beyond my imagination’: Robert Redford’s Sundance legacy – https://theconversation.com/to-my-happy-surprise-it-grew-beyond-my-imagination-robert-redfords-sundance-legacy-265478

A pretty face helped make Robert Redford a star. Talent and dedication kept him one

Source: The Conversation – Global Perspectives – By Daryl Sparkes, Senior Lecturer, Media Studies and Production, University of Southern Queensland

Miroslav Zajic/CORBIS/Corbis via Getty Images

Hollywood is the place where having a great face will get you far. Think Errol Flynn, James Dean, George Clooney, Brad Pitt – a handsome appearance opens acting doors.

Those good looks, the magical smile, the natural charm all became synonymous with Robert Redford, who has died aged 89.

But good looks can only get you so far. You still need the acting chops as well as the strength of character to make a real impression in the world of cinema, and in the world itself.

Redford had this all in spades.

The young actor

After a rough start in life, including the death of his mother and dropping out of college, Redford began acting at 23, on Broadway and in small roles in quality television productions such as The Untouchables (1963), Maverick (1960), Dr Kildare (1962) and The Twilight Zone (1962), all of which honed his screen presence.

He made his feature film debut with a minor role in Tall Story (1960), alongside Jane Fonda (also her debut). This started a lifelong friendship between the two. They would act in several productions together, and Fonda later admitted she had been in love with Redford her whole life.

His talent was soon recognised. He was nominated for his first Emmy in 1962 for his supporting role in the TV movie The Voice of Charlie Pont.

After this, Redford became an in-demand actor. Larger roles in film and TV soon came his way, many of them romantic leads.

Films such as Inside Daisy Clover (1965), This Property is Condemned (1966) and Barefoot in the Park (1967) portrayed Redford as the lover/husband to strong female characters, the first two with Natalie Wood, the third, again, with Fonda.

The birth of an icon

His good looks sometimes grated on Redford. They led him to refuse a role in Who’s Afraid of Virginia Woolf? (1966), and saw him turned down for the lead in The Graduate (1967). He went in search of more diverse roles.

This led to a film that didn’t just make Redford a star, but a Hollywood icon.

Butch Cassidy and the Sundance Kid (1969) featured one of the greatest actor partnerships in Hollywood history. Paul Newman was a much bigger star than Redford at the time of the movie’s release, but arguably it propelled Redford’s star beyond anyone else’s at that time.

Redford portrayed Sundance with sly wit, simmering masculinity, sardonic smartness and, well, just outright sexiness. Suddenly both teenage boys and girls had his poster on their bedroom wall. The world fell in love with him.

A poster for George Roy Hill’s 1969 film Butch Cassidy and the Sundance Kid.

Movie Poster Image Art/Getty Images

Redford was on a roll. Over the next half-decade came hit after hit, including The Candidate (1972), The Way We Were (1973) with Barbra Streisand, The Sting (1973) again with Newman, and The Great Gatsby (1974), to name but a few. Redford was cemented as the leading man du jour.

The saying “lightning never strikes twice” never reckoned on Redford. In 1976 he took on his next iconic role, alongside Dustin Hoffman in All the President’s Men.

It could be said that Hoffman, well regarded as the actor’s actor, was eclipsed by Redford in his role as Watergate journalist Bob Woodward. To me it was a travesty that neither Redford nor Hoffman was nominated for an Oscar for these roles.

By now Redford wasn’t just seen as the “pretty boy” but as a serious actor who took on more and more dramatic roles in The Electric Horseman (1979), Brubaker (1980), Out of Africa (1985) and Indecent Proposal (1993).

After more than five decades on screen, he may have had younger audiences wondering who the grizzled old man playing agent Alexander Pierce in two Marvel movies, Captain America: The Winter Soldier (2014) and Avengers: Endgame (2019), was.

A lasting legacy

Beginning in the late 1960s, Redford increasingly yearned to also be behind the camera.

As early as 1969 he took on the executive producer role in Downhill Racer.

Into the 80s he began directing. His feature directorial debut, Ordinary People (1980), won him his one and only competitive Oscar (although he was given an honorary one in 2002).

He would go on to direct and produce notable films such as The Horse Whisperer (1998), A River Runs Through It (1992) and Quiz Show (1994), among others.

He was still working as an executive producer until recently, on the TV series Dark Winds (2022–25).

Away from the cameras, Redford was widely known as a philanthropist, environmentalist and a strong supporter of American First Nations and LGBTQI+ rights.

Publicly, though, Redford will probably be most remembered for the Sundance Institute and the film festival that sprang from it.

Redford poses for a photo in front of a snow-capped mountain.
Redford at the Sundance Film Festival in Salt Lake City, Utah, in 1994.
Tom Smart/Liaison

The largest independent festival in the United States, it gave a leg up to hundreds of up-and-coming independent filmmakers over the years, including Quentin Tarantino, Robert Rodriguez, Jane Schoenbrun, Kevin Smith and Paul Thomas Anderson.

When we look back on his body of work, though, one thing becomes plainly obvious.

While Redford may have used his looks to initially open the Hollywood doors to success and fame, it was his talent and dedication to his craft that kept those doors open.

He was a versatile actor, director and producer who gave back to the industry as much as, if not more than, he took. For this, Redford was much, much more than a pretty face.

The Conversation

Daryl Sparkes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A pretty face helped make Robert Redford a star. Talent and dedication kept him one – https://theconversation.com/a-pretty-face-helped-make-robert-redford-a-star-talent-and-dedication-kept-him-one-265426

Charlie Kirk shooting suspect had ties to gaming culture and the ‘dark internet’. Here’s how they radicalise

Source: The Conversation – Global Perspectives – By Matthew Sharpe, Associate Professor in Philosophy, Australian Catholic University

Tyler Robinson, the 22-year-old Utah man suspected of having fatally shot right-wing activist Charlie Kirk, is reportedly not cooperating with authorities. Robinson was apprehended after a more than two-day manhunt and is being held without bail at the Utah County Jail.

While a motive for the shooting has yet to be established, Utah Governor Spencer Cox has highlighted Robinson’s links to gaming and the “dark internet”.

Bullet casings found at the scene were inscribed with various messages evoking gaming subcultures. One of the quotes – “Notices bulges, OwO what’s this” – can be linked to the furry community, known for role-playing using animal avatars.

Another message – “Hey, fascist! Catch! ↑ → ↓↓↓” – features arrow symbols associated with an action that allows players to drop bombs on their foes in Helldivers 2, a game in which players fight for a satirically fascist state against enemy forces.

One casing reads “O Bella ciao, Bella ciao, Bella ciao, Ciao, ciao!”, words from an Italian anti-fascist partisan song that also appears in the shooter game Far Cry 6. Yet another is a homophobic jibe: “if you read this you are gay LMAO”.

If Robinson does turn out to be a shooter radicalised through online gaming spaces, he would not be the first. Previous terrorist shootings at Christchurch (New Zealand), Halle (Germany), Bærum (Norway), and the US cities of Buffalo, El Paso and Poway were all carried out by radicalised young men who embraced online conspiracies and violent video games.

In each of these cases, the shooter attempted to live stream the atrocities (and in all but the Poway shooting, succeeded), as though emulating a first-person shooter game.

A growing online threat

The global video game market is enormous, with an estimated value of almost US$300 billion (about A$450 billion) in 2024. Of the more than three billion gamers, the largest percentage is made up of young adults aged 18–34.

Many of these are vulnerable young men. And extremist activists have long recognised this group as a demographic ripe for radicalisation.

As early as 2002, American neo-Nazi leader Matt Hale advised his followers “if we can influence video games and entertainment, it will make people understand we are their friends and neighbours”.

Since then, far-right groups have produced ethnonationalist-themed games, such as “Ethnic Cleansing” and “ZOG’s Nightmare”, in which players defend the “white race” against Islamists, immigrants, LGBTQIA+ people, Jews and more.

Studying radicalisation in gamer circles

For many, the Kirk shooting has resurfaced the perennial question about the link (or lack thereof) between playing violent video games and real-world violence.

But while this is an important line of inquiry, the evidence suggests most radicalisation takes place not through playing video games themselves, but through gaming platform communication channels.

In 2020, my colleagues and I studied an extraordinary data dump of more than nine million posts from the gaming platform Steam to understand this process.

We found evidence of radicalisation occurring through communication channels, such as team voice channels. Here, players establish connections with one another, and can leverage these connections for political recruitment.

The radicalisation of vulnerable users is not instantaneous. Once extremists have connected with potential targets, they invite them into platforms such as Discord or private chat rooms. These spaces allow for meme and image sharing, as well as ongoing voice and video conversations.

Skilful recruiters will play to a target’s specific grievances. These may be personal, psycho-sexual (such as being unable to gain love or approval), or related to divisive issues such as employment, housing or gender roles.

The recruit is initiated into a fast-changing set of cynical in-jokes and in-group terms. These may include mocking self-designations, such as the Pepe the Frog meme, used by the far-right to ironically embrace their ugly “political incorrectness”. They also use derogatory terms for “enemies”, such as “woke”, “social justice warriors”, “soyboys”, “fascists” and “cultural Marxists”.

Gradually, the new recruit becomes accustomed to the casual denigration and dehumanisation of the “enemies”.

Dark and sarcastic humour allows for plausible deniability while still spreading hate. As such, humour acts as an on-ramp, slowly introducing new recruits to the conspiratorial and violent ideologies that lie at the heart of terrorist shootings.

Generally, these ideologies claim the world is run by nefarious and super-powerful plutocrats/Jews/liberals/communists/elites, who can only be stopped through extreme measures.

It then becomes a question of resolve. Who among the group is willing to do what the ideology suggests is necessary?

What can be done?

The Australian Federal Police, as well as the Australian parliament, has recognised the threat of violence as a result of radicalisation through online gaming. Clearly, it’s something we can’t be complacent about.

Social isolation and mental illness, which are sadly as widespread in Australia as they are elsewhere, are some of the factors online extremists try to exploit when luring vulnerable individuals.

At the same time, social media algorithms function to shunt users into ever more sensational content. This is something online extremists have benefited from, and learned to exploit.

There is a growing number of organisations devoted to preventing online radicalisation through gaming platforms. Many of these have resources for concerned parents, teachers and caregivers.

Ultimately, in an increasingly online world, the best way to keep young people safe from online radicalisation is to keep having constructive offline conversations about their virtual experiences, and the people they might meet in the process.

The Conversation

Matthew Sharpe does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Charlie Kirk shooting suspect had ties to gaming culture and the ‘dark internet’. Here’s how they radicalise – https://theconversation.com/charlie-kirk-shooting-suspect-had-ties-to-gaming-culture-and-the-dark-internet-heres-how-they-radicalise-265279

Since WWII, it’s been taboo to force nations to cede land after war. Russia wants to normalise conquest again

Source: The Conversation – Global Perspectives – By Jon Richardson, Visiting Fellow, Centre for European Studies, Australian National University

A frequent question around peace talks over Russia’s invasion of Ukraine is whether Ukraine should give up land as part of an interim or final settlement.

United States President Donald Trump has often suggested this would be a natural and inevitable outcome, particularly given Ukraine has – in his view – a weak hand of “cards”. When Ukrainian President Volodymyr Zelensky visited the White House last month, Trump told him there was no getting back Crimea, which has been occupied by Russia since 2014.

Trump has jokingly described his motivation for promoting peace in Ukraine as a desire to “get to heaven”. But as the saying goes, the road to hell is paved with good intentions.

Indeed, Trump has aligned himself with many Russian officials on territorial concessions, including Foreign Minister Sergei Lavrov, who has said history has many examples of peace agreements that shift borders.

It is important to debunk this notion. Acquisition of territory through war has, in fact, been taboo since the end of the second world war and the establishment of the United Nations.

While there have been many military conflicts, there are no evident examples of a UN member country ceding recognised, independent territory to another UN member following a war or invasion.

Wars and conquest

Until the early 20th century, territorial concessions were the norm after wars, backed by all sorts of narratives about hereditary rights, ancient borders, superior civilisations, punishments for unpaid debts or simple law of the jungle.

A classic example was the Treaty of Guadalupe Hidalgo, which ended the Mexican-American War of 1846–48. Mexico was forced to cede 55% of its territory, including present-day New Mexico, Utah, Nevada, Arizona, California, Texas and western Colorado.

Mexican territory that was relinquished in the Treaty of Guadalupe Hidalgo, coloured white.
Wikimedia Commons

In a recent article, Yale academics Oona Hathaway and Scott Shapiro explain that before the first world war, shifting borders was a legally recognised means by which states resolved disputes. They calculate there were more than 150 territorial conquests around the world before 1945.

The end of the second world war saw massive border changes in Eastern Europe. Soviet leader Joseph Stalin shifted the borders of Poland hundreds of kilometres westward at the expense of Germany, while the Soviet Union swallowed swathes of eastern Poland. Italy also lost some of its pre-war territory to Yugoslavia and France.

The Soviet Union also got to keep regions it had absorbed in the wake of the 1939 Nazi-Soviet Non-Aggression Pact, including the Baltic States, Moldova, western Ukraine and parts of Finland. These changes reflected the facts on the ground and were accepted at the Yalta and Potsdam conferences.

But in the broader zeitgeist, it was time to put an end to wars of conquest. This was articulated in Article 2 of the UN Charter, which requires states to refrain from the use of force against the “territorial integrity or political independence” of any other state.

The principle was further cemented in UN Security Council Resolution 242, adopted after the 1967 Arab-Israeli Six-Day War, which affirmed the inadmissibility of acquiring territory by war.

That is why the international community has largely rejected any move towards Israeli sovereignty over the occupied Palestinian territories of the West Bank, Gaza and East Jerusalem, along with the Golan Heights. (The United States, however, accepted the latter in 2019.)

The taboo on conquest since 1945

The only successful territorial conquests broadly accepted by the international community since 1945 have been a few cases of newly independent countries in the 1960s taking over enclaves or neighbouring territory formerly held by colonial powers. This includes, for example, India taking Goa from Portugal.

But other seizures of ex-colonial territories have been broadly rejected, or at least strongly contested. The main examples are Morocco’s annexation of Western Sahara and Indonesia’s seizure of East Timor. Indonesia’s takeover of West Papua was accepted by the international community as part of a UN-mandated self-determination process, though this has since been condemned by many as deeply flawed.

South Vietnam’s ultimate takeover by the North might be regarded as a conquest, but neither Vietnam recognised the other as a separate country, seeing the conflict effectively as a continuation of civil war. Neither was a UN member.

Before Russia’s invasion of Ukraine, the most blatant attempt to conquer independent territory was Iraqi dictator Saddam Hussein’s invasion and annexation of Kuwait. This was repelled by a UN-sanctioned force.

Global opposition to Russia’s seizures

Distinct from invasions, there have been many unresolved border disputes that have occasionally flared into armed conflict. Russia, however, had no such dispute with Ukraine before its 2014 takeover of Crimea.

After the dissolution of the Soviet Union, Russia and Ukraine negotiated a border treaty to delineate their borders in precise detail. Russian President Vladimir Putin signed the treaty in 2003 and later affirmed that Russia had no territorial claim against Ukraine.

An overwhelming number of UN members have rejected Russia’s annexation of Crimea and four other regions of southeastern Ukraine.

However, the initial outrage at the invasion has weakened over time. Many countries have accused the US of a double standard, given its invasion of Iraq in 2003 (even if that didn’t involve territorial conquest). Trump’s statements about acquiring Greenland, Canada, Gaza and the Panama Canal have only further weakened confidence in US opposition to territorial conquest.

As political scientist Tanisha Fazal argues, the norm against territorial conquest risks suffering a “death of a thousand cuts”. Allowing Russia to keep parts of Ukraine could be a terminal blow.

What a lasting peace should look like

Some commentators have argued for an interim settlement under which Russia would retain control of occupied territory without Ukraine ceding it formally. A final settlement would be left to the future.

Some have called this de facto recognition of Russian annexation, but that is a misguided notion. De facto recognition implies acceptance of a new status quo, along with a return to business as usual.

The outcome of the war will only be partially about territory. Russia has imposed a brutal occupation on these regions, with widespread allegations of torture, killings, disappearances, population transfers and thefts of Ukrainian businesses and homes. Ukrainian language, culture and identity are being erased under a draconian regime.

Ukraine appears willing to accept an interim ceasefire to stop the bloodshed. But its territorial integrity should be fully supported by making clear to Russia that its invasion and occupation remain illegal and unacceptable.

This would include maintaining economic sanctions, demanding accountability for war crimes, returning property stolen from Ukrainians, and allowing Ukrainians transferred to Russia to return home. Ukraine must also be given the means to defend itself against a renewed Russian attack.

Advocates of anything less would be condoning and normalising flagrant territorial aggression. They would merit neither earthly rewards, such as Nobel Prizes, nor divine blessings.

The Conversation

Jon Richardson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Since WWII, it’s been taboo to force nations to cede land after war. Russia wants to normalise conquest again – https://theconversation.com/since-wwii-its-been-taboo-to-force-nations-to-cede-land-after-war-russia-wants-to-normalise-conquest-again-264590

Viral violent videos on social media are skewing young people’s sense of the world

Source: The Conversation – Global Perspectives – By Samuel Cornell, PhD Candidate in Public Health & Community Medicine, School of Population Health, UNSW Sydney

When news broke last week that US political influencer Charlie Kirk had been shot at an event at Utah Valley University, millions of people around the world were first alerted to it by social media before journalists had written a word.

Rather than first seeing the news on a mainstream news website, audiences had footage of the bloody and public assassination pushed directly onto their social media feeds. There were no editors deciding whether the raw footage was too distressing, and no warnings before clips auto-played.

Australia’s eSafety commissioner called on platforms to shield children from the footage, noting “all platforms have a responsibility to protect their users by quickly removing or restricting illegal harmful material”.

This is the norm in today’s media environment: extreme violence often bypasses traditional media gatekeepers and can reach millions of people, including children, instantly. This has wide-ranging impacts on young people – and on society at large.

A wide range of violence

Young people are more likely than older adults to come across violent and disturbing content online. This is partly because they are more frequent users of platforms such as TikTok, Instagram and X.

Research from 2024 from the United Kingdom suggests a majority of teenagers have seen violent videos in their feeds.

The violence young people see on social media ranges from schoolyard fights and knife attacks to war footage and terrorist attacks.

The footage is often visceral, raw and unexpected.

A wide range of harms

Seeing this kind of violent footage on social media can make some children not want to leave the house.

Research also shows engaging with distressing media can cause symptoms similar to trauma, especially if the violence feels close to our own lives.

Research shows social media is not simply a mirror of youth violence but also a vector for it, with bullying, gang violence, dating aggression, and even self-directed violence playing out online. Exposure to these harms can have a negative effect on young people’s mental health, behaviour and academic performance.

For others, violent content on social media risks “desensitisation”, where people become so used to suffering and violence they become less empathetic.

Communication scholars also point to cultivation theory – in this case, the idea that people who consume more violent content come to see the world as more dangerous than it really is.

This potentially skewed perception can influence everyday behaviour even among those who do not directly experience violence.





A long history of violence

Violence distributed by media is as old as media itself.

The ancient Greeks painted their pottery with scenes of battles and slayings. The Romans wrote about their gladiators. Some of the first photographs ever taken were of the Crimean War. And in the second world war, people went to the cinema to watch newsreels for updates on the fighting.

The Vietnam war was the first “television war” – images of violence and destruction were beamed into people’s homes for the first time. Yet television still involved editorial judgement. Footage of violence was cut, edited, narrated and contextualised.

Social media has transformed what it means to see violence as if you were there.

Now, footage of war, recorded in real time on phones or drones, is uploaded to TikTok or YouTube and shared with unprecedented immediacy. It often appears without any additional context – and often isn’t packaged any differently to a video of, say, somebody walking down the street or hanging out with friends.

War influencers have emerged – people who post updates from conflict zones, often with no editorial training, unlike war journalists. This blurs the line between reporting and spectacle. And this content spreads rapidly, reaching audiences who have often not sought it.

Israel’s military even uses war influencers to “thirst trap” social media users for propaganda purposes. A thirst trap is a deliberately eye-catching, often seductive, social media post designed to attract attention and engage users.

How to opt out of violence

There are some practical steps that can be taken to reduce your chances of encountering unwanted violent content:

  • turn off autoplay. This can prevent videos from playing unprompted

  • use mute or block filters. Platforms such as X and TikTok let you hide content with certain keywords

  • report disturbing videos or images. Flagging videos for violence can reduce how often they are promoted

  • curate your feed. Following accounts that focus on verified news can reduce exposure to random viral violence

  • take a break from social media, which isn’t as extreme as it sounds.

These actions aren’t foolproof. And the reality is that users of social media have very limited control over what they see. Algorithms still nudge users’ attention toward the sensational.

The viral videos of Kirk’s assassination highlight the failures of platforms to protect their users. Despite formal rules banning violent content, shocking videos slip through and reach users, including children.

In turn, this highlights why more stringent regulation of social media companies is urgently needed.

The Conversation

Samuel Cornell receives funding from an Australian Government Research Training Program Scholarship.

T.J. Thomson receives funding from the Australian Research Council. He is an affiliate with the ARC Centre of Excellence for Automated Decision Making & Society.

ref. Viral violent videos on social media are skewing young people’s sense of the world – https://theconversation.com/viral-violent-videos-on-social-media-are-skewing-young-peoples-sense-of-the-world-265371

Can Charlie Kirk really be considered a ‘martyr’? A Christianity historian explains

Source: The Conversation – Global Perspectives – By Jonathan L. Zecher, Associate Professor, Institute for Religion and Critical Inquiry, Australian Catholic University

Charlie Kirk: white nationalist, conservative Christian, right-wing social media personality, shooting victim, and now, a “martyr”. That is, according to his supporters.

Since Kirk’s death last week, a number of his followers from the Christian right have ascribed him the title of “martyr”. President Donald Trump himself called Kirk a “martyr for truth and freedom”.

Similarly, Rob McCoy, a pastor emeritus from California, said at a Sunday morning church service

Today, we celebrate the life of Charlie Kirk, a 31-year-old God-fearing Christian man, a husband, father of two, a patriot, a civil rights activist, and now a Christian martyr.

Looking back at the history of martyrdom offers insight into what it means for Kirk to be hailed a martyr, both for his memory, and for the future of the United States.

From witness to criminal to witness again

The term martyr emerged in ancient law courts with the Greek word martus, meaning a witness or person who gives testimony.

From their earliest days, Christians appropriated it to refer to those who testified to the gospel of Jesus Christ. The gospel of Luke even concludes with Jesus telling his disciples: “You are witnesses – martyres – of these things” (Luke 24:48).

Early Christians regularly ran afoul of Roman authorities, and were brought to court as criminals. The charges generally revolved around questionable loyalty to the Roman state and religion. Could someone worship Jesus and also offer sacrifice to the traditional gods, including the emperor or his divine spirit (his “genius”)?

Christians and Romans alike thought not. From the 2nd century onward, accounts of these trials centred on a single question: “are you a Christian?”. If the answer was “yes”, execution followed.

For local authorities, the executed person was a criminal. But for fellow Christians, they were witnesses to the truth of the gospel, and their deaths were evidence of the Christian God. They were both witness and testimony – “martyrs” in every sense.

In 2004, the scholar of early Christianity Elizabeth Castelli argued that martyrs are born only after their death. The martyr isn’t a fact, but a figure produced by the stories told about them, and the honour afforded them in ritual commemorations. A person isn’t a martyr until other people within a specific community decide they are.

To understand what makes someone a martyr, we have to ask two questions:

  1. what are they a witness to? As in, what ideal or cause led to their death and how did their death testify to it?

  2. who are they a witness for? Who tells their story and who calls them a martyr?

Boundaries and borderline cases

The history of martyrdom is also a history of debates over what kind of death “counts”, and what role martyrs play in the church.

Questionable cases have accumulated through the decades. Some “martyrs” volunteered eagerly, perhaps too eagerly.

On April 29, 304 CE, an archdeacon named Euplus stood outside the city council chamber in Catania, Sicily, shouting: “I want to die; I am a Christian”. After some discussion, the governor sentenced him to torture, and he died of his injuries. Was this martyrdom, or suicide?

Under Christian emperors from the 4th century on, soldiers who died fighting Persians (or later Arabs) also came to be called martyrs. A soldier’s death is especially considered martyrdom if they fought against members of a different religion.

However, the soldier-martyr label has also raised anxieties. The most recent example is the troubling claim by Russian Orthodox Patriarch Kirill that Russian soldiers who die fighting in Ukraine are martyrs – despite fighting fellow Orthodox Christians. What do these soldiers testify to?

The stories of martyrs define community borders. Those who kill martyrs tend to be treated as enemies of the faith, whether they are Roman authorities, enemy combatants, or even people assumed to be complicit in the event.

The MAGA martyr

Let’s apply the two questions above to Charlie Kirk, who has been dubbed both “martyr” and “patron saint of MAGA”.

What would Kirk be a martyr to? To his supporters and those on the MAGA right, he died for free speech, for Judeo-Christian values, for a commitment to “Western civilisation”, and supposedly for the “truth” itself.

To others, especially those he attacked and denigrated publicly – such as queer and trans people, immigrants, Muslims and feminists – he died for white nationalism, hatred and exclusion.

This takes us back to the second question: who is Charlie Kirk a martyr for? Clearly, the answer to this is Christian nationalists, MAGA supporters and the broader American right.

He testified in life to their shared beliefs and values, and in death is their “patron saint”. The legacy of Kirk’s death will be to define who is part of this community, and who is excluded. The question then is, will a division framed in such polarising terms come to define American society as a whole?

From revenge to love

Following Kirk’s death, people on the far-right called for violent revenge against the left – even though the shooting suspect’s political motivations are unknown.

Media have reported a surge in radicalisation on right-wing platforms. There was even a website, now removed, dedicated to doxxing anyone who spoke negatively about Kirk and using that information to get them fired.

Against this rhetoric of revenge, the history of martyrdom offers a different way forward. The early theologian Clement of Alexandria said someone becomes a martyr not because of their death, but because of their love.

The only true witness, he argued, is love, because God is love. The only honour one can offer the martyrs is to love as they loved. Clement suggests it’s possible to reject vengeance and sectarianism, even if one loves the martyrs.

The Conversation

Jonathan L. Zecher does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Can Charlie Kirk really be considered a ‘martyr’? A Christianity historian explains – https://theconversation.com/can-charlie-kirk-really-be-considered-a-martyr-a-christianity-historian-explains-265283

Ukraine is starting to think about memorials – a tricky task during an ongoing war

Source: The Conversation – Global Perspectives – By Kerry Whigham, Associate Professor of Genocide and Mass Atrocity Prevention, Binghamton University, State University of New York

Three and a half years after Russia invaded Ukraine, there are few immediate signs of a cessation of hostilities. Yet amid the steady toll of front-line fighting and near-daily Russian airstrikes, Ukrainians are already considering how to remember the tens of thousands of lives lost over the course of this conflict.

A spontaneous memorial of flags and photographs already exists and grows daily, having first sprung up in 2022 in Kyiv’s Independence Square. Now, government and civil society groups have begun conversations on how such acts of commemoration can be made more permanent through monuments and memorials across the country.

As a scholar of public memory and how societies remember large-scale violence and mass atrocities, I study and support the work of governments and organizations developing memory sites around the world. As Ukraine negotiates its own related challenges, lessons from research on how memorials have changed and the role they can play in post-violence societies can help guide these processes.

The long history of remembering war dead

The impulse to create public monuments to remember collective death, like war, is millennia old. The first known war memorial dates to over 4,000 years ago in modern-day Syria. Obelisks and triumphal arches that dotted ancient Egypt and ancient Rome have served similar purposes.

As societies have progressed and architectural tastes have changed, so too have war monuments. Still, there are some underlying traits that have remained relatively consistent for thousands of years.

Traditionally, war memorials used monumental architecture to remember those who died during conflict. Typically, they were aimed at honoring soldiers who died fighting for their country. The monuments framed the death of soldiers as a sacrifice for a higher cause, often using larger-than-life architectural elements and materials like marble and granite to convey a sense of both grandeur and memory permanence.

In that traditional vein of glorification, war memorials typically feature recognizable symbols, like sculptures of soldiers and inscriptions with names or information. By honoring the soldiers who died fighting a war, the monuments also legitimize the war and the state that waged it, marking it as a cause worthy of dying for. In this way, such war memorials are not only about revering soldiers but also venerating the nation-state.

That all started to change following World War I, however. The scale of destruction and death was so widespread and total that countries began erecting war memorials depicting soldiers with their faces downcast or their bodies tired. The sacrifices of the soldiers were still framed as valiant, but the monuments also revealed a war weariness not present in earlier memorials.

At the same time, communist countries in Eastern Europe and the Soviet Union maintained the tradition of using memorials to celebrate the state. In Soviet-era Ukraine, for instance, the 335-foot-tall Mother Ukraine was erected to tower over Kyiv as a monument to World War II.

A picture of a large statue.
The Mother Ukraine monument in Kyiv.
Igor Golovniov/SOPA Images/LightRocket via Getty Images

Memorializing the horrors of the 20th century

Outside of the former Soviet Union and Eastern Bloc, however, the horrors of World War II completely transformed the way societies memorialized collective death. This is largely because the deaths that needed commemoration were not only those of soldiers but of the millions of civilians murdered by the Nazi regime, especially European Jews.

Indeed, the Holocaust changed everything about the way the world memorializes large-scale death. The architectural language of the war memorial was completely insufficient for remembering the victims of genocide. They did not sacrifice themselves to the glory of the nation, but instead were slaughtered by governmental leaders.

As a result and over time, memorials focused far less on monumental forms and realistic imagery glorifying the state and opted for abstract and immersive styles intended to invoke a sense of loss and a commitment to preventing future violence. These memorials to victims of genocides and other atrocities also respond to an increasingly recognized “right to memory,” as victims demand acknowledgment of the trauma they have experienced.

One of the most influential examples of how memorialization has changed is the Memorial to the Murdered Jews of Europe in Berlin. Designed by American architect Peter Eisenman and inaugurated in 2005, it features over 2,700 concrete columns arranged in a grid over almost 5 acres of land in central Berlin. Visitors are invited to walk through the grid-work of columns, which are meant to evoke an emotional response in visitors.

Echoes of this abstract and immersive space can now be seen in numerous other memorials to collective death globally, including the National Memorial for Peace and Justice in Montgomery, Alabama, which memorializes the Black victims of racial terror lynchings in the United States, and the Parque de la Memoria in Buenos Aires, Argentina, which remembers the thousands of people disappeared in the 1970s and 1980s during a military dictatorship.

A monument of stone pieces.
The morning light illuminates the Memorial to the Murdered Jews of Europe, or the Holocaust Memorial, in Berlin.
AP Photo/Markus Schreiber

Ukraine and memorializing the present

As Ukrainians begin the process of determining how best to commemorate their own recent losses, they face some notable challenges.

For one, Ukraine has lost both soldiers, who have died fighting for their country, and civilians, killed in attacks by invading Russian forces. Can and should these losses be memorialized together? Or should there be separate memorials for those who died on the battlefield and those killed in atrocities, like the massacre in Bucha in March 2022, which saw the killing, torture and rape of hundreds of civilians, including children, by Russian forces?

Ukraine is not without experience in memorializing both war and atrocity. Many of its war memorials were constructed during the Soviet period, however, so they tend to utilize the socialist realism style that characterizes most communist-era monuments. But Ukraine has also experienced atrocities, such as the Holodomor, the human-made famine implemented by Josef Stalin in the 1930s that led to the deaths of millions of Ukrainians. The Memorial in Commemoration of the Holodomor-Genocide in Ukraine opened in Kyiv in 2008, 75 years after the Holodomor began.

But determining how to memorialize more recent violence can be a challenge. Memorials serve to literally and figuratively concretize memory. But memory — that is, the story a society tells itself about its past and its impact on the present and future — evolves over time. Communities of victims may desire a memorial as a recognition of the harms that they have suffered, and this can indeed be an important step in symbolically repairing those damages.

But it may be difficult to get a full “picture” of the story a memorial should tell while violence is ongoing. The victim count is increasing every day. And now there is also some pushback within Ukraine against the way President Volodymyr Zelenskyy is governing and approaching issues of internal corruption.

Ukrainians had 75 years to determine how they wanted to relate to and remember the Holodomor. With so much uncertainty, any memorial built now to the current war may need to be reconsidered in the very near future as government officials, victim groups and other stakeholders continue to discuss how they want to remember this violence.

Soldiers stand before a memorial.
Senior members of the Ukrainian military establishment leave vigil lanterns at the Bitter Memory of Childhood monument at the National Museum of the Holodomor-Genocide in Kyiv.
Kirill Chubotin/Ukrinform/Future Publishing via Getty Images

Today, many experts and practitioners advocate for conversations on memorialization to take place alongside other processes that societies undergo to deal with histories of violence and human rights abuses. Often labeled “transitional justice,” these processes of truth-seeking, justice, reparations and reform can complement processes of memorialization. Engaging actively with all the consequences of past violence can be crucial in developing a consensus on how to remember that violence and educate future generations about it.

Undertaking such tasks while violence is ongoing, however, can be difficult, if not impossible. The underlying instability caused by war, along with the uncertainty around what the future will bring, leaves so many open questions that it may be too soon to start answering them. That said, the groundwork can be laid now so that these processes can begin as quickly as possible once the war finally comes to an end.

Ukrainians are understandably ready to move forward and deal with the repercussions of this horrific violence. But building a memorial will not, in itself, mark the end of the conflict and, as such, may be putting the cart before the horse. Victims have a right to memory, but they first and foremost have a right to peace. The picture of what story should be told through public memorials and monuments will become clearer once it is not so obscured by the fog of war.

The Conversation

Kerry Whigham is affiliated with the Auschwitz Institute for the Prevention of Genocide and Mass Atrocities, an international non-governmental organization that works on atrocity prevention.

ref. Ukraine is starting to think about memorials – a tricky task during an ongoing war – https://theconversation.com/ukraine-is-starting-to-think-about-memorials-a-tricky-task-during-an-ongoing-war-263598

How hardships and hashtags combined to fuel Nepal’s violent response to social media ban

Source: The Conversation – Global Perspectives – By Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

Riot police fire tear gas into crowds of demonstrators in Kathmandu on Sept. 8, 2025. Prabin Ranabhat/AFP via Getty Images

Days of unrest in Nepal have resulted in the ousting of a deeply unpopular government and the deaths of at least 50 people.

The Gen Z-led protests – so-called due to the predominance of young Nepalese among the demonstrators – appeared to have quieted down with the appointment of a new interim leader on Sept. 12, 2025, and the announcement of early elections.

But the protests leave behind dozens of burned government offices, destroyed business centers and financial losses estimated in the billions of dollars.

The experience has also underscored the importance of social media in Nepal, as well as the consequences of government attempts to control the flow of online information.

I study the economic, social and political impacts of social media and other emerging technologies. Being based in Kathmandu, I have watched firsthand as what began as a protest over a short-lived ban on social media snowballed into something far greater, leading to the toppling of Prime Minister K.P. Sharma Oli.

Indeed, social media has played a crucial role in this ongoing turmoil in two ways. First, the government’s decision on Sept. 4 to ban social platforms served as the immediate catalyst to the unrest. It provoked anger among a generation for whom digital spaces are central not only to communication, identity and political expression, but also to education and economic opportunities.

And second, the pervasive use of these platforms primed the nation’s youth for this moment of protest. It heightened Gen Z’s awareness of the country’s entrenched social, economic and political problems. By sharing stories of corruption, privilege and inequality, social media not only informed but also galvanized Nepal’s youth, motivating collective mobilization against the country’s systemic injustice.

The role of social media

As with many other nations, social media is central to daily life and commerce in Nepal, a landlocked nation of 30 million people situated between two Asian giants: China and India.

As of January 2025, just short of half the population had social media accounts. This includes some 13.5 million active Facebook users, 3.6 million Instagram users, 1.5 million LinkedIn users and 466,100 X users.

Indeed, social media platforms drive roughly 80% of total online traffic in the country and serve as vital channels for business and communication. Many users in Nepal depend on these platforms to run and promote their businesses.

As such, the government’s decision to block 26 social media platforms sparked immediate concern among the Nepalese public.

The move wasn’t completely out of the blue. Nepal’s government has long been concerned over the growth of social media platforms.

In November 2023, the Ministry of Communication and Information Technology introduced new social media regulations, requiring platforms to register with the government, set up a local contact point, appoint a grievance officer and designate an oversight official. Platforms were also obliged to cooperate in criminal investigations, remove illegal content and comply with Nepali law.

The Nepalese government, citing concerns over fake accounts, hate speech, disinformation and fraud, said the measures were to ensure accountability and make operators responsible for content on their platforms. Then, in January 2025, the government introduced a Social Media Bill that placed further requirements on social media platforms.

Censorship concerns

Regardless of their intent, these government measures sparked immediate civil liberties concerns. Critics and rights groups argued that both the ban and the bill function as tools for censorship, threatening freedom of expression, press freedom and fundamental rights.

Ncell, Nepal’s second-largest telecommunications service provider, noted that shutting down all platforms at once was, in any case, technically difficult and warned that the move would severely impact business. Small business owners, who rely on social media to promote and sell their products, were especially worried with a busy festive season looming.

The ban also had significant implications for education. Many students rely on social media platforms to access online classes, research materials and collaborative learning tools. More generally, the Nepalese public criticized the disproportionate impact of the government’s measures on ordinary users.

As such, this deep reliance on social media by Nepalese society turned the ban into a flashpoint for public dissent.

The rise of #NepoKids

Even before the protests began on Sept. 8, the pervasive use of social media, along with exposure to content showcasing inequality and elite privilege, had heightened Gen Z’s awareness of Nepal’s entrenched social, economic and political problems.

A few weeks before the protests began, the hashtags #NepoBaby and #NepoKids began trending, fueled by viral videos of politicians’ lavish lifestyles.

The content drew attention to the country’s inequality by contrasting the lives of the children of the country’s elite – with designer clothing and foreign vacations – with images of Nepali migrant workers returning home in coffins from dangerous jobs abroad.

The hashtag campaigns gained traction on TikTok and Reddit, leading to calls for asset investigations, anti-corruption reforms and even transferring the assets of the wealthy to public ownership.

One particularly notable viral video featured the son of a provincial government minister posing in front of a tree made from boxes of luxury labels including Louis Vuitton, Cartier and Gucci.

Such posts served to further fuel public outrage over perceived elite privilege.

The immediacy and interactivity of social media platforms amplified the outrage, encouraging group mobilization. In this way, social media acted both as a magnifier and accelerator, linking perceived injustice to on-the-ground activism and shaping how the movement unfolded even before the Sept. 8 protests began.

Flames are seen coming out of a large white building.
Fire rages through the Singha Durbar, the main administrative building for Nepal’s government, in Kathmandu on Sept. 9, 2025.
Prabin Ranabhat/AFP via Getty Images

A deeper story of hardship and corruption

Yet a social media campaign is nothing without a root cause to shine a light on.

Economic insecurity and political corruption have for years left many of Nepal’s youth frustrated, setting the stage for today’s protest movement. While the overall unemployment rate in 2024 was 11%, the youth unemployment rate stood significantly higher at 21%.

But these figures only scratch the surface of Nepal’s deep economic problems, which include pervasive vulnerable employment – informal and insecure work that is prone to poor conditions and pay – and limited opportunities that constrain long-term productivity.

Between 2010 and 2018, fewer than half of new entrants into the workforce secured formal, stable jobs; the remainder were primarily engaged in informal or precarious work, which often lacked consistent income, benefits or legal protections. Most available positions are informal, poorly compensated and offer little stability or room for career growth.

All told, children born in Nepal today face a grim economic reality. By age 18, they are likely to achieve only about 51% of their productivity potential – that is, the maximum economic output they could reach if they had full access to quality health, nutrition and education.

Meanwhile, corruption is widespread. In 2024, Nepal ranked 107th out of 180 countries on Transparency International’s Corruption Perceptions Index, with 84% of people perceiving government corruption to be a major problem.

An upshot of corruption is the growing influence of Nepal’s politically connected business elite, who shape laws and regulations to benefit themselves. In the process, they secure tax breaks, inflate budgets and create monopolies that block competition.

This capture of public policy by an entrenched elite stifles economic growth, crowds out genuine entrepreneurs and exacerbates inequality, while basic public services remain inadequate.

Combined, these economic and political pressures created fertile ground for social mobilization. While persistent hardships helped fuel the rise of the #NepoKids movement, it was social media that gave voice to Nepali youths’ frustration.

When the government attempted to silence them through a ban on social media platforms, it proved to be a step too far.

The Conversation

Nir Kshetri does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How hardships and hashtags combined to fuel Nepal’s violent response to social media ban – https://theconversation.com/how-hardships-and-hashtags-combined-to-fuel-nepals-violent-response-to-social-media-ban-264932

12,000-year-old smoked mummies reveal world’s earliest evidence of human mummification

Source: The Conversation – Global Perspectives – By Hsiao-chun Hung, Senior Research Fellow, School of Culture, History & Language, Australian National University

A middle-aged woman, discovered in a tightly flexed position at the Liyupo site in southern China, preserved through smoked mummification.
Hsiao-chun Hung

Smoke-drying mummification of human remains was practised by hunter-gatherers across southern China, southeast Asia and beyond as far back as 12,000 years ago, my colleagues and I report in new research published today in the Proceedings of the National Academy of Sciences.

This is the earliest known evidence of mummification anywhere in the world, far older than better-known examples from ancient Egypt and South America.

We studied remains from sites dated to between 12,000 and 4,000 years ago, but the tradition never vanished completely. It persisted into modern times in parts of the New Guinea Highlands and Australia.

Hunter-gatherer burials in southern China and Southeast Asia

In southern China and Southeast Asia, tightly crouched or squatting burials are a hallmark of the hunter-gatherers who inhabited the region between roughly 20,000 and 4,000 years ago.

Archaeologists working across the region have long classified these graves as straightforward “primary burials”. This means the body was laid to rest intact in a single ceremony.

Map of southern China and southeast Asia with 95 locations marked.
Hunter-gatherer burials in a crouched or squatting posture have been found across southern China and southeast Asia.
Hung et al. / PNAS

However, our colleague Hirofumi Matsumura, an experienced physical anthropologist and anatomist, noticed some skeletons were arranged in ways that defied anatomical sense.

Adding to this observation, we often saw that some bones in these bodies were partly burnt. The signs of burning, such as charring, were visible mainly in parts of the body with less muscle mass and thinner soft tissue coverage.

We began to wonder if perhaps the deceased were treated through a more complicated process than simple burial.

A casual conversation in the field

A turning point came in September 2017, during a short break from our excavation at the Bau Du site in central Vietnam.

The late Kim Dung Nguyen highlighted the difficulty of interpreting skeletons that had likely been intentionally placed, seated against large rocks. Matsumura noted problems with their bone positions.

People digging at an archaeological site.
The team excavating an ancient hunter-gatherer cemetery in Guangxi, southern China.
Hsiao-chun Hung

I remember blurting out – half joking but genuinely curious – “Could these burials be similar to the smoked mummies of Papua New Guinea?”

Matsumura took the idea seriously. Thanks to generous support and cooperation from many colleagues, that moment marked the real beginning of our research into this mystery.

How we identified the ancient smoked mummies

With our curiosity piqued, we began looking at photographs of modern smoke-dried mummification practices in the New Guinea Highlands in books and on the internet.

In January 2019, we went to Wamena in Papua (Indonesia) to observe several modern smoked mummies kept in private households. The similarity to our ancient remains was striking. But most of the skeletons in our excavation showed no outwardly obvious signs of burning.

A dressed and mummified body in a crouching posture.
A modern smoke-dried mummy kept in Pumo Village, Papua (Indonesia).
Hsiao-chun Hung

We realised we needed a scientific test to prove our hypothesis. If a body was smoked by low-temperature fire – while still protected by skin, muscle and tissue – the bones would not be obviously blackened. But they could still retain subtle signs or microscopic traces of past firing or smoking.

Then came the COVID pandemic, whose travel restrictions prevented us from travelling anywhere. My colleagues and I were spread across different regions, but we sought various ways to continue the project.

Eventually, we tested bones from 54 burials across 11 sites using two independent laboratory techniques: X-ray diffraction and Fourier-transform infrared spectroscopy. These methods can detect microscopic changes in the structure of bone material caused by exposure to heat.

The results confirmed the remains had been exposed to low heat. In other words, almost all of them had been smoked.
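
To give a concrete sense of how such measurements can be turned into a heated-or-not verdict, here is a minimal illustrative sketch in Python. It uses the common idea of an infrared “splitting factor” – a crystallinity index computed from phosphate absorbance bands of bone apatite, which rises with heating. The band positions follow common practice, but the threshold and readings below are hypothetical examples, not our actual laboratory protocol or data.

# Minimal sketch: screening bone FTIR spectra with an infrared splitting factor.
# Heating increases the crystallinity of bone mineral, which raises the
# splitting factor IRSF = (A565 + A605) / A595, computed from absorbance
# at the phosphate bands near 565, 605 and 595 cm^-1.
# All values below are hypothetical illustrations.

def splitting_factor(absorbance: dict) -> float:
    return (absorbance[565] + absorbance[605]) / absorbance[595]

samples = {
    "unheated_reference": {565: 0.80, 605: 0.70, 595: 0.55},
    "burial_sample":      {565: 0.95, 605: 0.88, 595: 0.45},
}

HEAT_THRESHOLD = 3.3  # illustrative cut-off; real work calibrates per assemblage

for name, spectrum in samples.items():
    irsf = splitting_factor(spectrum)
    verdict = "heat-exposed" if irsf > HEAT_THRESHOLD else "no clear heating signal"
    print(f"{name}: IRSF = {irsf:.2f} -> {verdict}")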

More than 10,000 years of ritual

The samples, discovered in southern China, Vietnam and Indonesia, represent the oldest known examples of mummification. They are far older than the well-known practices of the Chinchorro culture in northern Chile (about 7,000 years ago) and even ancient Egypt’s Old Kingdom (about 4,500 years ago).

Remarkably, this burial practice was common across East Asia, and likely also in Japan. It may date back more than 20,000 years in Southeast Asia.

It continued until around 4,000 years ago, when new ways of life began to take hold. Our research reveals a unique blend of technique, tradition and belief. This cultural practice has endured for thousands of years and spread across a very broad region.

A visible form bridging time and memory

Ethnographic records show this tradition survived in southern Australia well into the late 19th and early 20th centuries.

In the New Guinea Highlands, some communities have even kept the practice alive into recent times. Significantly, the hunter-gatherer groups of southern China and Southeast Asia were closely connected to Indigenous peoples of New Guinea and Australia, both in some physical attributes and in their genetic ancestry.

In both southern Australia and Papua New Guinea, ethnographic records show that preparing a single smoked mummy could take as long as three months of continuous care. Such extraordinary devotion was possible only through deep love and powerful spiritual belief.

This tradition echoes a truth as old as humanity itself: the timeless longing that families and loved ones might remain bound together forever – carried across the ages, in whatever form that togetherness may endure.

The Conversation

Hsiao-chun Hung receives funding from the Australian Research Council (DP140100384, DP190101839).

ref. 12,000-year-old smoked mummies reveal world’s earliest evidence of human mummification – https://theconversation.com/12-000-year-old-smoked-mummies-reveal-worlds-earliest-evidence-of-human-mummification-265261

Volcanoes can help us untangle the evolution of humans – here’s how

Source: The Conversation – Global Perspectives – By Saini Samim, PhD Candidate, School of Geography, Earth and Atmospheric Sciences, The University of Melbourne

NASA’s Earth Observatory

How did humans become human? Understanding when, where and in what environmental conditions our early ancestors lived is central to solving the puzzle of human evolution.

Unfortunately, pinning down a timeline of early human evolution has long been difficult – but ancient volcanic eruptions in East Africa may hold the key.

Our new study, published in the Proceedings of the National Academy of Sciences, refines what we know about volcanic ash layers in the Turkana Basin, Kenya – a region that has yielded many early human fossils.

We have provided high-precision age estimates, taking a small step closer to establishing a more refined timeframe of human evolution.

Millions of years of volcanic eruptions

The Great Rift Valley in East Africa is home to several world-renowned fossil sites. Of these, the Turkana Basin is arguably the most important region for early human origins research.

This region is also within an active tectonic plate boundary – a continental rift – that has triggered volcanic eruptions over millions of years.

As early humans and their hominin ancestors walked these Rift Valley landscapes, volcanic eruptions frequently blanketed the land in ash particles, burying their remains.

Over time, many fossil layers have become sandwiched between volcanic ash layers. For archaeologists today, these layers are invaluable as geological time stamps, sometimes across vast regions.

Excellent timekeepers

Volcanic eruptions are excellent timekeepers because they happen very quickly, geologically speaking. As hot magma erupts, it cools and solidifies into volcanic ash particles and pumice rocks.

Pumice often contains crystals (minerals called feldspars) which act as natural “stopwatches”. These crystals can be directly dated using radiometric dating.

By dating the ash layers that lie directly above and below fossil finds, we can establish the age of the fossils themselves.
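
To illustrate the arithmetic behind such a “stopwatch”, consider ⁴⁰Ar/³⁹Ar dating, a radiometric method widely used on volcanic feldspars. The short Python sketch below applies the standard ⁴⁰Ar/³⁹Ar age equation; the measured ratio and irradiation parameter are hypothetical numbers chosen only to show how an age falls out, not values from our study.

import math

# Standard 40Ar/39Ar age equation: t = (1/lambda) * ln(1 + J * R), where
# R is the measured radiogenic 40Ar / 39Ar ratio, J is an irradiation
# parameter calibrated against a mineral standard, and lambda is the
# total decay constant of 40K. Inputs below are hypothetical.

LAMBDA_40K = 5.543e-10  # total 40K decay constant, per year

def ar_ar_age_years(R: float, J: float) -> float:
    return math.log(1.0 + J * R) / LAMBDA_40K

age = ar_ar_age_years(R=0.90, J=0.001)  # hypothetical measurement
print(f"Apparent age: {age / 1e6:.2f} million years")  # about 1.62 Ma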

Volcanic ash layer (Lower Nariokotome tuff) with an embedded pumice at the famous palaeoanthropological site where the most complete Homo erectus skeleton, the Nariokotome Boy, was found in West Turkana.
Saini Samim

Even when such minerals are absent, volcanic ash layers can still help in dating archaeological sites. That’s because ash particles from different eruptions have unique chemical signatures.

This distinct geochemical “fingerprint” means we can trace a particular eruption across large distances. We can then assign an age to the ash layer even without datable crystals.

For instance, an ash layer found in Ethiopia, or even on the ocean floor, can be matched to one in Kenya. As long as their chemical compositions match, we know they came from the same eruption at the same geological point in time. This approach has been applied in the region for many decades.
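
To make the matching idea concrete, here is a minimal Python sketch of one simple way such a comparison could be framed: normalise each sample’s major-element oxide analysis and flag pairs that sit very close together. The oxide values and tolerance are invented illustrations of the principle, not our data or our actual correlation procedure.

import math

# Minimal sketch: comparing ash samples by major-element "fingerprint".
# Compositions are oxide weight percentages; samples from the same eruption
# should be near-identical after normalisation. All numbers are hypothetical.

def normalise(comp: dict) -> dict:
    total = sum(comp.values())
    return {oxide: 100.0 * value / total for oxide, value in comp.items()}

def distance(a: dict, b: dict) -> float:
    a, b = normalise(a), normalise(b)
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

kenya_ash    = {"SiO2": 72.1, "Al2O3": 11.0, "FeO": 4.3, "CaO": 1.1, "K2O": 4.6}
ethiopia_ash = {"SiO2": 72.4, "Al2O3": 10.8, "FeO": 4.4, "CaO": 1.0, "K2O": 4.5}

MATCH_TOLERANCE = 1.0  # illustrative threshold, in normalised wt% units

d = distance(kenya_ash, ethiopia_ash)
print(f"distance = {d:.2f}:",
      "likely same eruption" if d < MATCH_TOLERANCE else "different eruptions")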

Previous landmark studies have already established the geology of the Turkana Basin.

However, the region’s frequent eruptions are often separated by just a few thousand years. This makes many ash layers essentially indistinguishable in time. Furthermore, some ash layers have very similar “fingerprints”, making it difficult to confidently tell them apart.

These challenges have made it tricky to date the Nariokotome tuffs, three volcanic ash layers in the Turkana Basin. While it’s clear from the rock record these are three separate ash layers, their age estimates and chemical signatures are very similar. We set out to narrow them down.

The Nariokotome Tuff Complex, showing several ash layers at the Nariokotome Boy palaeoanthropological site, West Turkana.
Hayden Dalton

What did we find?

Compared to previous methods, modern dating tools can achieve an order-of-magnitude improvement in precision.

In other words, we can now confidently distinguish volcanic ash layers that erupted within just 1,000 to 2,000 years of each other. Applying this high-precision method to the Nariokotome tuffs, we resolved them as three distinct volcanic events, each with a precise eruption date.

However, determining the ages is not enough to fully distinguish these volcanic layers. Because the ash layers landed so close together in time – and potentially from very similar volcanoes – they also have remarkably similar major element geochemical “fingerprints”. Major elements are the most abundant elements in rocks, but they can’t always tell us much about the age and source of the rock material.

That’s where trace elements prove especially useful. These are elements that occur in very small amounts in rocks but provide much more distinctive chemical signatures.

Using laser-based mass spectrometry, we analysed the trace element composition of both the ash particles and their associated pumices. This provided us with unique trace-element fingerprints for each layer – still similar, but distinct.
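
As a toy illustration of why trace elements succeed where major elements struggle, the sketch below compares hypothetical layers whose major-element chemistry would overlap but whose trace-element ratios separate cleanly. Every number is invented for illustration and bears no relation to our measurements.

# Minimal sketch: trace-element ratios (concentrations in ppm) as fingerprints.
# Ratios such as Zr/Nb and Ba/Rb are insensitive to dilution, so they can
# distinguish layers that look identical in major elements. Hypothetical data.

layers = {
    "tuff_A": {"Zr": 410.0, "Nb": 55.0, "Ba": 120.0, "Rb": 150.0},
    "tuff_B": {"Zr": 480.0, "Nb": 52.0, "Ba": 95.0,  "Rb": 160.0},
    "tuff_C": {"Zr": 395.0, "Nb": 61.0, "Ba": 140.0, "Rb": 135.0},
}

for name, ppm in layers.items():
    print(f"{name}: Zr/Nb = {ppm['Zr'] / ppm['Nb']:.2f}, "
          f"Ba/Rb = {ppm['Ba'] / ppm['Rb']:.2f}")

# Each layer lands in a distinct spot in Zr/Nb vs Ba/Rb space, even though
# (in this invented example) their major-element analyses would overlap.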

Retracing human history

Once we had both precise age estimates and distinct geochemical profiles, we traced these ash layers in key archaeological sites.

For instance, the Nadung’a site in West Turkana, believed to be a prehistoric butchering site, has yielded some 7,000 stone tools. Our updated age estimates now make this site approximately 30,000 years older than previously thought.

More importantly, we showed these refined methods can be applied beyond Kenya. We traced ash layers of equivalent ages from Kenya to the Konso Formation in Ethiopia, showing they came from three individual eruptions whose material spread across large distances.

The Nariokotome tuffs are an important case study showing the power of combining high-precision dating with detailed geochemical fingerprinting. As we apply these techniques to more ash layers, both within the Turkana Basin and potentially beyond Kenya, we’ll have a better understanding of key questions in human evolution.

Did new tool technologies and species emerge gradually or suddenly? Did more than one hominin species exist simultaneously? How did shifting environments, climate and frequent volcanism affect early human evolution?

Now that we have precise geological timelines for the places where these artefacts were found, we’re a step closer to answering these long-standing questions about early humankind.


The authors would like to acknowledge the contributions of David Phillips and Janet Hergt to this article.

The Conversation

Saini Samim receives funding from the Melbourne Research Scholarship provided by the University of Melbourne. She has also received funding from the Australian Research Council and the Turkana Basin Institute for this project.

Hayden Dalton receives funding from The Turkana Basin Institute via a Proof of Concept Research Grant (TBI030).

ref. Volcanoes can help us untangle the evolution of humans – here’s how – https://theconversation.com/volcanoes-can-help-us-untangle-the-evolution-of-humans-heres-how-255013