Blue Origin’s New Glenn rocket lifted off for its second orbital flight on Nov. 13, 2025. AP Photo/John Raoux
Blue Origin’s New Glenn rocket successfully made its way to orbit for the second time on Nov. 13, 2025. Although the second launch is never as flashy as the first, this mission is still significant in several ways.
For one, it launched a pair of NASA spacecraft named ESCAPADE, which are headed to Mars orbit to study that planet’s magnetic environment and atmosphere. The twin spacecraft will first travel to a Lagrange point, a region where the gravitational pulls of the Earth and the Sun balance. The ESCAPADE spacecraft will remain there until Earth and Mars are better aligned for the journey.
For another, and importantly for Blue Origin, New Glenn’s first stage booster successfully returned to Earth and landed on a barge at sea. This landing allows the booster to be reused, substantially reducing the cost of getting to space.
Blue Origin launched its New Glenn rocket and landed the booster on a barge at sea on Nov. 13, 2025.
As a space policy expert, I see this launch as a positive development for the commercial space industry. Even though SpaceX has pioneered this form of launch and reuse, New Glenn’s capabilities are just as important.
New Glenn in context
Although Blue Origin would seem to be following in SpaceX’s footsteps with New Glenn, there are significant differences between the two companies and their rockets.
For most launches today, the rocket consists of several parts. The first stage helps propel the rocket and its spacecraft toward space and then drops away when its fuel is used up. A second stage then takes over, propelling the payload all the way to orbit.
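To see why rockets are built in stages, it helps to recall the standard rocket equation from textbook physics (background the article itself doesn’t spell out): the speed a stage can add depends on the ratio of its full to empty mass, so dropping spent hardware lets the remaining stage get far more out of its fuel.

$$\Delta v = v_e \ln\frac{m_0}{m_f}$$

Here $v_e$ is the engine’s effective exhaust velocity, $m_0$ the mass before the burn and $m_f$ the mass once the propellant is spent.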
While both New Glenn and Falcon Heavy, SpaceX’s most powerful rocket currently available, are partially reusable, New Glenn is taller, more powerful and can carry a greater amount of payload to orbit.
Blue Origin plans to use New Glenn for a variety of missions for customers such as NASA, Amazon and others. These will include missions to Earth’s orbit and eventually to the Moon to support Blue Origin’s own lunar and space exploration goals, as well as NASA’s.
NASA’s Artemis program, which endeavors to return humans to the Moon, is where New Glenn may become important. In the past several months, several space policy leaders, as well as NASA officials, have expressed concern that Artemis is progressing too slowly. If Artemis stagnates, China may have the opportunity to leap ahead and beat NASA and its partners to the lunar south pole.
These concerns stem from problems with two rockets that could potentially bring Americans back to the Moon: the Space Launch System and SpaceX’s Starship. NASA’s Space Launch System, which will launch astronauts on its Orion crew vehicle, has been criticized as too complex and costly. SpaceX’s Starship is important because NASA plans to use it to land humans on the Moon during the Artemis III mission. But its development has been much slower than anticipated.
In response, Blue Origin has detailed some of its lunar exploration plans. These begin with the launch of its uncrewed lunar lander, Blue Moon, early next year. The company is also developing a crewed version of Blue Moon that it will use on the Artemis V mission, the planned third crewed lunar landing of the Artemis program.
Blue Origin officials have said they are in discussions with NASA over how they might help accelerate the Artemis program.
New Glenn’s significance
New Glenn’s booster landing makes this most recent launch quite significant for the company. While it took SpaceX several tries to land its first booster, Blue Origin has achieved this feat on only the second try. Landing the boosters – and, more importantly, reusing them – has been key to reducing the cost to get to space for SpaceX, as well as others such as Rocket Lab.
That two commercial space companies now have orbital rockets that can be partially reused shows that SpaceX’s success was no fluke.
With this accomplishment, Blue Origin has been able to build on its previous experience and success with its suborbital rocket, New Shepard. Launching from Blue Origin facilities in Texas since 2015, New Shepard has taken people and cargo to the edge of space, before returning to its launch site under its own power.
New Glenn is also significant for the larger commercial space industry and U.S. space capabilities. It represents real competition for SpaceX, especially its Starship rocket. It also provides more launch options for NASA, the U.S. government and other commercial customers, reducing reliance on SpaceX or any other launch company.
In the meantime, Blue Origin is looking to build on the success of New Glenn’s launch and its booster landing. New Glenn will next launch Blue Origin’s Blue Moon uncrewed lander in early 2026.
This second successful New Glenn launch will also contribute to the rocket’s certification for national security space launches. This accomplishment will allow the company to compete for contracts to launch sensitive reconnaissance and defense satellites for the U.S. government.
Blue Origin will also need to increase its number of launches and reduce the time between them to compete with SpaceX. SpaceX is on pace for between 165 and 170 launches in 2025 alone. While Blue Origin may not be able to achieve that remarkable cadence, to truly build on New Glenn’s success it will need to show it can scale up its launch operations.
Wendy Whitman Cobb is affiliated with the US School of Advanced Air and Space Studies. Her views are her own and do not necessarily reflect the views of the Department of Defense or any of its components. Mention of trade names, commercial products, or organizations does not imply endorsement by the U.S. Government, and the appearance of external hyperlinks does not constitute DoD endorsement of the linked websites, or the information, products or services therein.
Dogs — in their many shapes and sizes — are considered one of the most diverse species of animals on the planet.
Most of these breeds are thought to have emerged during the 19th-century Victorian era.
But a new paper, published this week in the journal Science, suggests that about half of the vast diversity in dogs we see today was evident by the middle of the Stone Age.
Dog breeding by humans has created one of the most diverse species of animal on the planet.
A team of researchers across Europe analysed hundreds of dog and wolf skulls spanning the past 50,000 years to track how dogs first emerged.
This early diversity might be tied to the animals’ domestication, says Carly Ameen, the study’s co-lead researcher and a bioarchaeologist at the University of Exeter.
“We found that dogs were already remarkably diverse in their skull shapes and sizes more than 11,000 years ago,” Dr Ameen said.
“This means much of the physical diversity we associate with modern breeds actually has very deep roots, emerging soon after domestication.”
Evolving from wolves to dogs
While dogs have been human companions for thousands of years, untangling exactly when our furry friends went from wolves to domestic animals is difficult to do.
Timelines using different scientific techniques to determine when this evolutionary transition occurred don’t match up.
Genetic evidence shows dogs diverged from wolves about 11,000 years ago, but much older fossils suggest the first dogs roamed around as early as 35,000 years ago.
Modern dogs (pink) and modern wolves (green) show subtle differences in their morphology.
C Brassard / VetAgro Sup / Mecadev
To examine their evolution in a different way, the researchers took 643 skulls of ancient wolves and dogs, and made 3D scans of them to analyse how their shapes changed over time and place.
These subtle shape changes provide clearer evidence of when wolves became dogs, but also of how dogs diversified into the modern era, the researchers said.
Using these 3D models, they found a distinctive dog-like skull shape emerged around 11,000 years ago, which lines up with genetic evidence.
But the models also showed a surprising amount of diversity among ancient dogs across Europe.
“While we don’t see some of the most extreme forms of skull shape that we see today — like pugs or bull terriers — the variation we see by the [middle of the Stone Age] is already half the total amount of variation we see in modern breeds,” Dr Ameen said.
“But for those features to develop, domestication must have started much earlier.”
While the research suggests a large amount of diversity existed as early as the Stone Age, many of the dogs we keep as pets today emerged during the 19th century, when intensive breeding produced speciality animals for fights and shows.
Early humans moved with dogs
Melanie Fillios, an anthropological archaeologist at the University of New England, said the study’s findings — including that almost half the variation in dogs occurred long before the Victorian era — suggested humans weren’t the sole cause of breed diversity.
“There’s all this variability 11,000 years ago, but we’re not sure why,” Professor Fillios, who was not involved in the paper, said.
“Humans have had a hand in it … but there’s also part of the story that may not have been humans.”
A modern dog skull used in a study to track changes in early dogs.
C Ameen / University of Exeter
According to Dr Ameen, the environment may also have shaped the earliest varieties of dogs.
“Some [dogs] lived with mobile hunter-gatherers in cold northern environments, others with groups in temperate forests or early farming communities,” she said.
“Each context brought different demands — for hunting, guarding, or companionship — and that variation likely shaped dogs’ morphology and evolution from the very beginning.”
An archaeological canid skull used in a study to track changes in early dogs.
C Ameen / University of Exeter
A second study, also published today in Science, pushes this idea further, suggesting that dogs likely moved with humans as they began migrating around the globe about 11,000 years ago after the last glacial period ended.
The study’s authors noted that dogs were occasionally traded among populations, which might mean they were important for culture and potentially even trading between groups.
“There’s all of these factors coming together around this time period… You’re getting this giant melting pot,” Professor Fillios said.
Dingoes don’t fit the mould
While Professor Fillios said the study brought together “a lot of information for the first time”, it left plenty of questions still unanswered.
“It’s another piece of the puzzle… but it doesn’t solve the question of dog domestication or human intervention in that process.”
She also noted that studies like this struggled to explain the emergence of species such as dingoes, which occur on an evolutionary “side branch”.
Dingoes have been in Australia for an estimated 3,500 to 8,000 years.
Alex Gisby
It’s unknown how long dingoes have been in Australia, but estimates suggest somewhere between 3,500 and 8,000 years ago.
“It would be a really nice story if all this [dog] diversity … corresponds with genetic evidence, and people moving around the world,” she said.
But when it comes to dingoes, this timeline didn’t work, she said.
“There is no relationship between dingoes and these other branches that came to be the domestic village dogs and Asian and European dogs that we see today.”
For both Dr Ameen and Professor Fillios, more research is needed to understand how dogs first became our companions.
“Dogs were the first species we ever domesticated, and they’ve been evolving with us ever since,” Dr Ameen said.
“While dogs are among the most studied domestic species… a lot remains to be discovered.”
When I first began appropriating the plots of British-Irish novelist Iris Murdoch’s novels to explain scientific concepts, I never stopped to think about whether Murdoch herself would have approved of such an endeavour.
As a professor of molecular biophysics, I find that in both scientific research and all aspects of life, there can be great advantage in thinking differently. I’ve recently given some sessions on this at the Physics of Life summer school, and the fun, ideas and feedback were beyond my wildest dreams – especially as I’d been encouraged to conceal this side of myself as a young scientist.
Back in the 1990s, I did my PhD on protein folding – a conundrum underpinning all biology which has challenged scientists for decades. I wrote about it for The Conversation when a breakthrough won the Nobel prize in chemistry in 2024.
At its heart is a question of competing energies: entropic forces, which push a protein and its surrounding medium to move as freely as possible, versus enthalpic ones, in which positive charges gravitate towards negative charges and things with oily properties congregate. Protein folding is driven by finding the three-dimensional shape that strikes the best balance, satisfying as many of these forces as possible.
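In conventional thermodynamic shorthand (standard textbook notation, not something the article spells out), that balance is summarised by the change in Gibbs free energy on folding:

$$\Delta G_{\text{fold}} = \Delta H - T\Delta S$$

Folding is favourable when $\Delta G_{\text{fold}}$ is negative: the favourable interactions described above must outweigh the entropic cost of fixing the chain, and ordering its surroundings, into one shape at temperature $T$.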
An early book by the Booker-winning author A.S. Byatt, Degrees of Freedom, examines the power structures and layers of control that drive Murdoch novels. It’s a comparable scenario to protein folding: the compromise between many clashing forces.
When Degrees of Freedom first came out in 1965, Murdoch had published nine novels. The book was reissued in 1994 with additional material, when only Murdoch’s final novel, Jackson’s Dilemma – written when Alzheimer’s disease was just beginning to invade her beautiful mind – had yet to emerge.
Reading Murdoch’s 1975 novel A Word Child in 2003, I was struck by the helix-shaped nature of the plot, with London Underground’s Circle Line platform pubs at Sloane Square and Liverpool Street acting as points of vulnerability. I immediately turned to Byatt’s book to see whether her analysis matched my own.
On finding there was no chapter on A Word Child, I trawled the internet and discovered the Iris Murdoch Society, which one could join for the princely sum of £5. Signing up at that time required emailing Anne Rowe at Kingston University, and I couldn’t resist explaining my thoughts on A Word Child and the molecular mechanisms underpinning Alzheimer’s disease. She invited me to submit an abstract to a conference – and from then on, I was hooked.
So far, I’ve used ten out of Murdoch’s 26 novels to illustrate topics as broad as alcoholism and its effect on the liver, sex hormone signalling, evolution, molecular crowding and electron microscopy. While I’m not in any immediate danger of running out of Murdoch material, the recent publication of Poems from an Attic, a collection assembled from material found in her Oxford home many years after her death, adds a glorious new angle to my exploits.
While Murdoch is obsessed with nature – wild swimming, the changing seasons, flora, fauna and the meditative effects of being outdoors – she often speculates in her poems as to why things are as they are, which is an undeniably scientific way of thinking. There are examples of this in many of the poems, whatever their topic.
The word science occurs three times in the new volume – the first in the poem To B, who brought me two candles as a present (B was Murdoch’s lover, Brigid Brophy):
What you require of me no science gives –
To make these fires constant but not consumed.
What blazes every moment when it lives
Has eaten its own substance as it bloomed.
Yet though they burn not all the evening through,
While they are burning each to each is true.
This provides a satisfying analogy to justify sustaining Murdoch’s simultaneous passions. It invokes the same fuel-based resignation as American poet Edna St Vincent Millay’s First Fig:
My candle burns at both ends;
It will not last the night;
But ah, my foes, and oh, my friends
It gives a lovely light!
The other two mentions of science in the new collection appear in You by Telephone – in which Murdoch muses over the changes, both positive and negative, that the invention of the phone had on the practicalities of relationships:
For I cannot close with kisses the lips that may speak me daggers,
Nor give you a gentle answer just by taking your hand.
The poem also includes this delightful digression:
In spite of the case of Odysseus, who might have got home much sooner
If at the start he could have dialled Ithaca one.
But he might have offended Hermes, that rival tele-communer,
And science would have precluded a lot of Homer’s fun.
I am relieved Murdoch didn’t have to grapple with smart phones, social media and today’s attention spans. Years ago, I scoured her archive for thoughts on science, which were mostly touched upon in correspondence, and her entertaining annotations of The Question Concerning Technology and Other Essays by Martin Heidegger, and The Tao of Physics by Fritjof Capra.
Murdoch was certainly interested in science, albeit with a healthy dose of scepticism, while being alarmed at its pace of development. I like to fantasise that I could have talked her down.
Rivka Isaacson receives funding from the UK Research and Innovation Biotechnology and Biological Sciences Research Council.
A group of friends sit around a table sharing stories and sipping mead. The men sport beards and the women sip from drinking horns – but these aren’t Vikings, they’re modern-day hipsters.
The meaderies that supply this modern mead-drinking scene often draw on Viking imagery in their branding. Their wares are called things like Odin’s Mead or Viking Blod and their logos include longships, axes, ravens and drinking horns. A few even have their own themed Viking drinking halls. This is part of what might be called the “Viking turn”, the renewed pop culture vogue for the Vikings in the past 20 years, which has made them the stars of a rash of films, TV shows, video games and memes.
Since the rowdy banquet scene in the 1958 film The Vikings, wild, boozy feasting has been a staple of the hyper-masculine pop culture Viking. This theme continues in the 21st century, from the History Channel’s Vikings TV series (2013-present) to games like Skyrim (2011) and Assassin’s Creed: Valhalla (2020).
But while modern media suggest that Vikings drank mead as often as water, history tells a slightly different story.
The banquet scene from The Vikings (1958).
Three stories are foundational for the Viking association with mead. The first is the Anglo-Saxon poem Beowulf, which survives in a single manuscript written in Old English and now in the British Library.
The story it tells is set in southern Sweden and Denmark in the early 6th century, so the warrior culture and lifestyle that Beowulf idealises are actually of a period considerably earlier than the Viking age (usually dated from the later 8th century onward). It does share a great deal of its substance with later Viking notions of the good life and so, for good or ill, they have tended to be conflated.
Most of Beowulf’s action plays out around mead-halls – the power centres of lords such as the Danish king Hrothgar, where the leader would entertain his followers with feasts and drinking in return for their support and military service. This relationship, based upon the consumption of food and drink, but inextricably bound up with honour and loyalty, is the basis of the heroic warrior society that is celebrated by the poet. Not surprisingly, therefore, episodes in which mead is drunk are frequent and clearly emotionally loaded.
A second high-profile appearance of mead comes in Norse mythology. At the god Odin’s great hall, Valhöll, the Einherjar – the most heroic and honoured warriors slain in battle – feast and drink. They consume the unending mead that flows from the udders of a goat named Heiðrún who lives on the roof. Norse myth, it should be noted, is sometimes quite odd.
Odin excreting mead in the form of an eagle, from an Icelandic 18th century manuscript. Det Kongelige Bibliotek
Lastly, another important myth tells of Odin’s theft of the “mead of poetry”. This substance was created by two dwarves from honey and the blood of a being named Kvasir, whom they had murdered. The mead bestows gifts of wisdom and poetic skill upon those who drink it.
The whole myth is long and complicated, but it culminates with Odin swallowing the mead and escaping in the form of an eagle, only to excrete some of it backwards when he is especially hotly pursued.
These are striking and impressive episodes that clearly demonstrate the symbolic and cultural significance of mead in mythology and stories about heroes of the Viking age. But that is far from proof that it was actually consumed on a significant scale in England or Scandinavia.
As far back as the 1970s, the philologist Christine Fell noted that Old English medu (mead) and compound words derived from it appear far more frequently in strongly emotive and poetic contexts such as Beowulf than in practical ones such as laws or charters.
This contrasts strongly with the pattern of usage of other words for alcohol such as ealu (ale), beor (counterintuitively, probably “cider”) or win (wine), which are far more frequently used in a functional and practical way. This led Fell to believe that the concentration on mead in the likes of Beowulf was a “nostalgic fiction”. Mead, she concluded, was a fundamental part of an idealised and backwards-looking imagined heroic world rather than something customarily drunk in the course of everyday life.
In 2007, a PhD candidate at the University of York demonstrated the same point in the Scandinavian sources: mjǫðr (“mead”) is far more common in the corpus of Eddic and skaldic poetry than it is in the saga stories of everyday life. Equally, both the word mjǫðr and compound words derived from it are used far less frequently in the sort of practical and purposeful contexts in which ǫl and mungát (the Old Norse words for ale) are plentiful.
Drinking horns on display at a Viking-themed pub in York. Author provided, CC BY
The strong impression in both England and Scandinavia is that, by the time sources like Beowulf were written from the 10th century onward, the plentiful drinking of mead by a lord’s retinue was largely symbolic. It represented the contractual bonds of honour in an idealised warrior society.
This was more a poetic image than a reflection of frequent real-life practice. The standard drink at feasts, let alone at normal everyday household meals, was far more likely to be ale.
Mead was once a highly prized drink – probably the most desirable beverage well before the Viking age, as its honoured place in Valhöll and Hrothgar’s hall suggests. However, honey’s scarcity made mead expensive and hard to source in northern Europe. By the Viking age, exotic Mediterranean wine, mentioned as Odin’s drink in the Grímnismál, may have begun to replace mead as the elite’s preferred choice.
So what, then, for modern mead-drinking Viking enthusiasts? The point is not, of course, that Vikings or any other early medieval people never drank mead – some clearly did, if not perhaps quite so often as is sometimes alleged – but rather that it served more as a symbol of a story-filled heroic neverland. But that is arguably exactly how many of today’s mead-drinkers also use it.
Simon Trafford does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Some people are so good with faces that there’s a name for them – super-recognisers. And a new study using eye-tracking technology has given us some insights into how they do it.
Although most of us perform reasonably well when tasked with learning a new person’s face or recognising someone we already know, there are people whose abilities are at the extremes. Those who struggle with faces (even of close friends and family) are known as prosopagnosic or face blind. Some people are born with this difficulty, while others may develop it later in life as a result of a stroke or injury.
In contrast, super-recognisers naturally excel at recognising faces. Studies also show they may be better than most of us when deciding whether images of unfamiliar people depict the same individual (like comparing a stranger to their ID photo), and that this ability may even extend to voices.
The new study suggests the direction of super-recognisers’ gaze when learning a face is important in explaining why they perform so well.
What do super-recognisers do differently?
Since super-recognisers are outstanding at recognising faces, it is interesting and potentially useful to discover what they do differently to the rest of us.
Previous research has shown these people look at faces in a different way when learning them. They make more fixations (stop and focus on more points) while spending less time on the eye region, compared with the average viewer. Their attention is spread more broadly, sampling more information across the face as a whole.
Their style of responding also differs from that of people who are highly trained (over many years) in matching face images: super-recognisers tend to place more confidence in their decisions (both when correct and incorrect) and to respond faster.
Evidence suggests super-recognisers’ face recognition skills are likely to have a strong genetic basis, perhaps explaining why attempts to improve average people’s abilities through short periods of training have generally failed.
What eye-tracking data reveals
Since we know super-recognisers look at faces differently to the average person, researchers in Australia decided to investigate whether this might explain their superior performance levels.
They used eye-tracking data from 37 super-recognisers (identified by their scores across several face perception tests) and 68 typical viewers, collected in 2022 for a previous study, to reconstruct exactly what these participants were looking at when learning new faces.
Super-recognisers stop and focus on more points as they learn a new face, while spending less time on the eye region. Prostock-studio/Shutterstock
Participants viewed the faces through a simulated “spotlight” which moved with their gaze as they explored each face. This meant the researchers could be sure of what information the participant could see during viewing.
Next, all of the regions a participant viewed were combined to create a composite image. This composite was then compared with a full, original image of either the same person (but showing a different facial expression) or a different person (with similar demographic characteristics). High similarity to images of the same person, and low similarity to different people, would mean the composite contained useful identity information.
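The comparison logic described above can be sketched in a few lines of code. The snippet below is purely illustrative rather than the researchers’ actual pipeline: the images are random toy data, the fixation points are made up, and cosine similarity stands in for whatever measure the study used. It simply shows the idea of building a composite from the “spotlight” regions and checking whether it resembles the same person more than a different one.

```python
# Illustrative sketch only - toy data, not the study's actual analysis.
import numpy as np

def composite_from_fixations(face, fixations, radius=20):
    """Keep only the pixels inside a circular 'spotlight' around each fixation point."""
    h, w = face.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for fy, fx in fixations:
        mask |= (yy - fy) ** 2 + (xx - fx) ** 2 <= radius ** 2
    return np.where(mask, face, 0.0)

def similarity(a, b):
    """Cosine similarity between two images flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Toy greyscale "faces": random arrays stand in for real photographs.
rng = np.random.default_rng(0)
learned_face = rng.random((128, 128))
same_person = learned_face + 0.1 * rng.random((128, 128))   # same face, different expression
different_person = rng.random((128, 128))                   # demographically similar stranger

fixations = [(40, 50), (40, 78), (70, 64), (95, 64)]  # hypothetical gaze points
composite = composite_from_fixations(learned_face, fixations)

# Useful identity information = the composite is more similar to the
# same person than to a different person.
print(similarity(composite, same_person) > similarity(composite, different_person))
```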
The researchers’ analyses showed that super-recognisers accessed more valuable information, which resulted in better discrimination between “same person” and “different people” image pairs when compared with typical participants.
After accounting for the fact that super-recognisers simply took in more information than typical viewers, the results showed that the quality of their information was still higher.
More extensive exploration of faces
The researchers suggest that more extensive exploration of faces during learning could help super-recognisers in discovering the most useful features for identification. This may lead to better-formed internal representations of each learned face.
Since super-recognisers look at faces differently to the rest of us from the very earliest stages of viewing, it’s very difficult to train people to match their natural ability. However, forensic facial examiners (professionals whose job involves face comparisons) show it is possible.
They have been found to perform just as well as super-recognisers when comparing pairs of unfamiliar images, presumably due to the extensive and lengthy training and mentoring that they receive – in particular focusing on useful features in the images like the ears and any facial marks.
So there may actually be two types of face experts: those with natural ability (super-recognisers) and those with extensive training (facial examiners). But examiners might choose to pursue this particular career because of an innate ability, so further investigation is needed.
Although the existence of people with exceptional face abilities has been known for nearly two decades, researchers are still trying to understand what makes them excel. As this new study demonstrates, the way super-recognisers (and the rest of us) look at faces as we learn them could play a crucial role in how good – or bad – we are at recognising people in our daily lives.
Robin Kramer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The arrival of the world’s largest aircraft carrier, the USS Gerald R. Ford, in the Caribbean basin on November 11 has intensified fears of a large-scale conflict in the region. The carrier has been deployed as part of US president Donald Trump’s campaign against boats in the Caribbean and Pacific allegedly transporting drugs bound for the US.
But some experts suspect that the real objective is to support a possible US military strike aimed at toppling the regime of Nicolás Maduro in Venezuela. Trump has long accused the Venezuelan government of being a criminal organisation, offering US$50 million (£38 million) earlier in 2025 for information leading to Maduro’s arrest.
Trump recently authorised the CIA to conduct covert lethal operations inside Venezuela, adding that his administration was now considering operations on land. The deployment of the USS Gerald R. Ford, which has 4,000 sailors and dozens of aircraft on board, further raises the stakes.
However, US military action in Venezuela would carry immense risks. The Venezuelan government has long been preparing for an asymmetric conflict with the US, eccentric as this may have sounded in the past.
Venezuela’s military doctrine
In 2002, the Venezuelan government was subject to a US-backed coup attempt. This prompted Hugo Chávez, Venezuela’s leader at the time, to promote an overhaul of national military thinking to deal with a possible US invasion.
His strategy incorporated principles of “people’s war”, a Maoist tactic used extensively by Vietnamese military commander Vo Nguyen Giap in the Vietnam war. This tactic accepts ceding territory to an invading force initially, in favour of engaging the enemy in guerrilla-style warfare until the conflict becomes impossible for the invader to sustain.
A key part of the tactic is that it blurs the boundaries between society and the battlefield, relying on the support and participation of the population. Reflecting on so-called people’s wars in the first half of the 19th century, Prussian military theorist Carl von Clausewitz observed that the strongest wars are those driven by the determination of the people.
The concept of people’s war and asymmetric warfare has been codified in anti-imperialist doctrines throughout the 20th century. This is especially true for the Vietnamese guerrilla leaders. But it was also adopted more loosely by insurgencies such as the Afghan mujahideen, who fought Soviet forces in Afghanistan in the 1980s.
The Iraqi resistance against US forces in the early 2000s figured prominently in Chávez’s mind. Venezuela’s then-president had thousands of copies of Spanish political scientist Jorge Verstrynge’s 2005 book, Peripheral Warfare and Revolutionary Islam, distributed within the Venezuelan army. The book draws on the experience of jihadist groups to emphasise the power of smaller, irregular formations in deciding asymmetric conflicts.
The Bolivarian Militia, a special branch of the Venezuelan armed forces created in 2008, embodies the doctrine of people’s war by incorporating civilians into national security mobilisation. Membership of the militia grew from 1.6 million in 2018 to 5 million by 2024, according to official figures. Under Maduro, the Venezuelan government has said it wants to expand membership to 8.5 million people.
The goal of the militia is not to duplicate conventional Venezuelan armed forces, but to extend their presence across the country. Venezuela’s territorial defence system is based on military deployments at regional, state and municipal levels, with personnel and missions assigned according to local geography and population.
Under Chávez, this system was broken down into much smaller units, covering specific municipal areas and communities. This level of capillarity is possible because it relies on civilian soldiers from the Bolivarian Militia and their profound knowledge of local areas.
For a large proportion of militia men and women, especially older members, their main task would not involve weapons. They would probably be tasked with carrying out what the government calls “popular intelligence” – in other words, surveillance.
This has already been reinforced with a recently launched mobile phone app which allows Venezuelans to report “everything they see and hear” in their neighbourhood that they consider suspicious.
Political and economic quagmire
A powerful US invading force would probably be allowed to march into Venezuela relatively easily. The problem would be the ensuing political and military quagmire that Venezuela’s military doctrine has been designed to create.
There are many uncertainties surrounding this scenario. On the Venezuelan side, civil-military coordination in wartime would be highly complex. Large-scale exercises have seen hundreds of thousands of regular troops, militia members and police simulating possible wartime scenarios. But their logistics have never been tested in real life.
Another uncertainty concerns the cohesion of combatants. Trump’s hardline posture towards Venezuela could trigger a “rally round the flag” effect, reinforcing loyalty to the government in the early stages of war. But the ideological commitment of militia members in a protracted scenario is another question.
On the US side, Trump’s plan for Venezuela remains unclear. Assuming Washington’s aim is to install an opposition government, it’s not obvious how such an administration could survive in the days and weeks after taking power. A conflict could also trigger another wave of Venezuelan emigration, adding to the 8 million-strong diaspora living mostly in Latin American countries.
The Bolivarian doctrine hopes that the prospect of “another Iraq” in Latin America serves as a deterrent against US intervention in Venezuela. But it is unclear whether Trump is taking this prospect seriously.
The US president reportedly considers Venezuela “unfinished business” from his first term in office. At that time, he imposed harsh sanctions on Venezuela’s oil sector, saying in 2023: “When I left, Venezuela was ready to collapse. We would’ve taken it over, and would’ve gotten all that oil.”
Yet a military solution now would still risk leaving this business unfinished for the foreseeable future.
Pablo Uchoa has received UKRI funding for his research on the transformation of Venezuela’s military under Hugo Chávez.
As she carefully prepares her second budget, the chancellor, Rachel Reeves, has hinted that she may be ready to scrap the two-child benefits cap.
This controversial policy prevents parents from claiming child tax credit or universal credit for more than two children (this is different to child benefit payments which are not limited by family size). According to the government’s own figures, the cap affects the households of 1.7 million children, and ditching it would cost upwards of £3.6 billion a year.
Introduced in 2017 as part of measures intended to cut public expenditure on welfare, the policy was designed to ensure that households on means-tested benefits “face the same financial choices about having children as those supporting themselves solely through work”.
However, when it was brought in, the then Conservative government’s impact assessment offered limited detail on the expected costs and benefits. A more comprehensive economic analysis of scrapping the policy would need to consider both the direct fiscal implications and the broader social and economic effects.
The direct fiscal cost is perhaps the most straightforward part of the calculation. Scrapping the cap would require the government to resume payments for families with more than two children, and the £3.6 billion annual cost is considerable at a time when the UK treasury doesn’t have a lot of money to spend.
So what about the potential economic benefits? These fall into two broad areas.
The first concerns the direct impact on children. For example, there is good evidence that additional household income during childhood improves future educational attainment and health. Increasing the money available to poorer households could therefore bring long-term social benefits.
However, the evidence to date on the specific effect of the two-child limit is limited. The Institute for Fiscal Studies recently examined the impact of the two-child limit on early years development (up to the age of five) and found no measurable effect on school readiness.
This finding may have come as a surprise to campaigners who argue that the policy harms child development. But it is consistent with evidence from the US which found that giving extra money to poorer families had no impact on early child development.
It seems then that the short-term effects of lifting the two-child benefit cap may not be significant. But longer-term influences, particularly on educational attainment, health and lifetime earnings could still emerge.
The second area of potential economic benefit relates to encouraging people to have more children. The logic here is that reinstating benefits payments for more than two children would lead to higher fertility rates (the average number of children a woman has over her lifetime).
This is particularly relevant given that the number of births in the UK has declined significantly in recent years, from 812,970 in 2012 to 694,685 in 2021.
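For scale, simple arithmetic on the two figures above (not an additional statistic from the article) puts that fall at roughly 15%:

$$\frac{812{,}970 - 694{,}685}{812{,}970} \approx 0.145$$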
As the population ages and lives longer, there is a risk that a shrinking working-age population will threaten economic prosperity. This is partly through a reduction in the number of workers supporting those who are not working, but also through a reduction in innovation, the key driver of economic growth.
Yet evidence that the two-child limit has significantly deterred parents from having more children is weak. Research suggests only a small decline in birth rates among low-income households likely to be affected by the policy.
Child poverty
Another important consideration is the policy’s effect on the labour market. Evidence indicates that the introduction of the two-child limit led to small increases in hours worked, and an increased likelihood of mothers of three children entering the workforce. This implies that the two-child limit incentivised some people to work more.
If scrapping the cap reverses these effects, the fiscal cost could be even higher because of reduced tax revenue and lower economic output.
That said, this reduction in employment could also be framed as a benefit: stricter benefit rules that push people into work may also lead to negative mental health outcomes, which carry social and fiscal costs of their own.
From an efficiency standpoint, then, the case for scrapping the two-child limit is ambiguous. The evidence on its impact on fertility and childhood outcomes is mixed, and there may be effects on the labour market whose net benefit is uncertain.
But from an equity perspective, the case is much stronger. It is easy to argue that reducing poverty is a desirable policy goal in its own right, regardless of whether it leads to other measurable social benefits.
Scrapping the cap is one of the most cost-effective ways of reducing child poverty. The Resolution Foundation thinktank estimates that abolishing the two-child limit would lift around 500,000 children out of poverty and is the single most effective policy lever available to government. It may now be a lever that Reeves intends to pull.
William Cook receives funding from UKRI, the Education Endowment Fund and the Youth Endowment Fund.
The increasing number of injectable cosmetic treatments and fillers carried out around the world is driven by a seemingly universal need to look younger than we are. Most are administered to women, but a growing number of men are having them too.
This beauty-is-youth belief has a geological cost. Over 14 million stainless steel hypodermic needles are used and discarded annually for cosmetic treatments around the world. The metals used to create them are considered critical.
Stainless steel is an alloy of iron and chromium, with nickel added to most grades. The iron in a needle might have come from the Pilbara in Western Australia. It was born over a billion years ago, when oxygen from the photosynthesis of early bacteria combined with iron in the ancient oceans and settled on the sea floor.
The chromium could have come from the Bushveld Complex in South Africa, an igneous intrusion created when magma found its way into the Earth’s crust through vertical cracks and then cooled, allowing the chromite to differentiate and crystallise in distinct layers.
And then there’s the nickel. Like chromite, it began its life in the upwelling and cooling of magma associated with the formation of the continents as we know them now, and through the weathering of igneous rocks. It’s likely to have come from Indonesia, where deposits of nickel are close to the surface and economical to extract.
A critical mineral is one that is considered essential for a state’s economy, national security and clean energy technologies, and has a supply chain vulnerable to disruption by war, tariffs and scarcity. Critical minerals cannot easily be replaced by something else.
The needles used for injectable cosmetic procedures are made using various critical minerals. fast-stock/Shutterstock
The critical list
What is on a particular country’s critical minerals list says something about the geopolitics of the places where commodities are mined, the characteristics of the commodity itself and the priorities of the country compiling the list.
Chromium is considered critical by the US, Canada and Australia because it is essential for stainless steel production and other high-performance alloys. Demand for chromium is expected to grow by 75 times between 2020 and 2040 due, in part, to the clean energy transition. Reserves are concentrated, with South Africa producing over 40% of supply in 2023, followed by Kazakhstan, Turkey, India and Finland.
Nickel was added to the UK’s critical mineral list in 2024. Described as the “Swiss army knife” of energy transition minerals, it is used to increase energy density in lithium batteries, allowing for their miniaturisation and increasing the range of electric cars. Indonesia holds 42% of the world’s reserves.
Even iron ore is on the list. High-quality iron ore was put on Canada’s critical minerals list in 2024 because of its importance for “green steel” production and decarbonisation goals.
The rapidly increasing demand for stainless steel for cosmetic purposes is tangled up with urgent demands from other sectors. It is essential for construction, transportation, food production and storage, medicine and the manufacture of consumer goods.
It is vital for defence. Stainless steel is used in aircraft and vehicle components, naval vessels, missile parts and ballistics.
Needles used in cosmetic procedures are also entangled with other resource-related issues that have no easy answer: mining-related conflict, concerns about the environmental and social impact of mining and controversy over new mining frontiers, like the deep seabed and the Moon.
Then there is the carbon footprint of the multiple processes required to turn rocks into needles and disposing of them safely. Each one has to be mined, shipped, smelted, manufactured, trucked, used, put in a sharps bin and then incinerated.
Do we have to choose between cosmetic procedures and the green transition? Cosmetic procedures or defence? No. Our increasing demand for injectable cosmetic procedures isn’t responsible for making chromium, nickel and iron ore critical. But it’s part of that story and it comes with a cost.
Source: The Conversation – UK – By Rahmat Poudineh, Honorary Research Associate, Oxford Sustainable Finance Group, University of Oxford
Nearly a decade after the Paris agreement, the world is emitting more greenhouse gases than ever. Global emissions reached a record 53 billion tonnes in 2024 – about 10% higher than in 2015, when the deal was signed. Despite near-universal participation, the international effort to cut emissions is failing.
The Paris system, built on voluntary pledges, has turned into more of a reporting exercise than a coordination mechanism.
Even if all countries’ pledges were fully implemented, global emissions would be only 2.6% lower than 2019 levels by 2030 (versus 43% required).
Paris succeeded in creating a shared language of ambition and reporting, but not in enforcing collective compliance. It now functions less as a steering mechanism and more as a global scoreboard, showing who is ahead or behind. The absence of binding rules made universal participation possible – but also removed incentives to stay on course.
Emissions within acceptable limits
The world is entering the age of “managed emissions” – an era of containment, not cure. Instead of eliminating greenhouse gases, governments are learning to live with them, keeping pollution within politically acceptable limits.
Deep decarbonisation is being pushed further into the future, perhaps the 2060s or 2070s. Each revision of global scenarios quietly redefines delay as progress.
To be managed – not eliminated. JKVisuals / shutterstock
Climate policy as industrial strategy
The erosion of cooperation hasn’t led to inaction. Instead, it has sparked a new kind of race: competitive decarbonisation.
Major economies are cutting emissions mainly to strengthen energy security, secure industrial advantage, and expand geopolitical influence. Clean-energy investment reached around US$2.2 trillion in 2024, mostly concentrated in China, the EU, and North America. Climate action is now shaped more by a desire to promote key industries than by multilateral coordination.
A new industrial climate regime has emerged where success is measured by national market share in clean technologies, not by collective progress toward global goals.
This shift is also geopolitical. The rivalry between the US and China has spilled into climate policy, with each using green leadership to project influence and set global standards. Competition over clean technologies has encouraged export restrictions and trade disputes, stifling open collaboration.
The race for critical minerals adds another layer. These resources are essential for renewable technologies, and nations are moving from cooperation to resource nationalism, securing supplies by forming strategic partnerships and investing heavily in domestic mining.
At home, governments are tailoring climate policies to domestic interests. Action on climate is now tied to industrial jobs, competitiveness, and voter expectations.
Protecting economies, not the planet
To prevent “carbon leakage” – where companies relocate to countries with weaker rules – rich nations are introducing trade measures such as carbon border adjustments. These policies aim to protect national industries while maintaining environmental standards, but they also risk deepening global divides.
Developing countries argue that wealthy nations have failed to deliver on climate finance and technology transfer, promises central to the Paris deal. The result is an erosion of trust: poorer countries see a system that benefits the industrialised world while restricting their own growth.
These trends reveal something deeper than a shortfall in ambition. They expose an illusion of control. Despite record investment, global emissions continue to rise because today’s governance tools no longer match the scale and complexity of the energy system. The world is not defying Paris by choice, but by design – through a framework relying on voluntary pledges in a fiercely competitive global economy.
This is not necessarily a story of failure. The shift from cooperation to competition has unleashed investment, innovation and the deployment of clean technologies. Yet without global alignment, progress is uneven at best.
The challenge ahead is not only technological but moral: can global governance resist the comfort of incremental progress? Can it reclaim a sense of shared direction?
If “managed emissions” become the accepted destination, humanity may master adaptation yet forfeit transformation. At the UN’s Cop30 climate summit, the task is not merely to promise more – but to recover belief in collective action before it quietly disappears.
Rahmat Poudineh is head of electricity research at the Oxford Institute for Energy Studies (OIES). OIES is an independent and autonomous energy research institute based at Oxford.
Shannon McCoole ran one of the world’s largest dark web child abuse forums for around three years in the early 2010s. The forum provided a secure online space in which those interested in abusing children could exchange images, advice and support. It had around 45,000 users and was fortified with layers of online encryption that ensured near-complete anonymity for its users. In other words, it was a large and flourishing community for paedophiles.
McCoole eventually became the subject of an international investigation led by Taskforce Argos – a specialist unit in Australia’s Queensland Police Service dedicated to tackling online child abuse networks.
Key to the investigation – and McCoole’s eventual arrest and conviction – was a piece of linguistic evidence: his frequent use of an unusual greeting term, “hiyas”, as noticed by an investigating officer.
Investigators began searching relevant “clear web” sites (those openly accessible through mainstream search engines) for any markers of a similar linguistic style. They knew the kinds of websites to search because McCoole would speak about his outside interests on the forum, including basketball and vintage cars.
A man was discovered using the giveaway greeting on a four-wheel drive discussion forum. He lived in Adelaide and used a similar handle to the paedophile forum’s anonymous chief administrator. Another similarly named user – also using “hiyas” as a preferred greeting term – was discovered on a basketball forum. Suddenly, the police had their man.
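The basic idea, using a quirk of someone’s idiolect as a search key, can be sketched in a few lines of code. The snippet below is a toy illustration only: the forum posts, usernames and scoring are invented and bear no relation to actual police tooling. It simply shows how a distinctive token like “hiyas” could be used to rank candidate accounts gathered from openly accessible forums.

```python
# Toy illustration of idiolect-marker matching - invented data, not real tooling.
import re
from collections import Counter

MARKER = "hiyas"  # the distinctive greeting noticed by investigators

# Hypothetical posts gathered from openly accessible ("clear web") forums.
clear_web_posts = [
    {"user": "outback_4wd", "forum": "four-wheel-drive", "text": "Hiyas all, anyone fitted a snorkel to an 80 series?"},
    {"user": "hoops_fan22", "forum": "basketball", "text": "hiyas, great game last night"},
    {"user": "gearhead", "forum": "four-wheel-drive", "text": "Hello, looking for vintage parts"},
]

def marker_count(text: str, marker: str) -> int:
    """Count whole-word occurrences of the marker, case-insensitively."""
    return len(re.findall(rf"\b{re.escape(marker)}\b", text, flags=re.IGNORECASE))

# Rank users by how often they use the marker.
scores = Counter()
for post in clear_web_posts:
    scores[post["user"]] += marker_count(post["text"], MARKER)

for user, count in scores.most_common():
    if count:
        print(f"{user}: uses '{MARKER}' {count} time(s)")
```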
This linguistic evidence contributed to the identification, arrest and eventual conviction of McCoole. But it didn’t end there. After McCoole’s arrest, Taskforce Argos took over his account and continued to run the forum, as him, for another six months. Police were able to gather vital intelligence that led to the prosecution of hundreds of offenders and to the rescue of at least 85 child victims.
McCoole’s case is breathtaking, and it offers a compelling demonstration of the power of language in identifying anonymous individuals.
The power of language
My journey into forensic linguistics began in 2014 at Aston University, where I began learning about the various methods and approaches to analysing language across different contexts in the criminal justice system.
A forensic linguist might be called upon to identify the most likely author of an anonymously written threatening text message, based on its language features; or they might assist the courts in interpreting the meaning of a particular slang word or phrase.
Forensic linguists also analyse the language of police interviews, courtroom processes and complex legal documents, pointing out potential barriers to understanding, especially for the most vulnerable groups in society. Without thoughtful consideration of the linguistic processes that occur in legal settings and the communication needs of the population, these processes can (and do) result in serious miscarriages of justice.
A particularly egregious example of this occurred when Gene Gibson was wrongly imprisoned for five years in Australia after being advised to plead guilty to manslaughter. Gibson was an Aboriginal man with a cognitive impairment and for whom English was a third language. The conviction was overturned when the court of appeal heard Gibson had not understood the court process, nor the instructions he was given by his appointed interpreter.
So forensic linguistics is not just about catching criminals; it’s also about finding ways to better support vulnerable groups who find themselves, in whatever capacity, having to interact with legal systems. It is an attempt to improve the delivery of justice through language analysis.
Something that struck me in the earliest days of my research was the relative lack of work exploring the language of online child sexual abuse and grooming. The topic had long received attention from criminologists and psychologists, but almost never linguists – despite online grooming and other forms of online child sexual offending being almost exclusively done through language.
There is no doubt that researching this dark side of humanity is difficult in all sorts of ways, and it can certainly take its toll.
Nonetheless, I found the decision to do so straightforward. If we don’t know much about how these offenders talk to victims, or indeed each other, then we are missing a vital perspective on how these criminals operate – along with potential new routes to catching them.
These questions became the central themes of both my MA and PhD theses, and led to my ongoing interest in the language that most people never see: real conversations between criminal groups on the dark web.
Anonymity and the dark web
The dark web originated in the mid-1990s as a covert communication tool for the US federal government. It is best described as a portion of the internet that is unindexed by mainstream search engines. It can only be accessed through specialist browsers, such as Tor, that disguise the user’s IP address.
This enables users to interact in these environments virtually anonymously, making them ideal for hidden conversations between people with shared deviant interests. These interests aren’t necessarily criminal or even morally objectionable – consider the act of whistleblowing, or of expressing political dissent in a country without free speech. The notion of deviance depends on local and cultural context.
Nonetheless, the dark web has become all but synonymous with the most egregious and morally abhorrent crimes, including child abuse, fraud, and the trafficking of drugs, weapons and people.
Combating dark web crime centres around the problem of anonymity. It is anonymity that makes these spaces difficult to police. But when all markers of identity – names, faces, voices – are stripped away, what remains is language.
And language expresses identity.
Through our conscious and unconscious selections of sounds, words, phrases, viewpoints and interactional styles, we tell people who we are – or at least, who we are being from moment to moment.
Language is also the primary means by which much (if not most) dark web crime is committed. It is through (written) linguistic interaction that criminal offences are planned, illicit advice exchanged, deals negotiated, goals accomplished.
For linguists, the records and messages documenting the exact processes by which crimes are planned and executed become data for analysis. Armed with theory and methods for understanding how people express (or betray) aspects of their identity online, linguists are uniquely placed to address questions of identity in these highly anonymous spaces.
What kind of person wrote this text?
The task of linguistic profiling is well demonstrated by the case of Matthew Falder. Falder pleaded guilty to 137 charges relating to child sexual exploitation, abuse and blackmail in 2018. The National Crime Agency (NCA) dubbed the case its first ever “hurt-core” prosecution, owing to Falder’s prolific use of “hidden dark web forums dedicated to the discussion and image and video sharing of rape, murder, sadism, torture, paedophilia, blackmail, humiliation and degradation”.
As part of the international investigation to identify this once-anonymous offender, police sought out the expertise of Tim Grant, former director of the Aston Institute for Forensic Linguistics, and Jack Grieve from the University of Birmingham. Both are world-leading experts in authorship analysis, the identification of unknown or disputed authors and speakers through their language. The pair were tasked with ascertaining any information they could about a suspect of high interest, based on a set of dark web communications and encrypted emails.
Where McCoole’s case was an example of authorship analysis (who wrote this text?), Falder’s demanded the slightly different task of authorship profiling (what kind of person wrote this text?).
When police need to identify an anonymous person of interest but have no real-world identity with which to connect them, the linguist’s job is to derive any possible identifying demographic information. This includes age, gender, geographical background, socioeconomic status and profession. But they can only glean this information about an author from whatever emails, texts or forum discussions might be available. This then helps them narrow the pool of potential suspects.
Grant and Grieve set to work reading through Falder’s dark web forum contributions and encrypted emails, looking for linguistic cues that might point to identifying information.
They were able to link the encrypted emails to the forum posts through some uncommon word strings that appeared in both datasets. Examples included phrases like “stack of ideas ready” and “there are always the odd exception”.
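For readers curious about how such overlaps can be surfaced computationally, here is a minimal sketch in Python of the general idea – not the investigators’ actual method. It extracts word sequences from two sets of texts and keeps only those that appear in both while remaining rare in each; the example texts and thresholds are invented for illustration.

```python
from collections import Counter
import re

def ngrams(text, n=4):
    """Return all word n-grams (as tuples) from a lowercased text."""
    words = re.findall(r"[a-z']+", text.lower())
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def shared_rare_phrases(texts_a, texts_b, n=4, max_count=2):
    """Find n-grams that appear in both collections but are rare in each.

    texts_a, texts_b: lists of strings (say, forum posts and emails).
    max_count: treat a phrase as 'uncommon' if it occurs at most this
    many times in each collection - an illustrative threshold only.
    """
    counts_a = Counter(g for t in texts_a for g in ngrams(t, n))
    counts_b = Counter(g for t in texts_b for g in ngrams(t, n))
    shared = set(counts_a) & set(counts_b)
    return sorted(
        g for g in shared
        if counts_a[g] <= max_count and counts_b[g] <= max_count
    )

# Invented examples standing in for the two datasets:
forum_posts = ["Got a stack of ideas ready for next month."]
emails = ["As ever, there's a stack of ideas ready to go."]
print(shared_rare_phrases(forum_posts, emails))
```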
They then identified features that offered demographic clues to Falder’s identity. For example, the use of both “dish-soap” and “washing-up liquid” (synonymous terms from US and British English) within the same few lines of text. Grant and Grieve interpreted the use of these terms as either potential US influence on a British English-speaker, or as a deliberate attempt by the author to disguise his language background.
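A similarly simple sketch shows how co-occurring dialect variants might be flagged automatically. The variant pairs below are invented for illustration; a real analysis would draw on established dialect resources and far richer evidence than single word matches.

```python
# Illustrative US/UK lexical variant pairs - not a published list.
VARIANT_PAIRS = [
    ("dish-soap", "washing-up liquid"),
    ("sidewalk", "pavement"),
    ("diaper", "nappy"),
]

def dialect_flags(text):
    """Report which variant of each pair (US, UK or both) appears in a text."""
    lowered = text.lower()
    flags = {}
    for us_term, uk_term in VARIANT_PAIRS:
        found = [label for label, term in (("US", us_term), ("UK", uk_term))
                 if term in lowered]
        if found:
            flags[f"{us_term} / {uk_term}"] = found
    return flags

# A text using both variants of the same pair would stand out:
print(dialect_flags("Pass the dish-soap... I mean the washing-up liquid."))
```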
Ultimately, the linguists developed a profile that described a highly educated, native British English-speaking older man. This “substantially correct” linguistic profile formed part of a larger intelligence pack that eventually led to Falder’s identification, arrest and conviction. Grant and Grieve’s contribution earned them Director’s Commendations from the NCA.
Linguistic strategies
The cases of McCoole and Falder represent some of the most abhorrent crimes that can be imagined. But they also helped usher into public consciousness a broader understanding of the kinds of criminals that use the dark web. These online communities of offenders gather around certain types of illicit and criminal interests, trading goods and services, exchanging information, issuing advice and seeking support.
For example, it is not uncommon to find forums dedicated to the exchange of child abuse images, or advice on methods and approaches to carrying out various types of fraud.
In research, we often refer to such groups as communities of practice – that is, people brought together by a particular interest or endeavour. The concept can apply to a wide range of different communities, whether professional-, political- or hobby-based. What unites them is a shared interest or purpose.
But when communities of practice convene around criminal or harmful interests, providing spaces for people to share advice, collaborate and “upskill”, ultimately they enable people to become more dangerous and more prolific offenders.
The emerging branch of forensic linguistics research of which I am part explores such criminal communities on the dark web, with the overarching aim of helping to police and disrupt them.
Work on child abuse communities has shown the linguistic strategies by which new users attempt to join and ingratiate themselves. These include explicit references to their new status (“I am new to the forums”), commitments to offering abuse material (“I will post a lot more stuff”), and their awareness of the community’s rules and behavioural norms (“I know what’s expected of me”).
Research has also highlighted the social nature of some groups focused on the exchange of indecent images. In a study on the language of a dark website dedicated to the exchange of child abuse images, I found that a quarter of all conversational turns contributed to rapport-building between members – through, for example, friendly greetings (“hello friends”), well-wishing (“hope you’re all well”) and politeness (“sorry, haven’t got those pics”).
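To give a flavour of how such a proportion might be estimated, here is a rough Python sketch that counts conversational turns containing at least one rapport-building marker. The marker phrases and example turns are simplified placeholders; the study itself used a much fuller coding scheme and manual analysis.

```python
import re

# Simplified stand-ins for rapport-building markers (greetings,
# well-wishing, politeness).
RAPPORT_PATTERNS = [
    r"\bhello\b", r"\bhi\b", r"\bthanks?\b", r"\bsorry\b",
    r"\bhope (you|everyone)\b", r"\btake care\b",
]

def rapport_proportion(turns):
    """Estimate the share of conversational turns that contain at least
    one rapport-building marker."""
    if not turns:
        return 0.0
    hits = sum(
        1 for turn in turns
        if any(re.search(p, turn.lower()) for p in RAPPORT_PATTERNS)
    )
    return hits / len(turns)

# Invented forum turns for illustration:
turns = [
    "hello friends, hope you're all well",
    "sorry, haven't got those pics",
    "link is down again",
    "thanks for the upload",
]
print(f"{rapport_proportion(turns):.0%} of turns contain rapport markers")
```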
This demonstrates the perhaps surprising importance of social politeness and community bonding within groups whose central purpose is to trade in child abuse material.
Linguistic research on dark web criminal communities makes two things clear. First, despite the shared interest that brings them together, they do not necessarily attract the same kinds of people. More often than not they are diverse, comprising users with varied moral and ideological stances.
Some child abuse communities, for example, see sexual activity with children as a form of love, protesting against others who engage in violent abuse. Other groups openly (as far as is possible in dark web settings) appear to relish the violent abuse itself.
Likewise, fraud communities tend to comprise people with highly varied motivations and morals. Some claim to be seeking a way out of desperate financial circumstances, while others proudly discuss their crimes as retribution against “a corporate elite”. Some are looking for a small side hustle that won’t attract “too many questions”, while a small proportion of self-identified “real fraudsters” brag about their high status and denigrate those less experienced.
A common practice in these groups is to float ideas for new schemes – for example, the use of a fake COVID pass to falsely demonstrate vaccination status, or the use of counterfeit cash to pay sex workers. That the morality of such schemes provokes strong debate among users is evidence that fraud communities comprise different types of people, with a range of motivations and moral stances.
Community rules – even in abuse forums
The second thing – perhaps another surprising fact – is that rules are king in these secret groups. As with many clear web forums, criminal dark web forums are typically governed by “community rules” which are upheld by site moderators. In the contexts of online fraud – and to an even greater extent, child abuse – these rules do not just govern behaviour and define the nature of these groups; they are essential to their survival.
The rules of child sexual exploitation and abuse forums are often extremely specific, laying out behaviours that are encouraged (often relating to friendliness and support among users) as well as those that will see a user banished immediately and indefinitely. These rules reflect the nature of the community in question, and often differ between forums. For instance, some forums ban explicitly violent images, whereas others do not.
Rules around site and user security highlight users’ awareness of potential law enforcement infiltration of these forums. Rules banning the disclosure of personal information are ubiquitous and crucial to the survival and longevity of these groups.
Dark web sites often survive only days or weeks. The successful ones are those in which users understand the importance of the rules that govern them.
The rise of AI
Researching the language of dark web communities provides operationally useful intelligence for investigators. As in most areas of research, the newest issue we face in forensic linguistics is understanding the challenges and opportunities posed by increasingly sophisticated AI technologies.
At a time when criminal groups are already using AI tools for malicious purposes like generating abuse imagery to extort children, or creating deepfakes to impersonate public figures to scam victims, it is more important than ever that we understand how criminal groups communicate, build trust, and share knowledge and ideas.
By doing this, we can assist law enforcement with new investigative strategies for offender prioritisation and undercover policing that work to protect the most vulnerable victims.
As we stand at this technological crossroads, the collaboration between linguists, technology and security companies, and law enforcement has become more crucial than ever. The criminals are already adapting. Our methods for understanding and disrupting their communications must evolve just as quickly.
Emily Chiang has received funding from UKRI – Innovate UK.