“Project Hail Mary,” the movie adaptation of Andy Weir’s 2021 novel about a science teacher attempting to save the Earth from sun-eating microbes, was released in March 2026 to stellar ratings from critics and audiences alike. The movie explores a few unique forms that extraterrestrial life could take, from space microorganisms that produce both infrared light and an unfathomable amount of energy, to rocklike aliens that live under crushing pressure and breathe ammonia.
Over the past decade, scientists have come up with a variety of frameworks to guide their search for life in the universe. While it’s most convenient to start looking for life using the knowledge that biologists have about life on Earth, scientists have also begun integrating broader conceptions of life, including life that perhaps evolved in different chemical environments.
To expand on the idea that life out in space might look nothing like life on Earth, here are five articles The Conversation U.S. rounded up from our archives, written by astronomers and astrobiologists.
Planets that are close enough to their Sun that liquid water wouldn’t freeze, but far enough away that it wouldn’t evaporate, fall into what’s called the Goldilocks Zone. But why base the search on water, which complex life on Earth uses to survive, if an extraterrestrial life-form might use different chemistry?
Cole Mathis, a physicist and astrobiologist at Arizona State University who studies complex adaptive systems, explained that out of convenience, astronomers start by looking for signals similar to those produced by life on Earth.
Detecting chemical signatures using the instruments on telescopes is tricky – it’s like playing hide-and-seek, but you’re outside the house and can only peer in through the window. You might as well start by ruling out the easy and more obvious hiding spots.
By measuring the depth of the dip in brightness and knowing the size of the star, scientists can determine the size or radius of the planet. NASA Ames
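The caption above describes the transit method: the fraction of starlight blocked is the ratio of the planet’s disc area to the star’s, so the planet’s radius follows from the square root of the dip. A minimal sketch (the function name and the example numbers are illustrative, not from the article):

```python
import math

def planet_radius(transit_depth, star_radius_km):
    """Estimate planet radius from the fractional dip in stellar brightness.

    The blocked light is proportional to the ratio of disc areas,
    so depth ≈ (Rp / Rs)**2, which gives Rp = Rs * sqrt(depth).
    """
    return star_radius_km * math.sqrt(transit_depth)

# A Sun-like star (radius ~696,000 km) dimming by about 0.008% during a
# transit implies a roughly Earth-sized planet.
rp = planet_radius(0.00008, 696_000)
print(round(rp))  # → 6225 (km), close to Earth's ~6,371 km radius
```

In practice the measurement is far harder than this sketch suggests, which is part of why astronomers start with the easiest, most Earth-like signals.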
Missions to Mars have looked for signs of photosynthesis – the process by which plants convert sunlight into energy – and telescopes peering deep into space look for oxygen, which organisms on Earth release into the atmosphere.
“Most astronomers and astrobiologists know that if we only look for life that’s like Earth life, we might miss the signs of aliens that are really different,” Mathis wrote. “But honestly, we’ve never detected aliens before, so it’s hard to know where to start. When you don’t know what to do, starting somewhere is usually better than nowhere.”
Sometimes, scientists find chemical ingredients that make up life on Earth out in space, but they can’t assume that these ingredients on their own indicate life. Geological and environmental processes on planets may produce these chemical signatures without any living organisms involved.
His research team came up with a framework that, instead of looking for a specific type of life-form, looks at patterns in collections of chemicals and evaluates whether they could have been produced by processes like metabolism and evolution.
“If we assume that alien life uses the same chemistry, we risk missing biology that is similar – but not identical – to our own, or misidentifying nonliving chemistry as a sign of life,” wrote Aghazadeh.
Like Aghazadeh, many astrobiologists are starting to look more broadly at how complexity emerges, rather than searching for a specific type of molecule that could indicate the presence of extraterrestrial life. Other forms of life may be made up of entirely different chemical ingredients from life on Earth, but to be considered life, they would still have to adapt and evolve over time.
Evolution is the process of change in systems. It can describe how a group of something becomes more complex – or even just different – over time.
Chris Impey, an astronomer from the University of Arizona, attended a workshop where scientists across disciplines came together to try to understand how and why systems in the universe – from organisms to languages and information – change or grow more complex over time.
Figuring out these underlying drivers of complexity, or finding signals that indicate the presence of a complex system, could help scientists search for unique forms of life in the universe.
“As astrobiologists try to detect life off Earth, they’ll need to be creative,” Impey wrote. “One strategy is to measure mineral signatures on the rocky surfaces of exoplanets, since mineral diversity tracks terrestrial biological evolution. As life evolved on Earth, it used and created minerals for exoskeletons and habitats.”
Another option for searching for life has nothing to do with biology. Some scientists, wrote astronomers Macy Huston and Jason Wright from Penn State University, look for “technosignatures”: signals that would come from technology originating beyond Earth.
Human technology – from TV towers to satellite and spacecraft communications – emits enough radio waves to create faint but detectable signals traveling through space. Scientists use this idea to search for artificial signals that could potentially come from an extraterrestrial civilization.
Other technosignatures could include chemical pollution, artificial heat or light from industry, or signals from a large number of satellites.
Advanced civilizations may produce a lot of pollution in the form of chemicals, light and heat that can be detected across the vast distances of space. NASA/Jay Friedlander
“While many astronomers have thought a lot about what might make for a good signal, ultimately, nobody knows what extraterrestrial technology might look like and what signals are out there in the universe,” wrote Huston and Wright.
Detecting extraterrestrial life in any form would be a momentous occasion, so, as Impey wrote, making a declaration might not be cut-and-dried. In “Project Hail Mary,” the fictional scientists sample and study the “space dots” they find extensively before drawing a conclusion.
Scientists must first rule out any possible non-biological explanations for a discovery, meaning the discovery would have to be unexplained by any chemical or geological processes. If scientists ever found a potential life-form very different from all life on Earth, it might take extensive research before they could rule out all other possibilities and determine that it’s a living organism. But setting this bar so high protects scientists from making a claim they would later need to walk back.
“A detection of life would be a remarkable development,” Impey wrote. “On scales large and small, astronomers try to set a high bar of evidence before claiming a discovery.”
Source: The Conversation – USA – By Jeffrey Tully, Associate Clinical Professor of Anesthesiology, University of California, San Diego
HBO Max’s enormously popular television series “The Pitt” is receiving plaudits for its realistic depiction of the trials and tribulations of health care in an urban emergency room.
Now in its second season, which premiered on Jan. 8, 2026, the show follows Dr. Michael “Robby” Robinavitch (played by Noah Wyle) and his colleagues through a single 15-hour clinical shift, divided into one-hour episodes. The team treats patients against a backdrop of all-too-common American societal plagues, from substance use disorder to medical bankruptcies and mass shootings.
Spoiler alert: About halfway through the season, Dr. Robby and the staff at the fictional Pittsburgh Trauma Medical Center grapple with chaos ensuing from a less commonly depicted disaster – a hospital cyberattack. The hospital’s network and computers are incapacitated, resulting in scenes of millennial residents struggling with fax machines, laboratory orders disappearing in a shuffle of papers, and constant communication breakdowns culminating in a missed life-threatening diagnosis.
All this might prompt viewers to wonder: Does this actually happen in real life?
The short answer is yes – and these attacks have severe clinical consequences. In an unfortunate case of art imitating life, the show’s cyberattack story arc began on the same day that the University of Mississippi Medical Center suffered the same fate, resulting in the sudden closure of more than 30 affiliated clinics across the state while also disrupting Mississippi’s only Level I trauma center.
Modern health care is critically dependent on digital technologies, such as electronic health records, laboratory machines and radiology platforms, that shut down when hospital networks are taken offline. Losing access to these tools for prolonged periods of time puts patients’ lives at grave risk.
A ransomware attack sent the fictional Pittsburgh Trauma Medical Center emergency room back to the dark ages.
What’s at stake
The most dire real-life cyberattacks on hospitals involve ransomware, a class of malicious software that encrypts data and locks down computers and networks, demanding significant amounts of cash for the promise of relief. Unfortunately, these events are not rare. Comparitech, a cybersecurity research firm, recorded 445 ransomware attacks on hospitals and clinics in 2025 – a new peak following several years of annual increases.
Moreover, the health impacts of ransomware are not confined to the hospitals under attack. “The Pitt” demonstrates this phenomenon well in earlier episodes. When Westbridge, another hospital in the community, is struck first, a wave of patients arriving by ambulance strains Pittsburgh Trauma Medical Center’s already packed emergency room, leading to delays in care and overwhelming clinicians. Our team found that a hospital cyberattack cut the odds of surviving a cardiac arrest without devastating brain damage by nearly 90% at nearby hospitals, not just the one that was attacked.
And even when a hospital’s computer systems are restored and normal care resumes, a cyberattack leaves enormous financial damage in its wake. Class action lawsuits, fragmented billing and steep regulatory fines due to patient privacy breaches and other issues often result in tens to hundreds of millions of dollars of losses.
In the worst cases, hospitals or clinics in rural areas have been forced to shutter their doors, leaving their communities with one less place to receive care and exacerbating existing health care deserts.
Modern health care depends on digital technologies, which leaves hospitals vulnerable to crippling cyberattacks.
Protecting cyber infrastructure
We have no doubt that Dr. Robby will rally his team to ultimately save the day from malicious cyberattacks on “The Pitt.” But what is the prognosis for the rest of us, in the real world?
The good news is that a number of efforts are underway to improve the cybersecurity of the U.S. health care system.
Several states, including New York and Connecticut, have taken further action, enacting laws in 2025 and 2026 that require hospitals to develop specific cybersecurity plans to protect patients. And the Food and Drug Administration now evaluates the cybersecurity of new medical devices before they reach the market, and can issue recalls of those found to have significant vulnerabilities.
Cybersecurity remains one of the few bipartisan issues on Capitol Hill. A health care cybersecurity bill co-sponsored by Senators Bill Cassidy, R-La., and Mark Warner, D-Va., introduced in December 2025, would require hospitals to adopt security practices, including multifactor authentication and data encryption, allocate additional grants for hospitals and clinics, and strengthen the pipeline for cybersecurity professionals working in the health care sector, among other provisions.
However, this problem isn’t going away. Artificial intelligence and the expansion of remote and virtual care mean that malicious hackers have sophisticated new tools and increased opportunities to target hospitals. Researchers like us will have to find new ways to prevent cyberattacks when possible and protect patients when attacks inevitably occur.
Jeffrey Tully is the co-founder of Inoculum Labs. He receives funding from the Advanced Research Projects Agency for Health. He is affiliated with the Healthcare and Public Health Sector Coordinating Council Cybersecurity Working Group.
Christian Dameff co-founded Inoculum Labs. He receives funding from ARPA-H.
Suspicion and affection. Apprehension and excitement. Most people have mixed feelings about AI English, whether or not they always recognize it. When reading text generated by AI, people feel it sounds off, or fake. When reading English by a human, people are more likely to feel it has a characteristic voice or a personal touch.
What exactly makes English sound human, or sound like AI? And does it matter if AI English never truly achieves a human feel?
When generative AI language tools came along, they scaled up these problems. English-based large language models are trained on text from the public internet. Human instructions tell the models to sound like formal English. Because of that, large language models end up trained on all the bias baked into standardized human texts and ideas.
In my work, I encounter people who would never trust the internet to tell them what is right and wrong, yet they trust generative AI to tell them how to write.
Human vs. AI
The first step to becoming a more informed user of AI English is to try to understand what people mean when they say writing sounds human. This understanding will improve your AI literacy. Most importantly, it will allow you to learn to recognize two qualities that make human English different from AI English: variation and readability.
Human English contains persistent, if subtle, linguistic patterns of variation and readability. By contrast, AI uses what I call exam English – a rather formal, dense English that is favored in academic tests and papers. It is less varied and less readable. People perceive it as robotic, but they also perceive it as smart.
Here’s a quick test: Read the two text messages below and guess which one is by a human and which one is by ChatGPT.
“i’m not sure how to break this to you. there’s no easy way to put it…i can’t make the friday-night fun. sorry. however, feel free to text me during the evening if there are any lulls in conversation. anyway, hope ur exotic trip goes well. see u next term.”
“Hey! I’m really sorry, but I won’t be able to make it Friday night. I hope you all have a great time, and I’ll see you next term!”
A human reader would probably notice several patterns right away. The first message has more “textese”: It defaults to lowercase and includes the phonetic spellings “ur” and “u.” The second text has the capital letters, commas and standard spellings of exam English.
People are likely to have other impressions, too. Perhaps the first text feels more personal, and less sure of itself. Maybe the second text feels stiff, like it was written by an acquaintance. The first text contains different kinds of phrases and clauses, while the second text repeats the same clause structure four times.
On some level, human readers pick up on such patterns. Most people would say that the first text is by a human and the second is by AI. Indeed, the second passage was generated by ChatGPT.
Even this basic illustration shows that human English includes variation in word usage and grammatical structures that breaks up information and conveys personal meaning. AI English has less variation and more dense noun phrases. In research studies, these patterns appear repeatedly across genres and registers.
Some AI English patterns change
AI writing tools evolve, and large language models vary. GPT-5 was infamously cold-sounding compared with its predecessor GPT-4, for example.
But the patterns I am talking about are likely to persist. AI English favors what exam English has always rewarded: homogeneity and information density. And thus far, instructional tuning – training AI models to follow human instruction – only makes AI English less like human English. It doesn’t help that AI writing is part of what AI bots train on.
The net effect today is that AI English has been trained on English that is much narrower than actual, collective human English in practice. Humans, by contrast, don’t just use language that is probable, but language that is possible – based on the varied language use they have observed, their creative capacity for new utterances and their propensity to blend personal and impersonal language patterns.
AI models can produce conventionally correct, smart-sounding language, but that language lacks the variation, accessibility and creativity that make language human.
How AI and human English can coexist
If you can become more aware of differences between AI and human English, those insights can help you use both language forms more productively. Here are a few steps to take:
Use language labels. When describing a given passage, use labels like “dense,” “plain,” “interpersonal” or “informational,” not social labels like “sounds smart” or “sounds off.” In other words, explore the actual patterns in human and AI English and describe those patterns rather than your feelings about them.
Use AI tools selectively. Not only does human English have more accessible and varied patterns, but writing it yourself also engages the brain more than using AI language tools does. To help prevent AI English from overshadowing varied human language in the world, use AI selectively.
Use curated tools. Tools like small language models and programs that you can add to a web browser to root out bias, such as Bias Shield, can help people make principled choices about AI English use. Tools such as translingual chatbots can also bring to AI English much more of the global variation in human English.
Be conscious of what sounds smart, and why. A century and a half of exam English makes it easy to think that dense, impersonal writing patterns are smart. But like any language patterns, they have pros and cons. They are not particularly personable or readable, especially for diverse audiences, and they are not representative of the range of global English in use today.
There can be good reasons to use exam English, but not just because AI bots generate it, or because people have learned to perceive it as smarter.
At its best, AI English is a language database driven by statistics. It’s big, but it’s canned. History tells us that a full range of global human English gives people the greatest possibilities for expression and connection.
Laura Aull does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Millions of people have a heart rhythm disorder called atrial fibrillation, which causes the heart’s upper chambers, or atria, to beat chaotically rather than in a smooth, coordinated rhythm. For many, the symptoms can be mild with palpitations, fatigue or breathlessness, but the greatest danger is something far more serious – a stroke.
Tucked inside the heart is a tiny pouch called the left atrial appendage. When the heart beats erratically, blood can pool and sit still in this pouch instead of flowing normally – and still blood tends to clot. If one of those clots breaks free and travels to the brain, it can block blood flow and cause a stroke. Atrial fibrillation makes you about five times more likely to have a stroke. The question for researchers, then, has been whether that pouch could simply be taken out of the equation.
Researchers recently revealed one possible answer – a new technique, so far tested only in animals, in which a magnetically guided liquid is injected into the heart, hardening to permanently seal the pouch from the inside. Early tests in rats and pigs suggest that this method could one day lower the risk of stroke in people with atrial fibrillation.
Current treatments are effective but imperfect. Today, most patients are prescribed blood-thinning drugs, such as anticoagulants. These drugs reduce the ability of blood to clot and significantly lower the risk of having a stroke.
However, anticoagulants come with trade-offs. They increase bleeding risk, which can be dangerous for some patients – particularly older adults or those with other medical conditions such as stomach ulcers, hypertension, liver or kidney disease and cancer. Some people cannot tolerate them or must stop treatment because of bleeding complications.
Another option is a procedure called left atrial appendage occlusion, in which doctors implant a small device to plug the appendage. The most widely known devices are delivered using a catheter and expand like a small metal umbrella to seal the opening.
These devices can be effective, but they are not perfect. Because the appendage varies widely in shape and size between patients, rigid implants may not always create a complete seal. Sometimes a little blood can leak around the edges, and small clots can form on the surface of the device. The parts that hold the device in place can also damage the heart tissue.
The newly reported approach takes a radically different path. Instead of inserting a rigid implant, researchers inject a magnetically responsive liquid, sometimes called a magnetofluid, directly into the left atrial appendage through a catheter.
Once inside the cavity, an external magnetic field helps guide and hold the fluid in place, so it fills the entire appendage, even against the force of circulating blood. Within minutes, the liquid reacts with water in the blood and transforms into a soft “magnetogel” that seals off the cavity.
Because the material begins as a liquid, it can adapt precisely to the highly irregular shape of each patient’s left atrial appendage. In theory, this allows it to create a more complete seal than conventional rigid devices. The gel also appears capable of integrating with the heart’s inner lining, forming a smooth surface that may reduce the chance of a clot forming.
Encouraging early results
So far, the technique has only been tested in animals. Researchers first evaluated the concept in rats and then progressed to experiments in pigs, an important milestone in cardiovascular research.
In the pig study, the magnetogel remained stable inside the appendage for 10 months with no evidence of a clot or leakage. The heart’s inner lining grew over the surface of the gel, creating a continuous, apparently healthy layer.
When compared with conventional metal occlusion devices in pigs, the magnetogel produced a smoother lining and avoided the tissue damage associated with anchoring barbs. Equally important, the researchers did not observe harmful biological effects in the animals.
Pigs are widely used in cardiovascular research because their hearts closely resemble human hearts, being similar in size, structure and function. Showing that the magnetofluid works safely in a pig heart therefore provides a valuable proof-of-concept. But it does not yet guarantee that the technology will be safe or effective in people.
Despite the promising results, the technique remains firmly in the experimental stage. Before human trials can begin, researchers must demonstrate long-term safety, refine how the material is delivered and ensure it behaves predictably in larger animal studies.
There are also practical problems to solve. For example, the magnetic material can interfere with MRI heart scans, making parts of the heart harder to see. Problems like this need to be resolved before the approach can be used in patients. Medical devices must also undergo extensive testing and regulatory review, so it will probably be many years before the technique reaches real-world treatment.
If the technology ultimately proves safe and effective in humans, it could offer a new way to protect people with atrial fibrillation from stroke. A catheter-delivered liquid seal might provide an alternative for patients who cannot tolerate anticoagulant drugs and could overcome some of the limitations of existing occlusion devices.
Given that atrial fibrillation affects tens of millions of people worldwide, even modest improvements in stroke prevention could have a substantial impact on global health.
For now, the magnetic gel remains a laboratory innovation rather than a clinical therapy. But it highlights how advances in materials science and biomedical engineering are opening new possibilities for tackling one of cardiology’s most persistent challenges.
David C. Gaze does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Sinan Aşçı, Postdoctoral Researcher at the Anti-Bullying Centre, Dublin City University
The Irish government has signalled that it is exploring options to introduce age restrictions on social media use for under-16s. The proposal sits within the government’s new National Digital and AI Strategy 2030, which frames online safety and age verification as part of Ireland’s broader ambition to act as a European digital regulatory hub.
The proposals include a “digital wallet” age-verification system. Detailed technical specifications have not yet been published. However, digital identity wallet models typically work by allowing a user to verify their age once through a trusted authority. After that, they can share only a simple confirmation – such as whether they are over 16 – rather than handing over full identity documents. The government has not set out the final architecture, but the stated aim is to reduce repeated data sharing with individual platforms.
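No technical specification for the Irish wallet has been published, so the following is a purely hypothetical sketch of the selective-disclosure idea the paragraph describes: a trusted authority signs a minimal “over 16” claim once, and a platform later checks only that signed claim, never the underlying identity documents. All names and keys here are invented for illustration, and a real system would use asymmetric cryptography rather than a shared secret:

```python
import hmac, hashlib, json

# Hypothetical shared secret standing in for the authority's signing key.
AUTHORITY_KEY = b"demo-secret"

def issue_age_token(over_16: bool) -> dict:
    """The trusted authority signs a minimal claim after checking ID once."""
    claim = json.dumps({"over_16": over_16}).encode()
    sig = hmac.new(AUTHORITY_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def platform_accepts(token: dict) -> bool:
    """The platform verifies the signature and reads only the yes/no claim."""
    expected = hmac.new(AUTHORITY_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and json.loads(token["claim"])["over_16"])

token = issue_age_token(True)
print(platform_accepts(token))  # → True; the platform learns only "over 16"
```

The point of the design is data minimisation: the platform sees a boolean and a signature, not a birth date or a passport scan.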
Ireland is not alone in looking at age restrictions. Australia introduced a statutory ban, and other European countries are considering stricter access rules. But Ireland’s position is distinctive. It hosts the European headquarters of many major technology companies. It also plays a central role in EU enforcement of the Digital Services Act, which requires very large platforms to assess and mitigate systemic risks to minors.
The debate is not simply whether social media is good or bad for children. Blanket restrictions for under-16s raise an important question: are bans the most effective way to reduce harm? Or do they offer reassurance while leaving deeper problems – such as platform design – unchanged?
The Irish context
Ireland’s situation is significant because structural regulatory tools already exist at European level. Under the EU Digital Services Act, very large platforms must conduct systemic risk assessments, including risks to minors, and implement mitigation measures. Ireland plays a key role in this through Coimisiún na Meán, the country’s statutory media and online safety regulator.
Established under the Online Safety and Media Regulation Act 2022, the regulator has powers to oversee video-sharing platforms, develop binding online safety codes and investigate non-compliance by the technology companies based in Ireland, including under the EU Digital Services Act. This raises the question of whether new access restrictions are set to be introduced before these structural obligations are fully deployed.
Ireland’s proposed digital wallet pilot also intersects with EU plans for a European Digital Identity framework. The EU’s forthcoming European Digital Identity Wallet is intended to support digital proof of certain facts about a person, such as their age. No specific design for any Irish pilot has been produced. However, alignment with EU interoperability standards would be required if it is to integrate into the wider European system.
Evidence driving the debate
Ireland’s proposed ban is framed primarily in child-protection terms. These include concerns about youth mental health pressures, exposure to harmful or age-inappropriate material, and risks such as online grooming and exploitation. These concerns are not unfounded.
A 2020 review of research studies found associations between heavy social media use and anxiety or depressive symptoms. However, large-scale analyses suggest that average effects on wellbeing are small and highly variable. They can differ significantly depending on context and individual vulnerability. Risks exist, but they are not uniform.
Exposure to harmful content, including self-harm material, misogynistic narratives, or extremist content, is often shaped by how platforms recommend and amplify posts. Research from my colleagues in the DCU Anti-Bullying Centre shows how recommender systems can contribute to the circulation of toxic content.
Social media platforms are not neutral spaces. Their business models rely on maximising engagement and attention. Recommender systems prioritise emotionally charged material, and feedback mechanisms reward visibility and interaction.
These systems operate regardless of age. If a 17-year-old and a 15-year-old encounter harmful amplified content, the risk doesn’t go away for one user just because they’re over 16.
Age restrictions may form part of a broader safeguarding approach. However, on their own, they do not address recommender systems, addictive design features or the amplification of harmful material.
Risk and opportunity
At the same time, research consistently shows that risk and opportunity are intertwined. Children who are more active online may encounter greater exposure to harm. On the other hand, they may also gain more social connection and access to information. That complexity matters when designing policies intended to reduce harm without undermining participation.
Research on children’s own experiences suggests that many see social media as a normal part of their lives and use in-app safety tools to manage risks. Many also say they prefer safer platform design and clearer accountability rather than outright bans.
Children’s rights bodies in Ireland have similarly emphasised the need to balance protection with participation. They also point out that children’s views should be considered in the development of any pilot measures.
Ireland’s proposal reflects a broader shift away from relying solely on platform self-regulation. However, the key question is whether systems that amplify harmful content and reward attention can be effectively governed.
Ireland’s Digital and AI Strategy 2030 positions the country as both a host to global platforms and a digital regulatory leader. That dual role gives particular weight to how these measures are designed and enforced. Ultimately, the effectiveness of Ireland’s approach will depend not only on age thresholds, but on how robustly structural risk obligations are implemented.
Sinan Aşçı is employed as a postdoctoral researcher by DCU Anti-Bullying Centre on the Observatory project funded by the Department of Justice.
A huge wall of water and debris swept down the Teesta valley in the eastern Himalayas on October 3, 2023, causing widespread devastation and the tragic loss of over 50 people. This powerful flood in India was the result of a landslide that caused a glacial lake higher up the valley to spill over. This phenomenon is known as a glacial lake outburst flood, or GLOF.
In a 2025 study of glacial lakes across the Bolivian Andes, my colleagues and I found that 11 are highly susceptible to producing potentially hazardous GLOFs. Such lakes are increasing in size and number as glaciers retreat around the world. In Bolivia, we saw 60 new lakes form in just six years.
Over the same six-year period, glaciers in the region shrank rapidly. If they continue to melt at the same rate, Bolivia will be entirely ice free by the 2080s. Unfortunately, this is likely to be a conservative estimate.
We modelled the shape of the land surface underneath the existing ice to predict where lakes might form in future. We found more than 50 potential lake sites. Further monitoring will ascertain which of these emerging lakes might pose a risk to downstream populations or infrastructure.
In our study, we used high resolution satellite imagery to monitor glaciers and glacial lakes across the Bolivian Andes. We mapped glacier and lake boundaries at annual intervals between 2016 and 2022.
Bolivia is home to nearly one-fifth of the world’s tropical glaciers. These glaciers are important in their own right, particularly during the dry season, when meltwater provides essential supplies for human consumption, agriculture and industry. Glaciers also play a role in the cultural life and heritage of Indigenous peoples in this region.
We found an alarming rate of shrinkage among these glaciers. Between 2016 and 2022, the total surface area of glaciers in Bolivia decreased by nearly 10% – at an average rate of almost two square miles per year. If these glaciers continue to retreat at the same rate, there will be none left in the region by the 2080s.
Surface area (blue bars) and number of lakes (red line) by year. Jamie MacManaway, CC BY-NC-ND
Yet this represents a best case scenario. As glaciers get smaller, they shrink more rapidly, so the rate of decline will probably increase over time.
Such rapid deglaciation not only threatens water security but may also damage ecosystems. In the Andes, high-altitude wetlands known as “bofedales” store vast amounts of carbon and help absorb water too. Should they dry out as a result of decreasing water availability, they may release the carbon they have been storing – driving further warming of the atmosphere.
As glaciers melted and shrank across the region, the number and size of glacial lakes increased. Around 60 new lakes formed over the course of the study period. Many of these lakes were small and would be unlikely to produce a GLOF capable of doing significant damage, but 120 were considered large enough to represent a potential hazard.
We analysed these lakes to assess their susceptibility to producing a GLOF and found that 11 warranted further investigation. Ascertaining the potential consequences for downstream populations of an outburst flood from one of these lakes, for example, could help to inform future monitoring and mitigation efforts.
To reduce the risk of future catastrophe, local communities can prepare in a range of ways. That includes the physical construction of spillways and diversion canals, strategic land-use planning and the design of flood-resistant infrastructure. Disaster preparedness also requires social measures, such as education and awareness raising so that residents understand clearly communicated evacuation plans or early warning systems.
Modelled lakes in the Cordillera Real, Bolivia. Blue lakes are those predicted to form given continued glacier recession, while cyan lakes were correctly predicted by the model to form between 2000 and 2022. Red lakes are those predicted by the model which did not form. Jamie MacManaway, CC BY-NC-ND
Modelling the hollows
Using existing global glacier thickness data combined with our findings, we created a digital model representing the shape of the land surface underneath the ice. Glaciers are immensely powerful erosive agents and can carve deep hollows into the bedrock that they travel over. As the ice retreats, these hollows often fill with water and become lakes.
We found 55 potential future lake sites. Not all of these lakes will necessarily form. Shallow depressions may fill with sediment instead of water, while deeper ones may be drained by gorges too narrow to be detected by our modelling. The models would be even more reliable with higher resolution datasets, which are not currently available for the Bolivian Andes.
Future lakes across Bolivia may represent important sources of water – partially offsetting the consequences of losing glacial meltwater. Nevertheless, these lakes may be susceptible to producing GLOFs, so rapid and sustained international action to reduce the effect of climate change on the world’s glaciated regions is critical.
Source: The Conversation – UK – By Francesca Lessa, Associate Professor in International Relations of the Americas, UCL
Nearly 50 years have passed since Argentina’s former president Isabel Martínez de Perón was overthrown by a civic-military coup on March 24, 1976. A military dictatorship led by Jorge Videla, Emilio Massera and Orlando Agosti seized control of the country.
There had been five previous coups in Argentina between 1930 and 1966. But the regime that came to power in 1976, calling itself the “process of national reorganisation”, stood out for its systematic campaign of political violence and terror until the end of its rule in 1983.
The dictatorship violently dismantled political parties, trade unions, social and student movements and guerrilla opposition groups. Censorship was also widespread. The military controlled the media, supervised universities and persecuted thousands of intellectuals and artists or forced them into exile.
The extent of the atrocities committed under the dictatorship remains debated. The National Commission on the Disappearance of Persons (Conadep) documented 8,961 victims, who are known as the desaparecidos, while human rights organisations put this figure closer to 30,000.
Around 250,000 people were forced into exile to escape the dictatorship and roughly 500 children, abducted alongside their parents or born in detention, were illegally adopted and had their identities changed.
After its transition back to democracy, Argentina became a pioneer of accountability. In 1983, the newly inaugurated president, Raúl Alfonsín, created the Conadep and ordered the prosecution of nine military commanders for murder, unlawful deprivation of freedom and torture committed between March 1976 and June 1982.
The Conadep became the first truth commission in the world to complete a final, publicly available report in 1984. And the following year, five of the nine military commanders on trial (including Videla and Massera) were convicted. Argentina’s supreme court confirmed this verdict in 1986, officially acknowledging that systematic political repression had unfolded throughout the country.
However, progress soon slowed. Rising tensions within the armed forces led to the parliamentary sanctioning of a “full stop law” in 1986. This effectively halted investigations into atrocities committed by members of the security forces. The full stop law was followed by a “due obedience law” in 1987, which granted immunity to military personnel for crimes committed during the dictatorship. Two rounds of presidential pardons occurred in 1989 and 1990.
Survivors, their relatives, human rights groups and lawyers maintained their demands for accountability throughout this period. These efforts culminated in a 2005 supreme court decision that invalidated the impunity laws and reopened criminal trials for past atrocities.
Since then, 361 verdicts have been issued. Over 1,200 people have been convicted for their crimes, including Videla over the theft of babies from political prisoners. Almost 1,000 people are still under investigation. Argentina became a global leader in what has become known as the “justice cascade”, the worldwide shift towards increased accountability for past human rights abuses.
Progress under threat
Javier Milei, who became Argentina’s president in 2023, has taken steps to dismantle the country’s human rights policy. He has simultaneously waged a vicious smear campaign against the victims of the dictatorship and their relatives, as well as against human rights groups.
Since entering office, the Milei administration has downgraded Argentina’s National Secretariat for Human Rights to a sub-secretariat. This change in status means the secretariat now has fewer powers and resources, and has lost nearly 60% of its staff. It no longer participates actively in trials, witness support has been reduced and the recording of hearings has been halted.
Under Milei, there has also been a high rate of home detention sentences and acquittals. In 2025, 84% of those detained in Argentina for crimes against humanity committed under the dictatorship (425 out of 504 people) were being held under house arrest. And 51 of the 60 people whose cases were decided that year were acquitted.
Meanwhile, the ministry of defence has dismantled the team responsible for surveying the archives of the armed forces. This team had played a fundamental role in identifying those responsible for “death flights”, where drugged prisoners were thrown from aircraft into the Atlantic Ocean. Similar teams working across other ministries have likewise been dissolved.
And in January 2025 the navy was authorised to destroy documents that are held in its general archive. Some of these documents could contain information regarding crimes committed during the dictatorship. Federal judge Alicia Vence has since ordered the navy to preserve documents that could serve as evidence of dictatorship crimes.
The return of military officials to key decision‑making roles in defence and security is another notable setback. Argentina had implemented substantial reforms to promote democratic civilian control of the armed forces and reduce the military’s political involvement. But in 2025, army chief Alberto Presti was appointed as defence minister, making him the first active-duty officer to assume the role since 1983.
Argentina has suffered setbacks in its human rights policy before. The administration of Mauricio Macri, which governed between 2015 and 2019, introduced a similar pattern of defunding key policies combined with denialist discourses from government officials. But Milei’s actions stand out from those of his predecessors in both their speed and their depth.
Together with the prospect that Milei may sign a presidential pardon for military officers convicted of crimes against humanity on the eve of the anniversary, these developments raise concerns about the future of memory, truth and justice in Argentina.
What happens next will show whether this moment represents a temporary interruption or the beginning of a new chapter in Argentina’s struggle to safeguard the achievements secured over four decades of democracy.
Francesca Lessa’s projects “Operation Condor” and “Plancondor.org” received funding from University College London, the University of Oxford John Fell Fund, The British Academy/Leverhulme Trust, the University of Oxford ESRC Impact Acceleration Account, the European Commission under Horizon 2020, and the Open Society Foundations. Lessa is also the Honorary President of the Observatorio Luz Ibarburu, a network of human rights NGOs in Uruguay, as well as the principal researcher and the coordinator of the Plancondor.org collaborative project.
Lorena Balardini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
I thought when someone was bereaved it was the first couple of months and then everything was okay again. I was so naive. It is so different.
When I met Ella, it had been ten years since her father had died by suicide. She was 17 at the time, repeating important school exams. Although her parents had separated when she was young and contact with her father had been limited, they had started rebuilding their relationship.
She described that period as a happy one: her father was making more effort, both parents had new partners, and things felt “in a good place”. Then he died.
The aftermath was not contained to the weeks after the funeral. Ella missed half a school year as she struggled with the shock and strain of bereavement.
A decade later, she spoke to me about her grief in metaphors, as something ongoing rather than completed, a process that had shifted shape over time but had not ended.
Ella’s experience is not unusual.
Emily was 12 when her father died suddenly. She was present when it happened. Growing up in the Republic of Ireland in a family of five, she returned to school carrying not only the shock of his death but a growing sense that her grief was somehow too much.
“I just started hiding it because I thought that that was the right thing to do,” she told me, 42 years later.
What stayed with Emily was not only the grief itself, but the feeling that her sadness had been somehow inappropriate.
Ella and Emily’s stories were part of a research project which involved in-depth, in-person interviews with 13 adults in Ireland who had lost a parent or sibling while in primary or secondary school.
I heard versions of their stories again and again.
Years, and in some cases decades, after the deaths, participants were still grappling not only with grief, but with the fear that they had not grieved “properly”. Ella even told me:
I thought I was doing it wrong. Like I’d skipped a stage or something. Everyone else seemed to be moving on, and I just felt stuck. I kept thinking, ‘Is there something wrong with me?’
The people I spoke to were worried they were falling behind an invisible emotional timetable. That they had missed a “stage”. That they had failed to arrive at the elusive destination of “acceptance”.
Beneath these anxieties lies a powerful cultural story: that grief follows a recognisable path and, in time, comes to an end.
Yet for many of the people I spoke with, no matter how many years had passed, grief did not end.
It certainly changed, it sometimes resurfaced or intensified, particularly at unexpected moments (exams, milestones, becoming a parent themselves). But it did not disappear. The problem for many people I spoke with was not the enduring grief – it was the expectation that it should have finished.
Silence from fear
I have been thinking about grief in both a personal and professional capacity for the last 20 years. It began in 2006, after an experience early in my teaching career that, in hindsight, changed the direction of my thinking entirely. I realised then that many children come to school carrying far more than the bags on their backs.
It was a bright May morning in 2006 when I began a substitute teaching position in a primary school in Ireland, teaching a class of eight-year-olds. That morning, the principal took me aside to let me know that one pupil would be returning after the death of her mother by suicide. I remember the principal saying: “Good luck, I know you will be great.” How could she know I would be great at handling this situation? I certainly didn’t feel like I would be great.
I stood in the classroom, lesson plans in hand, heart in my throat, with no training, no manual, and no idea what to do. I saw the child immediately, her small shoulders hunched, her eyes averted. I never said anything to her about the death that day. I honestly did not know what to say and I was afraid that I would make things worse.
Instead, I tried to be extra kind. I smiled more at her. I offered extra academic help. I also overlooked behaviour I would normally address in the classroom. I now know that this can make things worse if peers see a student getting special treatment.
My silence, though well-intentioned, came from fear. And in hindsight, it came at a cost because I look back now and feel like I did not do all that I could have to support this young girl.
As the author C.S. Lewis wrote after the death of his wife: “No one ever told me that grief felt so like fear.” Faced with someone else’s grief, our uncertainty can often turn into silence. We wait and we avoid. We hope grief will run its course and that the person will eventually be “over it”.
This experience stayed with me long after I had left that school. It prompted questions that would become the foundation of my academic journey: why are we not taught how to support grieving children? Why is death, one of the most significant human experiences, absent from so many parts of our lives?
Research shows that grief is different for everyone and doesn’t follow a simple path. Shutterstock/Madiwaso
Two years later, when given the opportunity to complete an undergraduate dissertation, I chose to explore childhood bereavement and the role of the teacher. That early project led to several years of classroom teaching and, eventually, to my research exploring childhood bereavement in Irish primary and secondary schools.
I kept returning to the same unanswered questions about grief that I was encountering in everyday school life. What I began to realise is that grief is everywhere in our schools and in our lives – and yet it is largely invisible.
Why we expect grief to end
If we want to understand why we expect grief to end, we need to look beyond psychology textbooks and towards history, culture, and the stories we keep telling ourselves.
Our ideas about “normal” grief are deeply shaped by the world we have inherited. When psychology was emerging as a discipline in the late 19th century, it promised order and understanding in a world that had become profoundly unstable.
It is no coincidence then that many of our dominant grief models took shape in this moment. If we look back to Victorian Britain and Ireland, we can see that death was very much visible and part of everyday life. Mourning was public, it was prolonged, and it was socially recognised. Black clothing signalled to everyone that you had experienced a death. Memorial jewellery held hair or photographs of the dead. It was not uncommon to pose the dead and take photographs of them.
Grief had a shared language, but most importantly, it had a permitted place in public life. People were not expected to hide their sorrow or to rush through it to the finish line. But that visibility did not survive the 20th century.
When two world wars arrived, they brought death and grief with them on an unprecedented scale.
A sea of poppies: the art installation Blood Swept Lands and Seas of Red at the Tower of London in 2014 featured 888,246 ceramic poppies, each representing a British military fatality of the first world war. Shutterstock/BBA Photography
It was not surprising, then, that the response to this experience was a sense that the pain of grief had to be contained. Public mourning was often replaced by stoicism and silence.
In this context, grief came to be managed rather than expressed, echoing stoic traditions that view excessive sorrow as disruptive to one’s responsibilities. Such restraint has been defended in philosophical and religious ethics as promoting gratitude and has been known to provide comfort to some experiencing grief. But it does not provide a universal model for responding to grief.
Cross-cultural bereavement research shows significant variations in how emotions are displayed and supported publicly, suggesting that stoic containment of grief reflects a cultural model, rather than an inevitable response to grief.
So, it was not surprising to me that some people I spoke to mentioned wanting to hide their grief.
Caoimhe, for example, grew up in the Republic of Ireland in a family of five which included her parents and two brothers. Caoimhe’s father died when she was nine after being ill for four years. Caoimhe was in primary school at the time of the death. When I met her, it had been 41 years since her father’s death.
She said she felt that, even now, she has not dealt with her grief. Her family did not acknowledge her father’s death and spoke about him in a way that made her feel he was still alive, which left her feeling she had to suppress her grief:
I was very much aware that I did not want to cause grief for my mother so I think I did withdraw a little bit. I did pull away from friends and spend a lot of time in my room just thinking and wanting to be on my own.
Emily too talked about how when she returned to boarding school, she felt like she had to hide her grief:
It wasn’t something that I was encouraged to talk about, I learnt very quickly when I came back that none of the nuns and none of the adults were going to engage with me at any level, so even as a child I realised this is how everyone deals with this and just get on with it.
Grief framed as ‘work’ and ‘stages’
In his seminal 1917 essay Mourning and Melancholia, Sigmund Freud suggested that healthy grieving required detachment from the deceased. The bereaved, he argued, must gradually withdraw emotional energy from the person who has died so that life could continue.
Grief was framed as work: something difficult, but purposeful, with a clear endpoint. Later, Erich Lindemann’s research with survivors of mass tragedy reinforced the idea that grief followed recognisable patterns and could be managed through “grief work”.
These ideas found their most enduring cultural expression in Elisabeth Kübler-Ross’s five stages of grief: denial, anger, bargaining, depression, and acceptance. Although originally developed in her work with people facing a terminal diagnosis, the model was quickly adopted as a universal roadmap for bereavement. It was comforting because it reassured us that grief would unfold in order and it would, eventually, end.
It became prominent in popular culture: we see it in an episode of The Simpsons (Season 4, Episode 16), as Homer’s enforced abstinence from beer results in behaviour that mirrors the stages of grief – a pattern Lisa recognises and explicitly identifies as the five-stage model. Even Bridget Jones is not immune. In Mad About the Boy, Bridget’s friends gather around a wine bar table and gently inform her that she is nearing the “final stage”: acceptance. The moment is played lightly, but the message is clear – grief is something you progress through, and there is an end stage.
The five so-called ‘stages’ of grief: shock and denial, anger, bargaining, depression and acceptance. Shutterstock/Madua
It is easy to roll your eyes at scenes like this, but their appeal runs deep. In moments of profound powerlessness, stage models give us a sense of control. They offer a map when we feel lost, and the promise of an ending when the pain of grief feels endless. Wouldn’t it be comforting if grief really did come with a calendar? A final checkpoint. Roll the credits. Life resumes.
The problem is not that these models were created, but that they became expectations. When grief returns, lingers, or refuses to soften, people often turn the discomfort inward.
Let’s go back to Ella who thought, “when someone was bereaved it was the first couple of months and then everything was okay again.”
The harm lies not in grieving deeply, but in believing that continuing to feel a connection or a bond with the deceased is in itself, a failure.
It is against this backdrop of silence, stoicism, and stage-based thinking that more contemporary grief theories began to emerge post-1990. It is important to remember that they did not emerge in order to deny the pain of death, but to offer language for many people who described their grief in different ways – such as holding on.
Why ‘letting go’ isn’t the point
By the 1990s, grief researchers had begun to ask themselves a different question. What if the problem was not that people were failing to “let go”, but that our theories had actually misunderstood what grieving really feels like?
Out of this shift came the idea of “continuing bonds”, developed by psychology researchers Dennis Klass, Phyllis Silverman and psychiatrist Steven Nickman.
Their work named something many bereaved people already knew intuitively: relationships do not simply end when someone dies. Instead, they change. People carry the dead forward through memory, ritual, internal conversation, and the quiet ways they shape their lives around the death.
For others, the bond continues in more private ways, kept hidden not because it is unhealthy, but because grief itself can feel like something you are meant to hide away.
Mia grew up in the Republic of Ireland in a family of five which included her parents and two older brothers. When Mia was 14, her 22-year-old brother died in an accident. Mia’s other brother was 26 at the time and was suffering from mental health problems. Mia felt as if, in some ways, she was suffering a double bereavement for both her brothers.
The overarching emotion that emerged from Mia’s interview was that of anger: anger towards her parents and school for their lack of support during this difficult period. Mia felt that as a result of trying to cope at home, she began to struggle with her mental health: “I became quite depressed … I suppose I hid it very well.”
This oscillation between appearing “okay” and feeling overwhelmed is captured in the dual process model of grief, developed by bereavement researchers Margaret Stroebe and Henk Schut. Rather than progressing neatly from loss to recovery, the model suggests that people move back and forth between confronting their grief and setting it aside in order to function.
This model chimes with the experience of another of my interviewees. Sophie grew up in southern Ireland in a family of five: her parents, her older brother and her older sister. She experienced four bereavements between the ages of eight and 14. When Sophie was eight, her grandmother died; when she was nine, her cousin, to whom she was very close, died suddenly. Her grandfather died when she was 13, and the following year her brother died in an accident at only 22.
Sophie was in secondary school when the death occurred. She felt that the death of her brother had the most impact on her and recognised that her three previous experiences of death may have prepared her for it in some way. It had been 14 years since the death of her brother when I met her. Sophie discussed her experience without getting emotional. She felt that her family coped well with the death as she received a lot of familial support, particularly from her father who was instrumental in seeking support from Barretstown, a bereavement support service in Ireland. She recognised that her grief is still there, but comes in waves during different stages of her life:
The first six months I had insomnia … I couldn’t sleep and that kind of improved after the first six months and I got back into a routine but I remember being triggered off at certain points … I went through a period where I was getting very upset that he wasn’t around. It subsided and then it was triggered off you know birthdays and stuff or transitions … I think then going to college was particularly a big one because transitioning into adult … being in the place and stage that he was when he died.
The story we tell ourselves
So grief does not disappear. It ebbs and flows, often resurfacing unexpectedly, long after others assume it should have settled.
Psychologist Robert Neimeyer argues that bereavement does not just remove a person from our lives; it shatters our assumptions about how the world works because the future we imagined for ourselves is suddenly gone.
This idea was mentioned, unprompted, by many women who took part in my research and had lost a father: they wondered who would walk them down the aisle when they got married. The future story they had told themselves about how their life would unfold had been torn up and needed to be rewritten. The sense that life is predictable or fair is disrupted.
Earlier, the theorist Colin Murray Parkes described this as the loss of an “assumptive world”. For many people, grief becomes the slow, uneven work of trying to rebuild a narrative that can hold what has happened. Ella, in my study, captured that feeling of rupture when she said:
A person full of insecurities, full of sadness; I know that the world isn’t this silly dreamy place that it might have been before … you have to look after yourself a lot more.
This is why grief so often returns at moments of transition: birthdays, exams, weddings, or becoming a parent. It is not that we have not moved on; it is that the story keeps changing.
This helps explain what the singer Bob Geldof described in his reflections on death: that grief does not simply fade, but can “erupt” without warning, even years later. In this sense, grief is not a single emotional state to be resolved, but a recurring human experience that surfaces as life continues.
These theories help explain why grief lingers and returns, but they also point us back to something more fundamental: most of us encounter grief not through theory, but through formative loss that shapes how we come to understand death at all.
The first death
I was 11 when I experienced my first bereavement: the death of my grandfather, my father’s father. My memories of him are bathed in warmth, he was a gentle, soft-spoken man, kind to his core. The eldest of nine children himself, he went on to raise nine of his own, my father being the first.
He always made time for his grandchildren (of which he had many) with small, meaningful gestures. When we visited, he would often reach into his pocket and produce a square of chocolate or a shiny pound coin (what would be the equivalent of a €1 coin today). Those simple gifts felt like treasures.
Though rooted in traditional rural life, he was ahead of his time in many ways. Family lore tells us that he insisted on pushing the pram when his children were small, an act that scandalised my grandmother, who maintained that such things simply were not done by men of his generation. But he did it anyway. That was the kind of man he was: grounded, thoughtful, and quietly progressive. His death was my first real encounter with grief, and though I did not have the words for it then, I now recognise that it left an imprint on how I understand grief.
I remember being allowed to visit my grandfather in the two weeks before he died. He had developed pneumonia and was struggling to breathe, a consequence of years spent smoking at a time when the true dangers were not known. It was difficult to see him that way, frail and gasping, and I remember finding it upsetting. But looking back now, I see the quiet wisdom in what my parents did.
They gave us the choice to visit, gently involving us in what I now recognise as the process of anticipatory grief. It was their way of helping us prepare, not by shielding us from death, but by offering us a way to begin understanding it.
What stands out most clearly from that time is the gentle support and encouragement of my father. He has never been afraid to talk about death. His calm presence and quiet faith offered us a kind of anchor, not through denial or platitudes, but through openness, steadiness, and trust in something greater. His belief did not erase the pain, but it gave it a shape, a space to be held. In a moment that could have felt frightening or isolating, his comfort gave me strength.
When my grandfather died, I remember his body being brought home and laid out in a large front room, as was tradition in rural Ireland. Through my adult eyes now, I can see the beauty in that ritual, a final gesture of love and inclusion.
But as a child, I was afraid. It took time to summon the courage to go into that room and see him laid out. I remember the stillness of the room, the unfamiliar scent in the air, the cold stiffness of his fingers. Time felt suspended.
I cried a lot, as the truth settled in: the people we love can die. My parents could die. My siblings. Even me. That was the moment when the permanence of death first imprinted itself on my young mind.
The individuality of grief
What continues to strike me in both my personal reflections and research is how profoundly individual the experience of grief can be, even when shared within the same household. In writing about the death of my grandfather, I decided to speak with my siblings to understand how they remembered that time.
One of my siblings, who is characteristically less openly emotional, began to cry as we spoke. It was a reaction neither of us had anticipated. They recalled how, even though their belief in God and religion had disappeared, they had spent weeks praying that our grandfather would recover. “I used to pray every night,” they said. “And when he died, I just stopped. What was the point?” Later in the conversation, they added quietly: “Our parents just didn’t talk about him afterwards. They just didn’t talk about him.”
I was struck by how different their experience was from mine. I remembered that period as one of openness and inclusion, marked by quiet support, meaningful rituals, and humorous stories. For my sibling, however, it was defined by silence and by the confirmation of their disillusionment with religion.
The contrast was sharp, but it was an important reminder of the individual reality of grief. We had lived through the same bereavement, in the same house, with the same emotional support from our parents, and yet we had constructed completely different narratives around it.
It was the same in my research. Over and over again, participants described the vastly different ways that grief manifested within their families.
What this teaches us is that grief is not experienced equally and it is not synchronised. Each person carries a different understanding of the person who has died, a different level of emotional maturity, and a different internal process.
We must resist the urge to generalise. We cannot assume that because one person in a family appears to be “doing okay,” their sibling must be too. Nor can we assume that a lack of visible distress equates to emotional resilience. Grief is deeply personal, shaped by both internal and external factors, and influenced by what is spoken, what is avoided, and what is felt alone in silence.
What happens if there is no end point?
So what happens when we stop expecting grief to end? The five stages endure because they promise an endpoint when grief makes time feel suspended. It is not surprising that for many of us in moments of profound grief, that promise can feel like a lifeline.
What is striking is that those who write most honestly about grief, those who speak from inside it rather than about it from a distance, rarely describe an ending at all. Freud himself, so often associated with detachment, wrote something very different later in life. In a 1929 letter to his friend Ludwig Binswanger, written after the death of Freud’s daughter Sophie, he acknowledged that grief does not resolve:
We know that after such a loss the acute state of mourning will subside … but we also know that we shall remain inconsolable and will never find a substitute.
The pain may soften, Freud suggested, but the loss is never replaced. The love endures, and so does the absence.
Grief does not neatly resolve, and great thinkers have recognised this. Viktor Frankl, a neurologist, psychiatrist, and Holocaust survivor, wrote in his memoir, Man’s Search for Meaning, about his own experience of suffering, observing that “if there is meaning in life at all, then there must be meaning in suffering,” and that “in some way suffering ceases to be suffering at the moment it finds a meaning”.
Grief is not a detour from life to be exited as quickly as possible; it is a form of suffering that can become part of the fabric of a meaningful life. Frankl also reminded us that everything can be taken from us but one thing, the last of the human freedoms, “to choose one’s attitude in any given set of circumstances, to choose one’s own way”.
This is a reminder that how we carry grief matters even when the pain remains. This aligns with what many people who have lived with grief tell us: the pain may become less intense over time, but the love endures.
Nearly a century after Freud, the songwriter Nick Cave wrote publicly about grief following the death of his son, describing it not as something to be mastered or completed, but as a state of profound powerlessness. Grief, he wrote, “is not something you pass through, as there is no other side.” What remains is not closure, but humility.
A recognition that love does not disappear when someone dies, and that the ache left behind is not evidence of failure, but of attachment. Cave speaks of grief as something that changes shape over time, becoming less raw perhaps, but no less real.
Bob Geldof echoed this in his reflections on the death of his daughter Peaches, saying that “time accommodates” the grief, but it “is ever present.”
The distinction matters. Getting on with life does not require leaving the dead behind. It means learning to carry grief alongside love, and absence alongside presence.
Ongoing bonds with the dead are not signs of denial or pathology, but very often the way people survive. When grief is allowed to be ongoing, when the dead can be spoken about, something shifts. People stop measuring themselves against an imagined timeline and they stop waiting to “graduate” from grief.
Perhaps the discomfort we feel around enduring grief says less about the bereaved and more about the rest of us. Grief unsettles us because it reminds us of life’s fragility and our own mortality.
But if we allow ourselves to move away from the idea that grief is a problem to be solved, we make room for a more honest understanding of grief. It is likely that grief does not end because love does not end. What changes is not the bond, but how we learn to live with it.
Aoife Lynam does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Nigel Farage has accused YouGov of being “deceptive” after the polling company consistently showed Reform with less support than other surveys. He has claimed the company broke transparency rules set out by the British Polling Council over how it presents headline figures. As a result, YouGov has agreed to publish more data in future.
The chart below compares monthly voting intentions for Reform in a poll of polls derived from 14 different agencies with voting intentions for the party from YouGov alone. The comparison runs from the start of 2025 to March 2026. At first glance, it appears that Farage is right: the YouGov figures sit below the poll of polls for most of that period.
However, if we compare the two series directly, the poll of polls average for Reform over the past 15 months is about 28% in voting intentions, while for YouGov it is about 26%. This difference of two percentage points is well within the margin of error associated with polling; what statisticians describe as “not statistically significant”.
Vote intentions for Reform, poll of polls and YouGov
The margin of error arises because polls try to measure support for the party across Britain from a survey of only 1,500 to 2,000 respondents. A good survey tries to replicate the diversity of the country in voting intentions, but it may differ from the country-wide support for the party because of random chance.
There is no real difference between the two series in the chart once this chance element is taken into account. Even a truly representative sample carries this random element, which the margin of error quantifies. If there are problems with the sample, however, the results will be biased as well.
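As a rough sketch of the arithmetic behind that margin of error (a sketch only: the sample size of 1,750 is the midpoint of the range quoted above, and 27% is Reform’s approximate level of support), the standard 95% margin of error for a proportion works out at around two percentage points:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures: ~27% support, sample of 1,750 respondents
moe = margin_of_error(0.27, 1750)
print(f"{moe:.1%}")  # about 2.1%
```

On these assumptions, any gap of roughly two points or less between two polls could plausibly be down to sampling chance alone.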
One such problem is accurately representing ethnic minorities in the sample, because they are less likely to respond to requests to do a survey. If a particular group is underrepresented, this can bias the results. To compensate for this problem, pollsters like YouGov use weighting, which involves giving more weight to some respondents than to others.
For example, the 2021 census shows that 4% of the population in Britain identifies as ethnically black. If only 2% of survey respondents fit this description, pollsters compensate by giving each of these respondents a weight of two in the analysis, which restores the group’s share to 4%.
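A minimal sketch of that reweighting, using the illustrative figures in the text (a group measured at 2% of the sample against a 4% census share); real pollster weighting schemes adjust many variables at once:

```python
def weight(sample_share, population_share):
    """Weight per respondent so a group's weighted share matches the population."""
    return population_share / sample_share

# Illustrative figures from the text: black respondents make up
# 2% of the sample but 4% of the 2021 census population.
w_black = weight(0.02, 0.04)   # each of these respondents counts twice
w_other = weight(0.98, 0.96)   # everyone else counts slightly less

# In a sample of 2,000 respondents, 2% is 40 people.
weighted_black = 40 * w_black
weighted_total = 40 * w_black + 1960 * w_other
print(round(weighted_black / weighted_total, 4))  # restores the 4% census share
```

Note that the rest of the sample is weighted slightly down at the same time, so the total weighted sample size stays at 2,000.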
Different agencies use different weighting schemes, which gives rise to variations in the answers they get to surveys. This is acceptable, providing these differences are not too large (not statistically significant).
Another factor may be the questions asked. This is where YouGov’s discrepancy arises.
YouGov has said it asks respondents first about general voting intention, and then specific constituency-level voting. This, the company says, takes account of tactical voting and is a more accurate representation of how a general election would play out.
There are clear differences between responses to the national and constituency questions – notably, more “don’t knows” in the latter, which means more uncertainty in the constituency responses.
My explanation of this is that when people are thinking about their own neighbourhood, they realise that voting involves a serious decision which can change their lives. When they respond to the national question, they are more likely to use it as a protest against the government and other parties.
Is Reform losing ground?
One reason Farage may be upset is that there is clear evidence Reform has been losing ground in the polls since the start of the year. This can be seen in the chart below, which shows a poll of polls of weekly voting intentions for the five major national parties in Britain since the July 2024 general election.
In the early weeks of 2025, Reform moved ahead of both Labour and the Conservatives – reaching 30% in vote intentions by May that year. The party’s support hovered around this figure until the start of 2026, when it began to decline. In October 2025, Reform was at 31% in voting intentions, but by March this year it was at 27%.
Vote intentions for the five major parties in Britain since the general election
Voting intentions since the July 2024 general election. P Whiteley, Pollbase, CC BY-ND
Polling is important to all politicians, despite the fact that many criticise it if they appear to be losing ground. Farage is probably more attentive than most because Reform’s support has been so volatile over time – and what goes up can come down.
With its success in local government elections, Reform is now exposed to much closer scrutiny than it was in the past. Some news stories that may explain its declining popularity include Reform-controlled councils raising council tax after pledging to “reduce waste and cut your taxes”, and the party receiving the largest political donation from a living individual in British history. Neither of these bodes well for a party claiming to represent working-class voters.
Farage (along with Kemi Badenoch) may also be regretting his rush to support the US and Israel in their war against Iran. A recent poll showed that only 28% of UK respondents supported the war, while 49% opposed it.
In the past, Farage has claimed to be a close friend of Donald Trump, but he talks about this much less these days – the US president’s approval ratings are now very poor in the UK.
Both Reform and the Conservatives are on the wrong side of public opinion on this issue, something which is likely to haunt them in the May elections this year if the war continues to damage the economy.
Paul Whiteley has received funding from the British Academy and the ESRC.
Neither side showed any indication that the planned five-day cessation of operations would be anything other than temporary, and they warned that any violation would be met with reciprocal strikes.
Already the conflict has seen hundreds killed, with a blast at a drug rehabilitation center in Kabul on March 16, 2026, killing more than 400 people, according to Afghanistan’s Taliban government.
The conflict has been largely kept off the front pages by the war in Iran. But as an expert on Pakistan’s foreign policy and security, I believe the fighting has the potential to further destabilize the region.
Why are Pakistan and Afghanistan fighting now?
The current conflict between Pakistan and Afghanistan is not a sudden rupture of relations between the two countries, which share a 1,640-mile (2,640 km) border called the Durand Line.
Rather, the flare-up is a result of an intensification of long-simmering, historical security concerns along the Durand Line. The immediate trigger lies in Pakistan’s growing concern over cross-border militant activity, particularly from groups such as the Tehreek-e-Taliban Pakistan, which Islamabad believes operate from sanctuaries inside Afghanistan.
However, Islamabad’s expectation that the Taliban government would rein in these groups did not materialize. Instead, there was a perceptible rise in militant attacks within Pakistan, accompanied by Kabul’s reluctance or inability to act decisively against Tehreek-e-Taliban Pakistan.
Complicating this landscape further is the evolving character of the threat environment for Pakistan. In 2025, Pakistan was involved in a short war with historical rival India – the most intense fighting between the two countries for nearly 30 years.
In response, Pakistan has reportedly undertaken countermeasures, including airstrikes targeting drone infrastructure linked to militant networks inside Afghanistan.
All this points to a widening battlespace, where new technologies make it easier to escalate in indirect and deniable ways.
This is not merely a bilateral border crisis but a layered security contest shaped by cross-border militancy, emerging technologies and competing threat narratives.
The convergence of Pakistan’s growing willingness to respond with physical force, the Afghan Taliban’s assertion of sovereignty and the absence of a mutually agreed framework for border management continues to drive episodic escalation rooted in structural mistrust.
What is the broader history of Pakistan-Afghanistan relations?
Historically, Pakistan-Afghanistan relations have oscillated between uneasy cooperation and strategic suspicion, shaped throughout by unresolved territorial, ideological and geopolitical dynamics.
At the heart of it lies a dispute over the Durand Line, which Afghanistan has never formally recognized as an international border. This has resulted in persistent tension in their bilateral relations since Pakistan’s independence in 1947.
However, the Soviet invasion of Afghanistan in 1979 marked a critical turning point. Pakistan became a front-line state supporting the Afghan jihad against invading Soviet forces.
This entrenched cross-border militant networks and blurred the boundary between state policy and nonstate actors, resulting in dynamics that continue to shape the region.
The post-2001 period was marked by fraught relationships between Pakistan and successive U.S.-backed Afghan governments, particularly over allegations of Pakistani proxy support for Islamist groups in Afghanistan.
Many thought the Afghan Taliban’s return to power in 2021 would resolve this tension. But instead, it reconfigured it.
While ideological affinities continue to exist between the two nations, they have not translated into any sort of strategic alignment – especially on questions of militancy and border control.
What are the implications of the conflict for the region?
The implications of Pakistan-Afghanistan tensions are significant and extend well beyond bilateral frictions. They intersect with broader questions of regional stability, militancy and great power competition.
I believe there are four direct implications:
First, the persistence of ungoverned or contested spaces along the Pakistan-Afghan border risks creating an enabling environment for transnational militant groups. This has real implications not only for Pakistan’s internal security but also for regional actors concerned about spillover effects.
Second, instability along the Pakistan-Afghanistan border complicates regional connectivity and economic integration initiatives, including projects linked to broader Central and South Asia. A volatile western frontier constrains Pakistan’s ability to act as a regional stabilizer and a safe conduit for regional trade and energy corridors.
Third, for outside interested parties like the U.S., the situation underscores the limits of disengagement from Afghanistan. While Washington’s military withdrawal marked the end of direct involvement, the persistence of militancy and the risk of regional destabilization ensure that Afghanistan remains strategically relevant not only for the U.S. but for other major powers as well.
Finally, I see these tensions as highlighting a broader pattern: The post-2021 Afghanistan remains internally consolidated but externally contested. Its relationships with neighbors, particularly Pakistan, will be central in determining whether the region moves toward managed stability or recurring cycles of escalation.
Rabia Akhtar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.