Chemistry is stuck in the dark ages – ‘chemputation’ can bring it into the digital world

Source: The Conversation – UK – By Lee Cronin, Regius Chair of Chemistry, University of Glasgow

In Chemify’s laboratories, AI-proposed molecules are compiled into chemical code which robots execute and test in real time. Chris James/Chemify, CC BY-NC-SA

Chemistry deals with that most fundamental subject: matter. New drugs, materials and batteries all depend on our ability to make new molecules. But discovery of new substances is slow, expensive and fragile. Each molecule is treated as a bespoke craft project. If a synthesis works in one lab, it often fails in another.

The problem is that any single molecule could have an almost infinite number of routes to creation. These routes are published as static text, stripped of the context, timing and error correction that made them work in the first place. So while chemistry is often presented as one of the most advanced sciences, its day-to-day practice remains surprisingly manual.

For centuries prior to the emergence of modern chemistry, alchemists worked by hand, mixing substances, adjusting conditions by feel, passing knowledge from teacher to student while keeping many secrets. Today’s chemists use far more analytical tools, yet the core workflow has barely changed.

We still design molecules manually using the rules of chemistry, then ask highly trained humans to translate these ideas into reality in the laboratory, step by step, reaction by reaction.

At the same time, we are living through an explosion in artificial intelligence and robotics – and chemists have rushed to apply these tools to molecular discovery. AI systems can propose millions of candidate molecules, rate and optimise them, and even suggest reaction pathways.

But frustratingly, these tools frequently hallucinate chemicals that cannot be made, because (unlike in the case of proteins) no one has yet captured all the practical rules for making molecules digitally.

Chemistry cannot become truly digital unless it is programmable. In other words, we need to be able to write down, in a machine-readable way, how to assemble molecules – including instructions, conditionals, loops and error handling – and then execute these instructions on different hardware in different places with the same outcome.

Without a language that allows chemistry to be executed, not just described, today’s cutting-edge AI tools risk generating little more than plausible-looking illusions of new chemical substances. This is where using the computer as an architecture to build a digital chemistry system, or “chemputer”, becomes imperative in my view.

Digital chemistry at the University of Glasgow.

Before computers, calculation was manual and mechanical. People used slide rules, tables and specialist devices built for specific tasks. But when Alan Turing showed that any computable problem could be expressed as instructions for a simple abstract machine, computation was liberated from having to be done on specific hardware – and progress became exponential.

Chemistry has never made that jump. Akin to chefs using individual methods to achieve the perfect souffle, researchers around the world have different ways of preparing chemicals. So while automation in chemistry exists, research remains largely artisanal in nature.

An AI can design a thousand hypothetical drugs overnight. But if each one requires a human chemist to manually work out how to make it because the molecules generated are not constrained by the real-world rules of chemistry, we have simply moved the bottleneck. Design has gone digital, making has not.

Chemistry by computers

To properly digitise chemistry, we need a programmable language for matter to encode these real-world rules. This idea led me, with colleagues in my research laboratory at the University of Glasgow, to develop the process of chemputation back in 2012.

We built a concrete abstraction of what a chemical code would look like – with steps such as “add/subtract matter then add/subtract energy”. By translating these steps into binary code, it was possible to build the components of a chemputer.

The premise is simple: chemistry can be treated as a form of computation carried out in the physical world. Instead of publishing chemistry as prose, it is published as executable code, as described in our new preprint. Reagents are data. Operations like mixing, heating, separating and purifying are instructions. A range of machines, such as those shown in the image below, play the role of processors.

Elements of the chemputation process.
Lee Cronin, CC BY-NC-SA

Once chemistry becomes programmable, we expect many things to change. Reproducibility improves because processes are no longer interpreted by humans. Sharing becomes meaningful because a synthesis can be run, not re-imagined. Importantly, programmable chemistry allows feedback loops for error correction, with sensors monitoring reactions in real time.
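The “reagents are data, operations are instructions” idea can be made concrete with a minimal sketch. Everything below is hypothetical and invented for illustration – the `Step` type, the operation names and the simulated sensor mimic the shape of a chemical program, not any real chemputer language or hardware interface:

```python
# Toy sketch of a "chemical program": steps are machine-readable data,
# and an executor runs them with a sensor check for error correction.
# All names and values here are illustrative, not a real robot API.
from dataclasses import dataclass


@dataclass
class Step:
    op: str       # instruction: "add", "heat", "separate", ...
    target: str   # vessel the operation acts on
    params: dict  # reagents, volumes, temperatures, durations


# A synthesis written as data rather than prose.
procedure = [
    Step("add", "reactor", {"reagent": "A", "volume_ml": 10}),
    Step("add", "reactor", {"reagent": "B", "volume_ml": 5}),
    Step("heat", "reactor", {"temp_c": 60, "duration_min": 30}),
    Step("separate", "reactor", {"phase": "organic"}),
]


def read_temperature(vessel: str) -> float:
    """Stand-in for a real sensor; here it simply returns the set-point."""
    return 60.0


def execute(procedure: list[Step]) -> list[str]:
    """Run each step, checking sensor feedback where it matters."""
    log = []
    for step in procedure:
        if step.op == "heat":
            # Feedback loop: a real chemputer would re-check the sensor
            # and correct drift; here we verify the reading once.
            actual = read_temperature(step.target)
            if abs(actual - step.params["temp_c"]) > 2:
                raise RuntimeError(f"temperature out of range: {actual}")
        log.append(f"{step.op}({step.target})")
    return log


print(execute(procedure))
```

The point of the sketch is the check inside `execute`: because each step is data, a machine can compare the intended condition against a sensor reading and abort or correct, which a prose description of a synthesis cannot do.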

Self-driving laboratories

Our ambitions for chemputation took a major step forward in June 2025 when Chemify, our University of Glasgow corporate spin-out, launched the world’s first chemifarm. At this facility in Glasgow’s Maryhill district, the process of chemputation is applied to making new molecules for drug and materials discovery.

It uses AI and robotics to enable the entire system to “self-learn”, and thus get better at making more advanced molecules over time. Discovery becomes an iterative, programmable process rather than a linear gamble.

This fits with the wider emergence of “self-driving” laboratories – robotic labs we pioneered that use AI and automation to enhance the speed and breadth of research.

Chemistry began as alchemy – a human art shaped by intuition and mystery, making potions, manipulating precious metals and building the first laboratory equipment. It has since grown into a rigorous science, yet never fully escaped its manual roots. If we want chemistry to keep pace with the digital age, especially in an era dominated by AI, we must now finish that transition.

The Conversation

Lee Cronin is the CEO of and a shareholder in Chemify, is the Regius Professor of Chemistry at the University of Glasgow, and receives funding from many organisations including the UKRI Engineering and Physical Sciences Research Council.

ref. Chemistry is stuck in the dark ages – ‘chemputation’ can bring it into the digital world – https://theconversation.com/chemistry-is-stuck-in-the-dark-ages-chemputation-can-bring-it-into-the-digital-world-272610

Greenland is rich in natural resources – a geologist explains why

Source: The Conversation – UK – By Jonathan Paul, Associate Professor in Earth Science, Royal Holloway, University of London

Greenland’s concentration of natural resource wealth is tied to its hugely varied geological history over the past 4 billion years. Jane Rix/Shutterstock

Greenland, the largest island on Earth, possesses some of the richest stores of natural resources anywhere in the world.

These include critical raw materials – resources such as lithium and rare earth elements (REEs) that are essential for green technologies, but whose production and sustainability are highly sensitive – plus other valuable minerals and metals, and a huge volume of hydrocarbons including oil and gas.

Three of Greenland’s REE-bearing deposits, deep under the ice, may be among the world’s largest by volume, holding great potential for the manufacture of batteries and electrical components essential to the global energy transition.

The scale of Greenland’s hydrocarbon potential and mineral wealth has stimulated extensive research by Denmark and the US into the commercial and environmental viability of new activities like mining. The US Geological Survey estimates that onshore northeast Greenland (including ice-covered areas) contains around 31 billion barrels of oil-equivalent in hydrocarbons – similar to the US’s entire volume of proven crude oil reserves.

But Greenland’s ice-free area, which is nearly double the size of the UK, forms less than a fifth of the island’s total surface area – raising the possibility that huge stores of unexplored natural resources are present beneath the ice.

Greenland’s concentration of natural resource wealth is tied to its hugely varied geological history over the past 4 billion years. Some of the oldest rocks on Earth can be found here, as well as truck-sized lumps of native (not meteorite-derived) iron. Diamond-bearing kimberlite “pipes” were discovered in the 1970s but have yet to be exploited, largely due to the logistical challenges of mining them.

Geologically speaking, it is highly unusual (and exciting for geologists like me) for one area to have experienced all three key ways that natural resources – from oil and gas to REEs and gems – are generated. These processes relate to episodes of mountain building, rifting (crustal relaxation and extension), and volcanic activity.

Greenland was shaped by many prolonged periods of mountain building. These compressive forces broke up its crust, allowing gold, gems such as rubies, and graphite to be deposited in the faults and fractures. Graphite is crucial for the production of lithium batteries but remains “underexplored”, according to the Geological Survey of Denmark and Greenland, relative to major producers such as China and South Korea.

But the greatest proportion of Greenland’s natural resources originates from its periods of rifting – including, most recently, the formation of the Atlantic Ocean from the beginning of the Jurassic Period just over 200 million years ago.

Greenland’s major geologic provinces with rock types and ages.
Geophysical Research Letters, CC BY-NC-SA

Greenland’s onshore sedimentary basins, such as the Jameson Land Basin, appear to hold the greatest potential for oil and gas reserves, analogous to Norway’s hydrocarbon-rich continental shelf. However, prohibitively high costs have limited commercial exploration. There is also a growing body of research suggesting potentially extensive petroleum systems ringing the entirety of offshore Greenland.

Metals such as lead, copper, iron and zinc are also present in the onshore (mostly ice-free) sedimentary basins, and have been worked locally, on a small scale, since 1780.

Difficult-to-source rare earth elements

While not as intimately related to volcanic activity as nearby Iceland – which, uniquely, sits at the intersection of a mid-ocean ridge and a mantle plume – many of Greenland’s critical raw materials owe their existence to its volcanic history.

Critical metals such as niobium and tantalum, and REEs such as ytterbium, have been discovered in igneous rock layers – similar to the discovery (and subsequent mining) of silver and zinc reserves in south-west England, which were deposited by warm hydrothermal waters circulating at the tips of large volcanic intrusions.

Crucially, Greenland is also predicted to hold sufficient sub-ice reserves of the REEs dysprosium and neodymium to satisfy more than a quarter of predicted future global demand – a combined total of nearly 40 million tonnes.

These elements are increasingly seen as the most economically important yet difficult to source REEs because of their indispensable role in wind power, electric motors for clean road transport, and magnets in high-temperature settings like nuclear reactors.

The development of known deposits such as Kvanefjeld in southern Greenland – not to mention those not yet discovered in the island’s central rocky core – could easily affect the global REE market, owing to these elements’ relative global scarcity.

An unfortunate dilemma

The global energy transition came about due to increasing public recognition of the manifold threats of burning fossil fuels. But climate change has major implications for the availability of many of Greenland’s natural resources that are currently blanketed by kilometres of ice – and which are a key part of that energy transition.

An area the size of Albania has melted since 1995, and this trend is likely to accelerate unless global carbon emissions fall sharply in the near future.

Recent advances in survey techniques, such as the use of ground-penetrating radar, allow us to peer with increasing certainty beneath the ice. We are now able to obtain an accurate picture of bedrock topography below up to 2 km of ice cover, providing clues as to the potential mineral resources in Greenland’s subsurface.

However, progress is slow in prospecting under the ice – and sustainable extraction is likely to prove even harder.

Soon, an unfortunate dilemma may need to be addressed. Should Greenland’s increasingly accessible resource wealth be extracted with gusto, in order to sustain and enhance the energy transition? Doing so would add to the effects of climate change on Greenland and beyond, despoiling much of its pristine landscape and contributing to rising sea levels that could swamp its coastal settlements.

Currently, all mining and resource extraction activities are heavily regulated by the government of Greenland through comprehensive legal frameworks dating from the 1970s. However, pressures to loosen these controls, and to grant new licences for exploration and exploitation, may increase amid the US’s strong interest in Greenland’s future.

The Conversation

Jonathan Paul does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Greenland is rich in natural resources – a geologist explains why – https://theconversation.com/greenland-is-rich-in-natural-resources-a-geologist-explains-why-273022

Arrow tips found in South Africa are the oldest evidence of poison use in hunting

Source: The Conversation – Africa (2) – By Marlize Lombard, Professor with Research Focus in Stone Age Archaeology, Palaeo-Research Institute, University of Johannesburg

Boophone disticha. Ton Rulkens from Mozambique, CC BY-SA 2.0, via Wikimedia Commons, CC BY

The oldest evidence for the use of arrow poison globally was long thought to come from Egypt, dating to 4,000 years ago. It was a black, toxic residue on bone arrowheads from a tomb at the Naga ed Der archaeological site.

New evidence from southern Africa is challenging this.

New research has found poison on stone arrow tips from South Africa dating to 60,000 years ago. It is the oldest direct evidence for hunting with poisoned arrows.

This adds to what is already known about the know-how of ancient African bowhunters. These abilities may have contributed to our species’ long and flourishing evolution in the region, and ultimately the successful spread of Homo sapiens out of Africa.

Hunter-gatherers in southern Africa

The evidence comes from Umhlatuzana Rock Shelter, in South Africa’s KwaZulu-Natal province. The site was partly excavated in the 1980s to preserve archaeological material that could be damaged during the construction of the N3 highway between the cities of Durban and Pietermaritzburg.

Umhlatuzana is recognised as an important Stone Age site where hunter-gatherers lived at least 70,000 years ago. It is one of only a few sites in southern Africa where people continued to live until just a few thousand years ago.

In southern Africa, people have a long history of hunting with poisoned arrows. For example, a team of South African and Swedish archaeologists found residues on arrow tips dating to between a few hundred and 1,000 years ago that revealed how different arrow poison recipes were used.

Recently, three bone arrowheads stored in a poison-filled bone container were reported from Kruger Cave in South Africa dating to almost 7,000 years ago. This pushed back direct molecular evidence of arrow poison use to about 3,000 years before the Egyptian poisoned arrows.

Traces of poison have previously been found on a stick and in a lump of beeswax dating to between 35,000 and 25,000 years ago at Border Cave in KwaZulu-Natal. These were seen as indirect suggestions of early hunting poisons.

As a researcher in cognitive and Stone Age archaeology, I studied some of the artefacts from Umhlatuzana almost 20 years ago, finding use traces and adhesive residues on some of the quartz backed microliths (small, shaped stone tools) from 60,000 years ago. This showed that they were probably used as arrow tips.

Now, Sven Isaksson in the archaeology laboratory at Stockholm University has been able to identify molecular traces of toxic plant alkaloids (chemical substances), known to be an arrow poison, on a handful of these artefacts.

Poison from indigenous plants

This latest research revealed the presence of the toxic alkaloids buphandrine and epibuphanisine on five of the ten analysed arrow tips from Umhlatuzana. The same alkaloids were also found on bone arrowheads collected by Swedish travellers in the region 250 years ago. This tells us that the same arrow poison was used for many millennia in southern Africa.

Both alkaloids can be found in several southern African species of Amaryllidaceae, a family of flowering plants growing from bulbs. But only what is colloquially known as gifbol (poison bulb, Boophone disticha) is well-recorded as the source of an arrow poison. The plant’s bulb contains a toxic juice (exudate).

Finding these specific alkaloids on five out of the ten quartz arrow tips studied cannot be coincidental. Ancient hunter-gatherers would have been familiar with the toxic properties of the gifbol exudates. For example, by about 77,000 years ago, people of the same region also understood the insecticidal and larvicidal properties of some aromatic leaves that were used for bedding. So they probably would not have kept the gifbol substance in their living space.

Substances with buphandrine and epibuphanisine molecules are not used commercially or in archaeological conservation, ruling out accidental modern contamination of the arrow tips.

Gifbol bulbs can survive for a century or more, despite drought cycles and fire regimes. The plant is indigenous to South Africa, thriving in grassland, savanna and Karoo vegetation. It is widespread throughout the southern, eastern and northern regions of South Africa, growing within a day’s walk from Umhlatuzana Rock Shelter today. For various reasons, it’s likely that it was also available to the inhabitants of the site thousands of years ago.

The toxic chemicals in the bulb last a long time. They don’t decompose easily, even in wet environments, and they interact well with mineral surfaces like stone arrow tips. That’s probably why they survived for 60,000 years at Umhlatuzana.

Implications of the world’s oldest known poisoned arrow tips

The quartz arrow tips with gifbol poison now represent the first direct evidence for hunting with poisoned arrows in southern Africa, and globally – at 60,000 years ago.

It demonstrates that these ancient bowhunters possessed a knowledge system enabling them to identify, extract and apply toxic plant exudates effectively. They must have also understood prey ecology and behaviour to know that the delayed effect of poison shot into an animal would weaken it after some time. That would make it easier to run down, a technique known as persistence hunting.

Such out-of-sight, long-distance action is a convincing indicator of complex cognition that requires response inhibition (being able to delay an action for a reason). Because poison is not a physical force, but functions chemically, the hunters must also have relied on advanced planning, abstraction and causal reasoning.

Thus, apart from providing the first direct evidence of hunting with poisoned arrows, the findings contribute to the understanding of human adaptation, techno-behavioural complexity and modern human behaviour in southern Africa.

The Conversation

Marlize Lombard works for the University of Johannesburg.

ref. Arrow tips found in South Africa are the oldest evidence of poison use in hunting – https://theconversation.com/arrow-tips-found-in-south-africa-are-the-oldest-evidence-of-poison-use-in-hunting-271444

Does running wear out the bodies of professionals and amateurs alike?

Source: The Conversation – France – By Sylvain Durand, Professeur de physiologie humaine au département STAPS, chercheur au laboratoire Motricité, Interactions, Performance, Le Mans Université

Running counts among today’s most popular sports. Sometimes the race is on even before the competition itself has started, as tickets for events sell out within hours. In France, this has got people talking about a “race for the runner’s bib”.

So, while running enjoys the reputation of a wholesome sport, the reality is that some of us feel stress at the simple prospect of donning a bib, while an even greater number of us face exhaustion upon completing a race such as a marathon or trail event. So what exactly is the toll of the sport on our bodies, and does our status as an amateur or a pro make a difference?

Working it out like a pro

It’d be easy to place professional and amateur runners in two separate boxes. Indeed, pros train hard – up to three times a day ahead of certain races. Life at those times is austere, punctuated by meals, runs and sleep, leaving little room for improvisation. And while you might think that the countless events around the world would dilute some of the demand, competition in such a universal sport is in fact fierce, and professional runners need to push their bodies to the limit to get better at it.

High-level careers are often brief, lasting five or six years. Stories such as that of Eliud Kipchoge – the first man to run a marathon in under two hours (under conditions not eligible for official record ratification), sixteen years after becoming world champion in the 5,000 metres on the track – remain exceptions.

The significant mechanical stress inherent in the sport weighs on the muscles, tendons and skeleton. There are times when rest periods are short and it’s increasingly common to see athletes injuring themselves during competitions on live television, a surefire sign of physical and mental exhaustion. Some might consider these factors to be fairly typical: after all, these are top-level athletes.

Similarities between professionals and amateurs

But are world champs, next-door champs and ordinary runners really that different? Considering only the tip of the iceberg, the answer seems obvious: they don’t run at the same speed and therefore don’t spend the same time exerting themselves. But what about the submerged part: the pre-race prep, the training, the individual’s investment and self-sacrifice? When you want to break a record – your own record – don’t you give 100% of yourself, both physically and mentally?

Let’s consider the figures for the Paris Marathon in 2025: 56,950 registered for the race and 55,499 finished. The mass event spells the same challenge for all: 42.195 km (around 26.2 miles), whether for the fifty or so athletes who might be considered elite or for all the others who have to juggle running with their professional and family lives.

In truth, regardless of your level or speed, there are many similarities in how you prepare for a marathon, with identical training loads. Marathon training typically lasts ten to twelve weeks and includes essential elements such as “long runs”, a training session of around thirty kilometres recommended once a week.

No one escapes it. And there’s a whole range of science-based books on running, designed to guide the general public.

However, as training takes its toll on both body and morale, the risk of injury rises.

Trail or marathon prepping: increased risk of injury for amateurs

In fact, we most often see stress-related injuries among amateurs.

A high-level athlete doesn’t need to see a sports doctor. Why is that? Because they have built their careers over many years and have specific genetic characteristics that allow them to take on heavy training loads. They follow a specific programme that includes dietary measures, recovery phases and processes.

Professional athletes benefit from much better general and medical support than novice or amateur runners who, whether for individual or collective challenges, embark on projects such as marathons or trail running. This is how a runner like Christelle Daunay, after fifteen years of practice and modest beginnings at the national level, patiently built herself up to win the European Marathon Championships in Zurich in 2014.

French athlete Christelle Daunay wins the marathon at the European Athletics Championships in Zurich in 2014.
Erik van Leeuwen, CC BY

When physical stress takes its toll on professionals

The issue of physical stress has been raised for a long time. In the 1990s, it was already reported that simply running for 45 minutes rather than 30 minutes a day could double the frequency of injuries. Going from three to five weekly sessions had similar effects.

Christelle Daunay was no exception. She suffered a stress fracture in 2018, which prevented her from defending the European marathon title she had won in 2014. It should be noted that a “stress fracture” is a bone injury, similar to a crack, which can be caused by running long distances.

When ultra-trail puts body and mind to the test

The recent development of trail running (running in the great outdoors) only reinforces these concerns, with not just the wilderness but also the “ultra” aspect appealing to many.

The extreme sport has its own particularities. Due to the irregular terrain, its practice requires different joint and muscle movements and therefore greater concentration than road running. Add to that the effort’s duration, ranging from a few hours to a full day or more, the issues of nutrition, effort management, and muscle damage that sets in over time, and it’s easy to understand why these events lead to mental and physical fatigue, not only during the event itself, but also in the long term.

The conditions of running-related physical wear depend on many factors and vary from person to person – for example, on whether you jog to hit a speed goal or a mileage goal.

Wearing out the body at a given moment to increase its resistance… to wear and tear

Whatever the focus, people often engage in specific training programmes, with physical and physiological progress relying on the human body’s remarkable adaptive capacity.

Note the paradox here: one of the principles of training is to stimulate the body, to “wear it out” at a given moment in time in order to trigger the physiological processes that will lead to improved capabilities, the fight against fatigue… and, ultimately, increased resistance to physical stress.

This fundamental process is the basis of physical rehabilitation/recovery programmes, which are increasingly used in physiopathological contexts, for example to treat peripheral artery disease or obesity.

However, at its most intense, training can require mental commitment, resistance to weariness, and a strong will to continue the effort over time despite fatigue.

Stress can therefore also be mental. This is perhaps the major difference between amateurs and professionals, who have no choice but to put their bodies under severe strain in order to progress in the high-level hierarchy.

Pros or amateurs, the importance of good coaching

Seeking to push their physical and mental limits can lead any runner to feel “worn out”. All these factors highlight the importance of being well supervised and advised (by coaches, in clubs, etc.) in order to train with a certain progression, both in terms of quantity and intensity, and to adapt one’s lifestyle.

No technical equipment is needed to run – an advantage which allows you to ideally experience your own body, provided you’re aware of races’ risks and limitations. And rest assured, if you still don’t enjoy this sport, there are plenty of other options available so you can find something that suits you and enjoy the health benefits of physical activity. What’s important is to keep moving.

Extract from What I Talk About When I Talk About Running by Haruki Murakami:

“Human beings naturally continue doing things they like, and they don’t continue what they don’t like. That’s why I’ve never recommended running to others. It doesn’t suit everybody. Similarly, not everyone can become a novelist.”


Benoît Holzerny, a health-promoting sports coach, and Cédric Thomas, a top athlete trainer (including the 2014 European marathon champion, Christelle Daunay), contributed to writing this article.

The Conversation

Sylvain Durand does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.

ref. Does running wear out the bodies of professionals and amateurs alike? – https://theconversation.com/does-running-wear-out-the-bodies-of-professionals-and-amateurs-alike-270507

As the US eyes Greenland, Europe must turn a global problem into an opportunity

Source: The Conversation – UK – By Francesco Grillo, Academic Fellow, Department of Social and Political Sciences, Bocconi University

Shutterstock/Bendix M

The so-called world order and the international rule of law are both officially dead in the wake of operation “absolute resolve”, the US infiltration of Venezuela to capture its president Nicolás Maduro.

It is true that both have been sick for some time – and Venezuela is a demonstration of this. Maduro was condemned by foreign leaders for illegally seizing power as long ago as 2013 – years before Donald Trump even became president. No concrete action was ever taken.

Operation “absolute resolve”, however, is a red line crossed.

Even when the US invaded Panama in 1989, there was some attempt to preserve a world order that no longer seems to matter. That invasion (more humbly named “just cause”) was preceded by a declaration of war from Panama. The US Congress was, at least, informed, and some countries even attempted mediation.

More importantly, the reaction when the US went ahead was much stronger. Even before the capture of Panamanian president Manuel Noriega, the General Assembly of the United Nations and the Organization of American States (which includes the US) condemned the invasion as illegal. The European Parliament did the same immediately after.

In the case of Venezuela, the silence is deafening. And seeing that no one has challenged it, the US government has immediately started talking about taking Greenland, hinting that it wouldn’t even need to use force. The world order is dead because nobody is willing to defend it.

However, it is equally evident that the alternative to the defunct world order cannot be no order at all. It’s not feasible that the world should operate according to the law of the jungle. It is too complex and big to be governed by just one empire.

This much was acknowledged even in the controversial US security strategy published at the end of 2025, which says that US elites “badly miscalculated America’s willingness to shoulder forever global burdens” and that “they overestimated America’s ability to fund … a massive military, diplomatic, intelligence, and foreign aid complex”.

“We live in a world in which you can talk all you want about international niceties and everything else,” Stephen Miller, deputy chief of staff to Donald Trump, now says of the US’s changed vision. “But we live in a world, in the real world…that is governed by strength, that is governed by force, that is governed by power.”

But a world governed on these terms is obviously a world heading towards mutual destruction. In such a system, all countries would, legitimately, scramble to defend themselves militarily. For those countries not already equipped, the pursuit of nuclear weapons would be the only obvious route to invulnerability.

Turning crisis into opportunity

So, what should Europe do in the face of this problem? And here I really do mean Europe, rather than just the EU.

This situation requires cooperation with the UK, Norway, and probably Canada and Switzerland. If necessary, it may require moving ahead without Hungary or whichever other EU countries are still doubtful about the need for an urgent, defence-based European integration.

In theory, a world without a world order is a much greater problem for Europe than for any other economy in the world. According to the World Bank, trade with other countries represents more than 60% of GDP for Europe’s five biggest economies, but less than 40% of GDP for China, the US and Russia.

However, Europe is also probably the part of the world best equipped to broker a new framework. It has fewer enemies than the other contenders and more friends (16 of the 20 countries whose passports grant the widest visa-free entry to other states are European).

It has the strongest tradition of being a global meeting place (the top five host cities for international organisations are all in Europe).

So yes, Europe can, in theory, transform its biggest problem into its biggest opportunity. Indeed, I would even say that the only way to survive the chaos is by being ambitious. Europe must present itself as the only credible broker of a difficult and yet indispensable new world order.

Miller is right that this will take force, strength and power – but it is the force of standing up, without double or triple standards, for those rights that once inspired a “universal declaration” championed by the United States.

It is about having the strength of ideas to draft new institutions that reinforce those values. But it is also about having the power, even one based on military deterrence, to defend freedom if somebody wants to impose a different vision of what civilisations are about.

Will Europe find the courage to be strong? It probably needs a trigger to wake up. Greenland could be that trigger.

If Europeans manage to negotiate nothing better than they have so far – another humiliating compromise that would only serve US interests and reinforce Miller’s worldview – then an incident over Greenland may mean the end of an alliance that is already increasingly unstable. But it would also be an opportunity to draft a new vision for governing the world.

The Conversation

Francesco Grillo is affiliated with the think tank Vision which is the convenor of the Siena conference on the Europe of the Future.

ref. As the US eyes Greenland, Europe must turn a global problem into an opportunity – https://theconversation.com/as-the-us-eyes-greenland-europe-must-turn-a-global-problem-into-an-opportunity-272872

Your dog’s dinner could be worse for the planet than your own – new research

Source: The Conversation – UK – By John Harvey, PhD Researcher, Global Agriculture and Food Systems, University of Edinburgh; University of Exeter

Pixel-Shot/Shutterstock

Cutting down the amount of meat we eat helps reduce greenhouse gas emissions associated with agriculture. But what about the meat that our pet dogs eat?

Our new study shows that feeding dogs can have a larger negative effect on the environment than the food their owners eat. For a collie or English springer spaniel-sized dog (weighing 20kg), 40% of tested dog foods have a higher climate impact than a human vegan diet, and 10% exceed emissions from a high-meat human diet.

Dog food comprises a significant part of the global food system. We have calculated that producing ingredients for dog food contributes around 0.9-1.3% of the UK’s total greenhouse gas emissions. Globally, producing enough food for all dogs could create emissions equivalent to 59-99% of those from burning jet fuel in commercial aviation.

The type of animal product used to produce pet food really matters. The environmental footprint of dog food differs depending on whether it uses prime cuts or offal and trimmings.

Cuts like chicken breast or beef mince are used in some dog foods but are also commonly eaten by people. Selling these “prime cuts” provides around 93-98% of the money from selling an animal carcass.

By-products like offal and trimmings – which are less sought after for human consumption, much cheaper, but highly nutritious – are widely used in pet food. We assign more of an animal’s environmental footprint to high-value cuts and less to these by-products.

(Chart: greenhouse gas emissions for different types of dog foods.)

Some previous studies have given by-products the same environmental impact by weight as the highest‑value cuts, directly using figures calculated for human food. This “double counts” livestock impacts and substantially overestimates the footprint of pet food.

A practical problem for pet owners and researchers like us is that it’s difficult to find out which parts of the carcass are in a product. Our study used mathematical models to estimate the composition based on the ingredients list and nutritional composition of each food.




Read more: Is your pooch better or worse off on a cereal-free diet?


Labelling guidelines allow broad terms such as “meat and animal derivatives”. These give manufacturers flexibility to change recipes but make it hard to distinguish between foods based mainly on low-value cuts and those rich in prime meat. Ingredients listed as chicken may be fresh, dehydrated (made from low-value offcuts) or a mixture, and recipes are commercially sensitive.

For this reason, we adjusted our assumptions about nutrient content, environmental consequences of specific ingredients and the comparative values of meat products when estimating feed compositions. After repeating this process 1,000 times, one pattern was consistent: higher shares of prime meat drove up negative environmental effects.

Higher shares of prime meat in dog food drive negative environmental impacts.
Inna Vlasova/Shutterstock

Improved labelling – for example, indicating the proportion of prime meat v by‑products – would enable owners to make informed choices and allow better scrutiny of “sustainable” claims.

The format of pet food also matters. Some owners see raw and grain-free diets as more natural, although for many dogs these diets may offer no benefits and could introduce health risks, including nutritional imbalances and bacterial risks for both dogs and their owners. Studies show that carefully formulated plant-based diets can meet dogs’ nutritional needs with similar health outcomes to meat-containing diets, and there is increasing acceptance of this feeding approach among veterinary professionals.

On average, wet foods (for example, tinned or those packed in foil trays) and raw foods had more of a negative environmental effect than dry kibble. Grain-free options also have a greater environmental footprint than foods not marketed in this way. While the few plant-based diets we studied tend to be slightly less environmentally damaging than average meat-based ones, particularly among wet foods, this advantage is small compared to the difference between wet or raw and dry foods.




Read more: Vegan diet has just 30% of the environmental impact of a high-meat diet, major study finds


There are exceptions. For example, the lowest-impact wet foods we studied had lower emissions than the typical dry food. And the foods with the lowest environmental impact of all those we tested included meat by-products.

Other protein sources are being marketed as sustainable alternatives for feeding dogs, with insects the most prominent example. We haven’t studied these in detail yet but plan to in future, taking into account the ongoing scientific debate about how large the real-world environmental benefits of insect production really are.

Wet foods – and probably raw foods requiring refrigeration or freezing – tend to have greater greenhouse gas emissions from packaging and transportation. This further increases the chance that choosing these food types is less environmentally friendly.

Vegan v wolf diets

Pet food choices can provoke strong emotions. One of us, a veterinary surgeon working on environmental sustainability, regularly sees owners torn between ideals of dogs as meat‑eating “wolves” and their wish to reduce environmental harm.

Our study shows that it’s not simply a matter of choosing between vegan diets and raw meat. Simple rules like “dry always has a lower environmental footprint than wet” do not hold for every product. The ingredient mix within each product is key.

So, for owners looking to reduce the environmental footprint of their pet food, it’s important to know that grain-free, wet or raw foods can have a greater environmental impact than standard dry kibble. Whichever food type is chosen, it is also preferable to select foods that use genuine animal by‑products or plant proteins rather than ones competing directly with the meat humans typically eat.

Dog foods showed a more than 65-fold variation in the effect they have on the planet, compared with a 2.5-fold difference between vegan and high-meat human diets. The potential to reduce – or increase – environmental damage by changing dog diets is enormous. By choosing meat products wisely for pet food and making labelling clearer, we can cut this hidden part of our food footprint and still have healthy, well-fed dogs.




The Conversation

John Harvey receives funding from the Biotechnology and Biological Sciences Research Council (BBSRC), grant number BB/T00875X/1.

Vera Eory, SRUC, is credited as a co-author of the study and collaborated with us in writing this article.

Peter Alexander and Sarah Crowley do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Your dog’s dinner could be worse for the planet than your own – new research – https://theconversation.com/your-dogs-dinner-could-be-worse-for-the-planet-than-your-own-new-research-271865

Stopping weight-loss jabs leads to much faster rebound than thought – so are they still worth it?

Source: The Conversation – UK – By Sam West, Postdoctoral Researcher, Primary Care Health Sciences, University of Oxford

martenaba/Shutterstock.com

Weight-loss injections, like Wegovy and Mounjaro, have been hailed as gamechangers. In clinical trials, people lost an average of 15%-20% of their body weight – results that seemed almost miraculous compared to traditional diet and exercise programmes.

Today, one in 50 people in the UK are using these treatments. Most of them – around 90% – are paying privately, at a cost of £120-£250 per month. But there’s a catch: more than half of people stop taking the drugs within a year, with cost being the main reason.

Our latest research reveals what happens next, and it’s sobering. On average, in clinical trials, people regain all the weight they lost within just 18 months of stopping the medication.

That’s surprisingly quick – almost four times faster than the weight regain seen after stopping weight-loss programmes based on diet and physical activity. The health improvements vanish too, with blood pressure, cholesterol and blood sugar levels returning to where they started.

Health benefits vanish too.
ThamKC/Shutterstock.com

This matters because it means these drugs may need to be taken long-term – potentially for life – to maintain the benefits. Some private providers offer intensive support alongside the medication, and our review showed this helped people lose on average an extra 4.6kg. But there was no evidence that support during or after stopping the drugs helped to slow weight regain.

The rapid rebound raises serious questions about fairness and whether these treatments represent good value for the NHS. Obesity is far more common among people living in deprived areas, who are also least able to afford private treatment. NHS access is crucial to ensuring everyone gets equal care, regardless of their income.

The NHS is gradually rolling out these medications, but only to people with severe obesity (BMI over 40) and four obesity-related conditions, such as high blood pressure. That means many people who could benefit are effectively excluded unless they can pay privately.

Costs may eventually fall as existing drug patents expire and cheaper oral versions are developed, but that could take years. In the meantime, we need to make sure NHS access to these medications delivers the best possible value so more people can benefit.

Cost v benefits

The National Institute for Health and Care Excellence approved these drugs for NHS use because it judged them cost-effective by its usual standards. But those calculations assumed treatment would last two years, with weight regained over the three years after stopping. Our data shows that if treatment ends, weight comes back surprisingly quickly.

We also found that the improvements in things like blood pressure and cholesterol – the main reasons the NHS treats obesity – disappeared within the same timescale. This means the treatments may need to be continued long term to achieve lasting weight loss and health benefits, which completely changes the cost calculations.

More research is needed to estimate how cost-effective these medications really are, outside carefully controlled clinical trials, and for the actual patients being treated.

For people with obesity who don’t yet qualify for the medication based on the strict NHS criteria, the medication may not be cost-effective for widespread NHS use until the price drops substantially.

For this population, traditional weight management programmes remain the foundation of obesity treatment. Total diet replacement programmes, during which people eat nutritionally balanced soups and shakes instead of regular food for eight to 12 weeks, can achieve similar weight loss to the medications at a fraction of the cost.

Group-based weight-loss programmes, such as WW and Slimming World, achieve smaller average weight losses but can be cost-effective and even save the NHS money.

The new weight-loss medications have shown just how desperately people want help to lose weight. But the question of value for money remains unclear. Making cheaper weight-loss programmes available to anyone with obesity who wants support would allow fairer access to treatment and improve public health, though individual results are likely to be less dramatic than what could be achieved with long-term medication.

The Conversation

Sam West receives funding from the National Institute of Health Research and is a co-investigator on three weight loss trials funded by the Novo Nordisk Foundation.

Dimitrios Koutoukidis receives funding from the National Institute of Health Research and is principal investigator in publicly-funded investigator-led research studies where Oviva and Nestle Health Sciences have contributed to the costs or delivery of weight-loss interventions. He supervised an iCASE PhD studentship where Second Nature was an industry partner.

Susan Jebb receives research grant funding from National Institute of Health Research and is principal investigator in a research programme funded by the Novo Nordisk Foundation

Oviva, Second Nature, Nestle Health Sciences have contributed to the costs or delivery of weight-loss interventions as part of some of research studies funded by the National Institute of Health Research.

ref. Stopping weight-loss jabs leads to much faster rebound than thought – so are they still worth it? – https://theconversation.com/stopping-weight-loss-jabs-leads-to-much-faster-rebound-than-thought-so-are-they-still-worth-it-272314

Cuba’s leaders just lost an ally in Maduro − if starved of Venezuelan oil, they may also lose what remains of their public support

Source: The Conversation – Global Perspectives – By Joseph J. Gonzalez, Associate Professor of Global Studies, Appalachian State University

‘After you, President Maduro?’ A worrying phrase for Cuba’s President Miguel Diaz-Canel. Juan Barreto/AFP via Getty Images

Footage of a handcuffed Nicolás Maduro being escorted to a Brooklyn detention center will come as uncomfortable viewing for political leaders in Havana.

“Cuba is going to be something we’ll end up talking about,” said President Donald Trump just hours after the Jan. 3, 2026, operation to seize the Venezuelan president. Secretary of State Marco Rubio echoed Trump’s warning: “If I lived in Havana and I was in the government, I’d be concerned.”

As a historian of the United States and Cuba, I believe that Washington’s relations with Havana have entered a new phase under the Trump administration. Gone are Barack Obama’s “Cuban Thaw” and Joe Biden’s less restrictive sanctions. In their place, the Trump administration has apparently adopted a policy of regime change through maximum pressure.

If the administration has its way, 2026 will be the final year of communist rule in Cuba – and it intends to achieve this without intervention by U.S. armed forces.

“I don’t think we need (to take) any action,” Trump said on Jan. 4, adding: “Cuba looks like it’s ready to fall.”

Cuba’s friend with benefits

Trump may have a point. Maduro’s capture has effectively taken away Cuba’s closest ally.

Maduro’s predecessor and mentor, Hugo Chávez, was an avowed admirer of Cuban revolutionary leader Fidel Castro.

Shortly after assuming power in 1999, Chávez’s government began supplying oil on favorable terms to Cuba in exchange for doctors and, eventually, the training of Venezuela’s security forces. It was no coincidence that 32 of the security officers killed as they defended Maduro from approaching American forces were Cuban.

Maduro succeeded Chávez as president in 2013 and continued the country’s support for Cuba. In 2022, a member of the Venezuelan opposition claimed that Caracas contributed US$60 billion to the Cuban economy between 2002 and 2022.

Cubans gather in support of Venezuelan leader Nicolas Maduro in Havana on Jan. 3, 2026.
Adalberto Roque/AFP via Getty Images

Maduro’s largesse proved unsustainable. Beginning in the early 2010s, Venezuela entered a severe economic crisis provoked by economic mismanagement, an overreliance on petroleum and U.S. sanctions.

Venezuela’s support for Cuba had slowed to a trickle by 2016. Maduro’s government nevertheless continued to supply Cuba with oil in secret, evading U.S. sanctions, though in amounts far below Cuba’s needs.

Hard times in Cuba

Venezuela’s penury and U.S. pressure mean Cubans are now experiencing deprivation on a level not seen since the country’s “special period” of economic crisis from 1991 to 1995, brought about by the collapse of the Soviet Union and the end of the bloc’s generous subsidies.

Since 2020, Cuba’s GDP has shrunk by 11%, while the value of the Cuban peso continues to fall.

Cubans no longer have reliable electricity or access to water. Mosquito-borne illnesses, once rare, are now rampant because the government cannot afford to spray pesticides.

The medical system provides only the most rudimentary care, and hospitals have little to no medicine.

Meanwhile, industrial and agricultural production have sharply declined, as have food imports.

And while famine has not yet emerged, food insecurity has increased, with most Cubans eating a limited diet and skipping meals. Street crime has also become common on Cuba’s once-safe streets.

Cubans stand in line to buy food during a power outage in Havana on Dec. 3, 2025.
Yamil Lage/AFP via Getty Images

Since seizing Maduro, the U.S. administration has outlined policies that appear aimed at increasing economic pressure on Cuba’s economy and provoking regime change. For example, the U.S. has made it clear it will no longer permit Venezuela to supply oil to Cuba.

Apparently, the administration hopes that without oil, the Cuban government will simply collapse. Or perhaps Trump expects that Cubans, as frustrated as they are, will overthrow their communist masters without help from the U.S.

A regime without popular support

Either way, there is a potential flaw with the administration’s reasoning: Cuba’s communists have survived crises such as these for more than 60 years. Yet, there is evidence that as Cuba’s economy declines, so too does support for the regime.

Since 2020, more than 1 million Cubans have left the country, principally for the U.S. and Spanish-speaking countries. A Cuban colleague of mine with access to government research recently told me the number is closer to 2 million.

Those who stayed are no more satisfied.

In a 2024 public opinion poll, an overwhelming majority of Cubans expressed profound dissatisfaction with the Cuban Communist Party and the leadership of President Miguel Díaz-Canel.

Cubans have also taken their complaints to the streets. In July 2021, protests erupted across Cuba, demanding more freedom and a better standard of living. The government quickly jailed protesters and sentenced them to long prison terms.

Sporadic protests have continued nevertheless, often quickly and without warning, drawing harsh repression. In particular, the San Isidro movement, formed in 2018 to protest restrictions on artistic expression, has strong support among younger Cubans.

Changing attitudes toward America

As Cubans have turned against their government, they have become more receptive to the U.S.

During my first visit to Cuba in 1996, Cubans blamed the U.S. embargo, in place since the early 1960s, for the privations they suffered during the Special Period.

In the past decade, however, I have heard Cubans – at least those under 50 – express more anger with their government than with the U.S. embargo.

A tricycle used as a taxi is decorated with the U.S. flag in Havana.
Yamil Lage/AFP via Getty Images

Make no mistake: Cubans want the U.S. embargo to end. But they no longer accept their government’s attempts to blame Washington for all of Cuba’s economic and political problems.

Part of this change is due to the extraordinary emigration of Cubans: Every Cuban I know has a family member or a friend in the U.S. The internet has also helped; Cubans can now read foreign news sources on their smartphones.

Welcome liberators?

Since Maduro’s capture, I have messaged friends in Cuba to gauge sentiment. All but one of the six Cuban friends I managed to reach told me they were receptive to U.S. intervention in Cuba, provided that it removed the regime making their lives miserable.

One friend said: “If the Yankees showed up today, most of us would probably greet them as liberators.”

Admittedly, my sample size is small. But such reactions, coming from comparatively elite Cubans working in both the private and public sectors, cannot be good news for what remains of the Castro regime.

The Conversation

Joseph J. Gonzalez does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Cuba’s leaders just lost an ally in Maduro − if starved of Venezuelan oil, they may also lose what remains of their public support – https://theconversation.com/cubas-leaders-just-lost-an-ally-in-maduro-if-starved-of-venezuelan-oil-they-may-also-lose-what-remains-of-their-public-support-272681

Canada has too few professional archeologists, and that has economic consequences

Source: The Conversation – Canada – By Lindsay Amundsen-Meyer, Assistant Professor in Archaeology, University of Calgary

Canadian cultural resource management archeologists — professional consultants involved in environmental assessment and compliance processes — are increasingly finding themselves in the public eye when their work intersects with development or disaster-response-related infrastructure projects.

Public or media discussions often arise when delays in construction result from archeological assessments or Indigenous opposition. Yet, many more developments proceed without issue.

Today, these concerns are part of a variety of challenges including labour shortages, meaningful Indigenous engagement and recent legislative changes that guide how development occurs.

These challenges must be addressed to ensure timely assessment and approval of development projects through legally binding processes, without compromising the assessment and preservation of archeological sites — the overwhelming majority of which are Indigenous ancestral sites.

Demand for archeological professionals in Canada is quickly outpacing the number of students graduating with archeology or anthropology degrees. A similar deficit of archeologists has been demonstrated in the United States.

Post-secondary institutions can play a key role in addressing this deficit by altering and improving degree programs to ensure students are equipped with the knowledge needed to succeed in cultural resource management.

The politics of archeology

Cultural resource management (CRM) involves identifying, preserving and maintaining valuable cultural heritage like ancestral artifacts and built heritage. In Canada, this kind of archeological work is required ahead of most infrastructure development through provincial and federal legislation.

Recent political developments in Canada, including federal bill C-5 and similar legislation in Ontario and British Columbia, have the potential to impact the scope of environmental assessment work, including associated archeology work.

In order to speed economic development, these laws allow governments to exempt some infrastructure projects from archeological assessment prior to construction and bypass requirements for Indigenous consultation. This moves decision-making on archeological preservation away from Indigenous communities and trained professionals and into the political sphere.

Such exemptions risk violating the treaty rights of First Nations and causing irreparable harm to Indigenous ancestral sites without consideration or assessment, deepening conflicts between development proponents and Indigenous communities. These conflicts may themselves delay construction of infrastructure.

Where are all the archeologists?

Our recent study indicates there are between 419 and 713 archeologists employed in cultural resource management in Canada. These are almost certainly underestimates. However, our study further suggests that labour market demand is outpacing supply.

Fifty-five responding employers across the country reported unfilled positions, including for jobs at all levels of experience. Overall, the CRM labour market has not kept pace with rapid industry growth.

Post-secondary institutions have an important role to play in meeting CRM labour market demand by creating robust degree programs that demonstrate there are viable career pathways in archeology outside academia. But universities are simultaneously experiencing a significant decline in funding, and program opportunities are disappearing.

In part due to these challenges, students graduating from archeology and anthropology programs often lack the skills and knowledge needed to succeed in CRM. As a result, employers bear a heavy burden of on-the-job training.

There are some exceptions, such as CRM-specific undergraduate and graduate courses and programs at the Universities of Lethbridge and Calgary. However, the general lack of CRM-oriented programs at post-secondary institutions is particularly problematic given that the majority of graduates who stay in archeology will enter the CRM industry, and the overwhelming majority of archeology in Canada today is undertaken within a CRM context.

A path forward

Post-secondary curricula must extend beyond traditional academic programming to better prepare students for the workforce. To be clear, we are not arguing for creation of a CRM trade school for archeologists. Rather, we believe that small changes to curricula and programs can enhance student experience and career successes without compromising academic objectives and rigour.

Post-secondary institutions need to create degree programs that are aligned with the skills and knowledge used in industry and introduce CRM to students early in their undergraduate programs. Doing so will create more robust degree programs that attract students to a relevant education where they see a viable career path in archeology, meeting a market need.

This market need must be met to ensure timely assessment and regulatory approval of development projects, as the CRM workforce is needed to complete “nation-building” infrastructure projects. Archeology risks being seen as a barrier to development and may lose political and public backing if CRM processes are seen to slow or stall economic development.

If the CRM sector does not have the capacity to complete infrastructure assessments, current trends suggest that development will push ahead without archeological assessment or engagement. Archeological sites will almost certainly be destroyed in the process.

Critics will argue that it’s essential to cut red tape and speed up regulatory approval of economically important projects, making CRM a lesser part of the approvals process. We counter that CRM assessment is essential to development approvals, which are increasingly reliant on meaningful Indigenous engagement and Indigenous consent to proceed.

Wanton destruction of Indigenous archeological sites will only lead to further conflict and loss of heritage. Canada must protect that heritage and has a lot to gain from doing so. By protecting heritage, archeologists can help ensure better outcomes for all.

The Conversation

Lindsay Amundsen-Meyer receives funding from the Social Sciences and Humanities Research Council of Canada and the Heritage Preservation Partnership Program (Arts, Culture and Status of Women).

Kenneth Roy Holyoke receives funding from the Social Sciences and Humanities Research Council of Canada and the Wenner-Gren Foundation for Anthropological Research.

Matthew Munro works for Stantec Consulting Ltd.

ref. Canada has too few professional archeologists, and that has economic consequences – https://theconversation.com/canada-has-too-few-professional-archeologists-and-that-has-economic-consequences-272422

Congress takes up health care again − and impatient voters shouldn’t hold their breath for a cure

Source: The Conversation – USA – By SoRelle Wyckoff Gaynor, Assistant Professor of Public Policy and Politics, University of Virginia

Congress has long been unable to come to an agreement on how to help constituents pay for health care. iStock/Getty Images Plus

As the clock struck midnight on Jan. 1, 2026, time ran out on Obamacare subsidies for more than 24 million Americans. These subsidies, propped up through various legislative packages over the years, lowered health insurance costs for Americans on the Obamacare exchange.

Following the expiration of these subsidies, health insurance premiums are skyrocketing for around 90% of the Americans who get their health insurance through the exchange. For many, the new year means a choice between paying exorbitant costs and going without health insurance altogether.

But unlike other policy challenges that Congress may face in 2026, the expiration of health insurance subsidies was not unexpected.

The extension of health care subsidies was the pivotal disagreement that ultimately led to the longest government shutdown in U.S. history in the fall of 2025. Democratic members, in support of extending the subsidies, faced off against the majority party in Congress: Republicans who wanted a short-term legislative fix that did not fund subsidies.

Republicans ultimately won the shutdown battle. And while Democrats attempted a last-gasp vote in December to reform and extend health care subsidies, the health care debate was yet again punted into the next year.

Congress has reconvened, and Democratic members – joined by four Republicans – used the discharge petition, the strongest procedural tool at the minority party’s disposal, to force congressional leaders to allow votes on an extension of Obamacare subsidies during the first week back in session. But overcoming congressional leadership is an immense challenge: Even if the effort succeeds in the House, Senate Republican leadership has made clear that the legislation has no future in that chamber.

The challenge of passing meaningful solutions to rising health care costs is not unique to this year or to this Congress. It has been a decades-long argument among lawmakers that shows no sign of being resolved.

Why is it so hard for Congress to lower the cost of health care for the people who sent them to Washington?

Like many policy problems, partisanship is partly to blame. But the sprawling complexities of the American health care system pose a particular challenge to members of Congress. As my own research finds, the outsized power and resources of congressional leaders mean that on Congress’ most intricate issues, rank-and-file members do not have the time, resources or, frankly, the interest to dedicate to meaningful problem-solving.

The failure of two health care proposals in December 2025, one from Democrats and one from Republicans, meant certain Obamacare enrollees face huge premium increases.

Government ‘dips its toe’

Americans face some of the highest health care costs in the world. Lawmakers on both sides of the aisle have long campaigned on lowering exorbitant costs and expanding equitable access.

Progressive politicians proposed the idea of national health insurance as early as the first decades of the 20th century, but efforts were limited to women and children, and any policy successes were modest and temporary.

Following the Great Depression and the advent of Social Security in 1935, Congress had warmed by the 1940s to the idea of the federal government providing social services. But attempts at widespread health care coverage failed to gain traction.

During the 1950s, as Americans began to expect more services from their tax dollars, formal coalitions formed in support of, and in opposition to, government-supported health care. Workers and unions, bolstered by Congress and the Supreme Court, used the power of collective bargaining to push for employee benefits such as health insurance. Doctors and medical providers, enjoying their current – and profitable – position, coordinated campaigns against national health insurance proposals.

The tension held until 1956, when the government dipped its toe into federally funded health care, enacting the first government-funded program to carry the name “Medicare”: health coverage for dependents of members of the armed forces.

In the private sector, employee demands and employer tax incentives led to a convoluted web of employer-based insurance programs. But for many Americans, particularly the retired and elderly and those with low-paying jobs, there remained few, if any, insurance options available.

Enter: Medicare and Medicaid

In the 1960s, under Democratic President Lyndon Johnson’s vision for a “Great Society,” and with a bipartisan vote in Congress, the federal government took the greatest step forward in providing federal health assistance for Americans: Medicare and Medicaid. The programs helped with the cost of health care via federal health insurance for those who were elderly and low-income, and they ushered in a new era of federal health policy.

This was a watershed moment for policymakers. With health care coverage now under the umbrella of the federal government, domestic policymaking responsibility expanded to match. For lawmakers, this meant not only new debates but also new federal agencies, new congressional committees, new lobbying firms and new interest group coalitions.

An elderly woman shows her gratitude to President Lyndon B. Johnson for his signing of the Medicare health care bill in April 1965.
Corbis via Getty Images

In the decades that followed, Congress’ responsibility for health care policy continued to expand: Coverage amounts and eligibility requirements were tweaked, programs were expanded to include prescription drugs and vaccines, health savings accounts were introduced, and more.

Yet still, the web of private and federal health insurance programs left millions of Americans uninsured. It wasn’t until 2010, under President Barack Obama, that the Democratic-controlled House and Senate passed the Patient Protection and Affordable Care Act, known as “Obamacare,” to close that gap. But as the 2025 government shutdown made evident, this solution was far from perfect – and quite expensive.

Why, despite more than a century of attention, does health care coverage remain one of the most perplexing and challenging domestic issues Congress faces – if not the most?

Consensus becomes more difficult

Part of this is a uniquely American problem: Like many services, the American health care system is based on economic incentives, and the foundational ideal of American liberalism means the government is inclined to let capitalism thrive.

As a former congressional staffer and now a scholar of Congress, I know that nowhere is the tension between societal support and personal freedom more apparent than in the debate over health care access.

But the issue is also immensely complex, and today’s Congress does not have the resources to meet the challenge, particularly in the face of a sprawling executive branch.

Over time, as more policies were adopted by the federal government, the scope of potential solutions expanded. To put it another way, as more cooks entered the policymaking kitchen, consensus became more difficult. The history of American health care is populated by private industries, powerful interest groups, federal officials and concerned citizens.

And the web of federal funding and private insurance companies across 50 states has produced a policy landscape that is far easier to tweak than to reform wholesale.

Reform is further stymied by the limited resources and expertise of the modern Congress. My research has shown that rank-and-file members are increasingly reliant on party leaders to take the lead on policymaking and problem-solving. Negotiating across coalitions and parties is unpleasant, and communicating policy changes on such a complex issue is difficult.

The result? Tepid policy tweaks made for partisan messaging.

And as ideological divisions on government support and personal autonomy become crystallized by the two parties in Congress, partisan policy solutions diverge even further. Collaboration becomes harder every year.

The continuing resolution passed late in 2025 funded the government only until Jan. 30, 2026, which means Congress is facing a Groundhog Day rather than a clean slate for the new year. With millions of Americans facing exploding health care costs, the question becomes: Whom will Congress follow – party leadership or concerned constituents?

The Conversation

SoRelle Wyckoff Gaynor does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Congress takes up health care again − and impatient voters shouldn’t hold their breath for a cure – https://theconversation.com/congress-takes-up-health-care-again-and-impatient-voters-shouldnt-hold-their-breath-for-a-cure-271998