Which cooking oil is best? Asking how they’re made could tell you more

Source: The Conversation – UK – By Serge Wich, Professor of Primate Biology, Liverpool John Moores University

Traditional palm oil extraction in Guinea plays important cultural and nutritional roles. Uzabiaga, 2017, Wikimedia Commons.

Vegetable oils are everywhere, and almost everyone has an opinion about them. From clever marketing in supermarket aisles to headlines about deforestation, they have become both the heroes and villains of the modern diet. But vegetable oils are vital to our lives and can help to address food insecurity.

Consumers trying to make ethical and sustainable purchases find themselves at odds with a marketplace where clickbait often masks reality and reliable information about traceability is often missing or hard to find. A pot of “palm-oil-free” peanut butter does not necessarily disclose what the palm oil was replaced with, or how and where the peanuts were produced.

In a market flooded with controversy and conflicting messages, informed consumption is a challenge. Which oils should we really be using and what’s the truth behind their production?

Consumers are entitled to clear ingredient transparency. More accurate information enables us to make choices that genuinely align with our values. Our recent research across three studies explores how nutrition, sustainability and transparency intersect in the world of vegetable oils.




Read more:
Iceland advert: conservation is intensely political, let’s not pretend otherwise


Few foods illustrate the complexity of our global food system quite like vegetable oils. Used in cooking, processed foods, cosmetics, plastics and biodiesel, vegetable oils have seen global demand quadruple in 50 years, making them a cornerstone of both diets and economies. An estimated 37% of agricultural crop land is used for oil crops, such as soybean, oil palm, rapeseed and sunflower.

Yet, this demand also drives major health and environmental pressures. With 2 billion more mouths to feed in the coming decades, several hundred million hectares of land – ten times the area of the UK – will need to be allocated to vegetable oil production. Decisions about which crops are used and how they are produced will have critical environmental and social consequences.

Vegetable oil crops, including soy and maize, take up 37% of the world’s agricultural crop land and are among the fastest growing commodities.
Erik Meijaard, CC BY-NC

Don’t call me fat

“Fat” has long held negative connotations. This has led to extreme health advice calling for anything from the total omission of seed oils to eating a stick of butter as a snack or adding a shot of coconut oil to one’s coffee.

Alongside this, alarmist marketing campaigns have painted certain vegetable oils, most notably palm oil, as the agent of mass extinction and deforestation.

But behind every bottle on a supermarket shelf lies a more complex story: a network of farmers, factories and policies that shape not only what we eat but also how land is used and how livelihoods are sustained.

We need to stop treating dietary fat as a villain. Yes, trans-fats are harmful, but evidence on saturated fats is mixed and context-specific. Frying risks are overlooked and fat replacers are often oversold.

Importantly, a global “fat gap” coexists with obesity: some people actually need more fat in their diet. And the idea that some fats are good for you while others aren’t is not clear cut.

The consumer blind spot

Claims about the foods we consume can become part of popular discourse. Take WWF’s 2009 claim that 50% of supermarket products contain palm oil. Is it true now? Our findings suggest that, at least, it is not true everywhere.

Could it ever have been proved true, even then? It’s hard to tell without clear historical evidence of how the original claim was derived. But has the claim encouraged millions of consumers to avoid palm oil? Absolutely.

This is not a matter of overturning palm oil’s bad reputation, but one of noting the sheer lack of clarity and transparency in ingredient information. Many food products list only “vegetable oil” without specifying type or origin, and sustainability labels are inconsistent and easily manipulated.

This lack of transparency fuels misinformation and prevents consumers from aligning purchases with their values. This fundamentally slows down any efforts from consumers and policymakers to improve sustainability within the food system.

The human dimension: culture and equity

Vegetable oils are more than ingredients. They’re woven into our culture, economies and identity. From palm oil in south-east Asia and west Africa to olive oil in the Mediterranean, their value extends beyond nutrition or environmental metrics.

Which oil a consumer chooses depends on culture, price, taste and many perceptions, some better informed than others.
Erik Meijaard, CC BY-NC

In an era of rising food insecurity, affordable oils remain a vital source of nutrition and income for millions. Calls to eliminate certain oils can carry hidden social costs, undermining livelihoods in producing regions. No oil is inherently good or bad.

Rather than asking which oil is best, we should question how our oils are made, who benefits, and which systemic changes truly serve people and the planet.

Ultimately, companies need to disclose sourcing origins and processing methods, and policymakers must mandate labelling that discloses an ingredient’s true environmental and social effects. Only then can consumers know how best to choose a varied mix of traceable oils, without the hype.

Technology such as QR codes and mobile applications can already enable this, and by demanding greater traceability, shoppers can help drive a shift towards fairer and more sustainable food systems.


Don’t have time to read about climate change as much as you’d like?

Get a weekly roundup in your inbox instead. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 45,000+ readers who’ve subscribed so far.


The Conversation

Serge Wich is a professor at Liverpool John Moores University. He receives funding from the United Nations Environment Programme (UNEP) under the Global Environment Facility (GEF) Congo Basin Impact Program (PCA/2022/5067) to conduct research on vegetable oils in Central African countries. He is also a member of the IUCN Global Oil Crops Task Force.

Erik Meijaard works for Borneo Futures and consults to Soremartec SA and Soremartec Italia, Ferrero Group, in the framework of its Sustainable Nutrition Scientific Board. Erik co-chairs the IUCN Global Oil Crops Task Force. We thank Emily Meijaard for developing the initial pitch and story line.

ref. Which cooking oil is best? Asking how they’re made could tell you more – https://theconversation.com/which-cooking-oil-is-best-asking-how-theyre-made-could-tell-you-more-267266

The real reason abolishing stamp duty won’t help first-time buyers

Source: The Conversation – UK – By Nigel Gilbert, Professor of Sociology, University of Surrey

sirtravelalot/Shutterstock

Scrapping stamp duty may sound like a quick fix to Britain’s housing crisis, but there’s reason to believe it would barely move the needle on affordability – while costing the Treasury billions.

At the Conservative party conference, leader Kemi Badenoch announced that a future Tory government would abolish stamp duty for people buying their main home. Badenoch called stamp duty a “tax on aspiration” that traps families and holds back social mobility.

But research we conducted with colleagues casts doubt on this claim. We tested it using a detailed computer model of the English housing market. Our results told a different story.

Our simulation found that removing stamp duty, which the Tories themselves estimated would cost between £9 billion and £11 billion a year in lost revenue, would make almost no difference to house prices, rents or people’s ability to buy a home. It might be politically attractive, but the proposal would deliver little benefit to those most in need of help and would hand the biggest savings to wealthier buyers.

To understand why, it is helpful to examine what stamp duty actually does. Buyers in England and Northern Ireland pay the tax on property purchases above £125,000, with rates increasing for more expensive homes. (Scotland and Wales now have their own systems.)
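The banded structure described here can be made concrete with a short sketch. The thresholds and rates below are illustrative assumptions (chosen to match the broad shape described in this article, with a zero band up to £125,000 and a top rate of 12%), not an authoritative statement of current SDLT rules.

```python
# Illustrative marginal-band calculator for a stamp-duty-style tax.
# The bands below are assumptions for demonstration only; real SDLT
# thresholds and rates change over time and vary by buyer type.
BANDS = [
    (125_000, 0.00),       # no tax on the first £125,000
    (250_000, 0.02),
    (925_000, 0.05),
    (1_500_000, 0.10),
    (float("inf"), 0.12),  # the article notes top rates as high as 12%
]

def stamp_duty(price: float) -> float:
    """Tax owed on a purchase, applying each rate only to the slice
    of the price that falls within its band."""
    tax, lower = 0.0, 0.0
    for upper, rate in BANDS:
        if price > lower:
            tax += (min(price, upper) - lower) * rate
        lower = upper
    return tax

# A buyer of a modest home saves little from abolition, while a buyer
# of an expensive home saves a far larger absolute amount:
print(stamp_duty(300_000))    # tax on a £300,000 home
print(stamp_duty(2_000_000))  # tax on a £2m home
```

Because each rate applies only to its slice of the price, the absolute saving from abolition grows steeply with the purchase price, which is the mechanism behind the regressive-benefit point made later in the article.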

The logic of abolishing it is simple enough: if you cut upfront costs, more people can afford to move. But our research shows this doesn’t translate into meaningful change.

We built an agent-based computer model that simulates the behaviour of thousands of virtual households across England. These digital households vary in income, family size, tenure and employment status. They make realistic decisions about saving, renting, buying and selling property over time. The model mirrors how the market behaves when conditions change, such as when interest rates rise or a tax is removed.
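The agent-based approach described above can be sketched in a few lines. The toy model below is a loose illustration only, not the study’s model: the household incomes, saving rate, deposit fraction, house price and flat 2% transaction tax are all invented parameters for demonstration.

```python
import random

# Toy agent-based sketch: households save out of income and buy once
# they can cover a deposit plus any transaction tax. All parameters
# here are illustrative assumptions, not the study's calibration.
DEPOSIT_FRACTION = 0.15
HOUSE_PRICE = 300_000

def years_to_buy(income: float, tax_rate: float, saving_rate: float = 0.1) -> int:
    """Years of saving before a household can cover deposit + tax."""
    target = DEPOSIT_FRACTION * HOUSE_PRICE + tax_rate * HOUSE_PRICE
    savings, years = 0.0, 0
    while savings < target:
        savings += saving_rate * income
        years += 1
    return years

random.seed(1)
incomes = [random.uniform(25_000, 80_000) for _ in range(10_000)]

with_tax = sum(years_to_buy(i, tax_rate=0.02) for i in incomes) / len(incomes)
no_tax = sum(years_to_buy(i, tax_rate=0.00) for i in incomes) / len(incomes)

print(f"avg years to buy, with tax: {with_tax:.1f}")
print(f"avg years to buy, no tax:  {no_tax:.1f}")
```

Even in this crude sketch, removing the tax shaves only a year or two off an average saving period of around a decade, echoing the article’s finding that the deposit, not the tax, is the binding constraint.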




Read more:
Housebuyers hate stamp duty. Why hasn’t it been reformed before now?


When we ran the model without stamp duty for main homes, very little changed. Buyers could save for a deposit slightly faster because they no longer needed to set aside money for the tax. But the overall patterns of prices and transactions remained almost identical to the current system.

In other words, removing stamp duty gave households a modest short-term boost without altering the deeper forces that shape the market.

Rising deposits

For most people, the real barrier to home ownership is the deposit, not the tax. The average first-time buyer now needs around £60,000 to put down a deposit, while abolishing stamp duty would save them only a few thousand pounds. It’s the difference between climbing a mountain and skipping the last step.

More importantly, the benefits of scrapping stamp duty wouldn’t be shared evenly. Buyers of high-value homes, who currently face rates as high as 12%, would gain the most.

First-time buyers and those buying modest properties would see only a small difference. That makes the policy regressive – it helps those already well-off far more than those struggling to get on the ladder.

Our model also highlights how tightly connected the housing system is. Changes in one part of the market ripple through others. If more affluent buyers rush to buy expensive homes, prices can rise further up the chain, offsetting any small gains made lower down. Renters, meanwhile, would see no direct benefit.

Conservatives knew the stamp duty pledge would grab headlines.
Steve Travelguide/Shutterstock

This complexity explains why policies that look straightforward often disappoint in practice. Housing markets are shaped by multiple factors: interest rates, planning restrictions, the supply of new homes and people’s incomes. Tweaking a single tax rarely shifts the overall picture.

The findings underline a broader point about policymaking. Governments often announce headline-grabbing tax cuts or incentives without fully testing their effects. But simulation models like ours can provide a powerful way to forecast outcomes before they happen in the real world.

They allow researchers to explore how thousands of households, landlords and lenders interact, revealing unintended consequences that might otherwise be missed.

In this case, the message is clear: abolishing stamp duty might sound like a lifeline for aspiring homeowners, but it’s unlikely to change who can actually afford to buy. The real solutions lie elsewhere: in building more homes, addressing stagnant wages and improving access to affordable credit.

The housing crisis is one of the defining challenges of our time. Quick fixes make for good headlines, but data-driven evidence should guide decisions that affect millions of people. Before policymakers reach for the next easy answer, they would do well to test whether it’s genuinely likely to work.

The Conversation

Nigel Gilbert receives funding from the Economic and Social Research Council. He is a Fellow of the Royal Academy of Engineering.

Corinna Elsenbroich receives funding from UKRI. She is a member of the Labour Party.

Yahya Gamal (Yahya Gamalaldin) receives funding from UKRI.

ref. The real reason abolishing stamp duty won’t help first-time buyers – https://theconversation.com/the-real-reason-abolishing-stamp-duty-wont-help-first-time-buyers-267584

Guillermo del Toro’s Frankenstein: beguiling adaptation stays true to heart of Mary Shelley’s story

Source: The Conversation – UK – By Sharon Ruston, Professor of English and Creative Writing, Lancaster University

Frankenstein has clearly been a labour of love for the director Guillermo del Toro. I am editing Frankenstein for The Oxford Complete Works of Mary Shelley so have spent a lot of time with her tale too. While del Toro’s film deviates from Shelley’s novel in noticeable but interesting ways, it remains true to the heart of her story, with its obvious compassion and empathy for the Creature created in an unorthodox experiment by a young scientist.

The film is grand and lush, the costumes are incredibly opulent, the settings awe-inspiring, and the score emotional. I watched it in a sold out 600+ seat cinema in Manchester, filled with a young audience mainly in their 20s and late teens who laughed, cried and winced along with the action on screen.




Read more:
Frankenstein at 200 and why Mary Shelley was far more than the sum of her monster’s parts


It is divided into two parts – scientist Victor Frankenstein’s (Oscar Isaac) narrative and then the Creature’s (Jacob Elordi) narrative – framed by and interspersed with the story of a ship caught in the ice and a captain faced with mutinous sailors who want to give up on their foolhardy mission and return home. This very much follows Shelley’s original tale.

A Consultation of Physicians, or The Company of Undertakers is an engraving by the English artist William Hogarth that satirises the medical profession.
Wellcome

The main narrative of the film has been moved to 1857; Shelley’s book is set in the 1790s. This enables a far greater range of technology to be employed – from early photography and flushable toilets to gigantic voltaic piles, which were the first electrical batteries.

However, many scenes evoke earlier times, such as those featuring the medical professionals who reject Victor Frankenstein’s heretical experiments with corpses. Their gowns and large white wigs have an 18th-century aesthetic, recalling those depicted in William Hogarth’s 1736 engraving A Consultation of Physicians, or The Company of Undertakers.

The film makes full use of historical medical knowledge. For example, one breakthrough in Victor’s experiments comes when he is given a map of the lymphatic system, said to be a fifth, long-lost Evelyn Table. The four real tables are from 17th-century Italy and were an educational tool demonstrating the venous, arterial and nervous systems. Famously, these wooden boards were pasted with real human tissue.




Read more:
The dark history of medical illustrations and the question of consent


Just as the surgeon who gathered the material for the Evelyn Tables is unlikely to have asked for the patient’s consent before their death, Victor is relentless in his search for body parts to build his monster. One of the most visceral scenes is of him surrounded by piles of heads, legs, and other human parts, sawing off what he intends to use, and lugging sacks of unwanted bits of bodies to be slung into the gutter.

Throughout, Victor is brilliant but single-minded to the point of monomania. He is likened, as he is in the novel’s subtitle, to the mythological figure Prometheus who stole fire from the gods to give to humans. But, unlike Prometheus, Victor’s actions seem to lack much altruistic purpose.

The one character who sees through him is Elizabeth Lavenza (Mia Goth), newly imagined as the intended wife of Victor’s brother William (Felix Kammerer). Elizabeth in this version is herself an amateur entomologist (a scientist who studies insects). Her scarab necklace, with its Egyptian symbolism of renewal and rebirth, signals her affinity with creatures that are often misunderstood.

Del Toro creates animosity between Victor and his father Leopold (Charles Dance) to prepare us for the inadequacies of Victor’s relationship with his creation. He also leans into the theory that Victor and his Creature are doubles. Nowhere is this more clear than when the scientist identifies himself as Victor and the monster applies that name to himself. Both characters also emit the same animalistic growl when they are angry.

There are also visual signs of this doubling. At first, the monster is practically naked bar a few bandages until he acquires a long coat from a fallen soldier and other swaddling layers, which only enhances his formidable size.

When Victor begins to hunt the Creature, he is dressed in a similarly huge fur coat. His gait is ambling, much like the Creature’s own first steps, as he drags his prosthetic leg across the snow in pursuit. Their resemblance seems to signify a merging of identities. It is difficult to know who is the hunter and who is the hunted.




Read more:
Two centuries on, Frankenstein is the perfect metaphor for the Anthropocene era


Created through a process declared unholy, obscene and an abomination, and dismissed by Victor as a mistake, the Creature endures. In fact, as the film ends we are left with a final close-up of the monster’s face, cementing del Toro’s sympathy for the Creature.

In the film, the Creature is afraid throughout, attempting to be gentle and wanting to find affection from the humans he meets, but most often encountering pain and suffering instead. Ultimately, he is not of “the same nature” as humans, which allows for some intriguing differences.

Despite this, he insists that he is not a “something” but a “someone”. Those watching will be left with the Creature’s words reverberating in their heads, words which shine a harsh light on us all: “the world will hunt you and kill you just for being who you are.”

The Conversation

Sharon Ruston does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Guillermo del Toro’s Frankenstein: beguiling adaptation stays true to heart of Mary Shelley’s story – https://theconversation.com/guillermo-de-toros-frankenstein-beguiling-adaptation-stays-true-to-heart-of-mary-shelleys-story-267570

Drought, sand storms and evacuations: how Iran’s climate crisis gets ignored

Source: The Conversation – UK – By Sanam Mahoozi, PhD Candidate Journalism, City St George’s, University of London

Iran and Israel fought a 12-day war in June. Although a ceasefire was declared the same month, news coverage of Iran continues to focus on the conflict’s aftermath and the Middle East’s tense political situation.

Meanwhile, Tehran – home to more than 10 million people – is facing one of its worst water shortages in decades. Dams near the capital are at their lowest levels for nearly 70 years – the Karaj dam (one of the city’s major suppliers), which has 25 million cubic metres of water storage, is 86% empty.

In the centre of the country, the city of Isfahan is sinking as subsidence swallows cars and pedestrians. Land subsidence is mainly caused by over-extraction of ground water for agriculture – more than 90% of Iran’s water is extracted for agricultural use. Many of Iran’s iconic lakes have turned into a bed of salt.

Even though schools and roads in Tehran were evacuated in September due to the risk of collapse, international media coverage of this major environmental problem remains alarmingly low – limited mostly to local and Persian-language diaspora outlets.

Earlier this year, the country’s southern provinces were blanketed by sand and dust storms that sent thousands to hospitals and disrupted infrastructure. Again, this went mostly unreported outside Iran.


Wars and climate change are inextricably linked. Climate change can increase the likelihood of violent conflict by intensifying resource scarcity and displacement, while conflict itself accelerates environmental damage. This article is part of a series, War on climate, which explores the relationship between climate issues and global conflicts.


There has also been little international coverage of the environmental impact of the war on Iran.

In contrast, local media have reported that Israel’s missile attacks on oil depots close to Tehran released 47,000 tons of greenhouse gases into the city’s atmosphere, causing air pollution. They claimed surface and groundwater systems, soil and wider ecosystems have all been damaged by the leakage of industrial wastewater, urban sewage and other forms of pollution including noise, vibration, radiation and heat – all of which pose a threat to the lives of humans, animals and plants.

For months, international news outlets have focused their coverage of Iran on questions about its nuclear programme and worsening ties with the west. They have covered espionage, sanctions, cybersecurity and Iranian officials’ statements about uranium enrichment and nuclear weapons.

This is not surprising. An analysis by the Reuters Institute for the Study of Journalism at the University of Oxford found that newsrooms covering the Middle East mainly report on war and conflict. Other academic studies underline that slow-moving but far-reaching environmental issues are far down their list of priorities.

Iran is being hit by a major drought.

Even inside Iran, the news media has largely concentrated on the war. During the conflict, conservative Iranian state-affiliated news outlets such as Tasnim, Mizan and Kayhan focused almost entirely on military developments and official narratives of “national defence” and “foreign threats.”

But when the fighting ended, some Iranian newspapers, particularly those which advocate for gradual social, political and press freedom (along with the state-run IRNA news agency), started to cover the drought and water shortages. The conservative Iranian news media outlets are now covering these stories a little, but less so than the reformist media, such as Payamema and Shargh.

Today, the Middle East faces some of the world’s worst environmental crises – including droughts, floods, and sand and dust storms – with enormous consequences. Across Iran’s provinces, many rivers and wetlands have dried up. Air pollution is getting worse and power cuts are devastating lives and livelihoods.

What does the world know?

My research looks at how the media reports climate change across the Middle East and North Africa, and particularly in Iran.

I also write for news organisations about water and climate change. In doing this research, I have found that Iran’s environmental problems are largely driven by decades of government mismanagement and the overexploitation of water resources – including excessive dam construction and groundwater use for agriculture.

Even on the few occasions when international media outlets have covered Iran’s water crisis in recent months, the lead section of the coverage is often tied to the war. While reporting on war is essential to expose its human costs and security implications, environmental coverage is equally important. Climate change will not pause for a ceasefire, and neglecting it risks overlooking a crisis that affects everyone.



The Conversation

Sanam Mahoozi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Drought, sand storms and evacuations: how Iran’s climate crisis gets ignored – https://theconversation.com/drought-sand-storms-and-evacuations-how-irans-climate-crisis-gets-ignored-266725

Russia now has a strategy for a permanent state of hybrid war

Source: The Conversation – UK – By Stefan Wolff, Professor of International Security, University of Birmingham

Drone incursions into Poland, fighter jets in Nato airspace, election interference in Romania and Moldova and “little green men” (soldiers of unconfirmed origin) in Estonia. These are just a few examples of the tactics Russia has been using in the past few weeks.

They appear to be part of a much broader strategy variously referred to as the “Gerasimov doctrine”, non-linear war or new-generation warfare. What lies behind these terms is the very worrying and very real “weaponisation of everything” – Moscow’s strategy to reshape international order.

As a researcher on great-power rivalries in Eurasia, I observed this kind of hybrid warfare long before the full-scale invasion of Ukraine. We saw it most obviously in Russian interference in the 2016 US presidential election. But it has intensified since the full-scale war began in 2022.

These tactics cover a broad spectrum. They range from information operations, including propaganda and disinformation campaigns, to attacks on critical infrastructure, such as undersea cables. They involve the use of drones to disrupt air traffic and malicious cyber-attacks against Russia’s enemies. They have also included assassination campaigns against defectors and dissidents in the UK and elsewhere.

Russia is struggling to retain its traditional influence in post-Soviet regions like the south Caucasus and central Asia. Meanwhile it has also sought to extend its influence elsewhere, such as in Latin America or Africa.

But the main focus of the Kremlin’s hybrid warfare is Europe. The continent has become a key battleground in Moscow’s attempts to restore Russia to its erstwhile great-power status and reclaim a Soviet-style sphere of influence.

At the heart of these efforts is the war against Ukraine. For Russia, victory there is more than the mere military defeat of Ukraine and the permanent weakening of the country along the lines of Moscow’s frequently stated war aims: annexation of one-fifth of Ukrainian territory, limits on the country’s armed forces and no prospect of Nato membership.

Victory is clearly important to Putin in itself, but he also needs it to signal the extent of his power and to highlight the west’s impotence in preventing Ukraine’s defeat.

Weakening the west

To win the war against Ukraine, the Kremlin needs to weaken the west and its resolve. In this sense, the intensification of the Kremlin’s hybrid war against Kyiv’s European allies is a tool Moscow uses as part of its broader war effort.

But weakening the west is also an end in itself. A strong EU and Nato alliance would prevent Russia from reclaiming its sphere of influence in central and eastern Europe.

Europe has been slow to rise to the challenge of upping its defence game against Russian aggression. But in the end the simple numbers do not favour Russia. The size of the EU’s economy is roughly ten times the size of Russia’s, and its population is more than three times that of Russia.

The EU’s defence expenditure in 2024 stood at just under US$400 billion (£298 billion), up 19% from 2023, and equal to 1.9% of member states’ GDP. According to the International Institute for Strategic Studies, Russia, by comparison, spent US$145 billion, or an ultimately probably unsustainable 6.8% of its total GDP.

In terms of purchasing power parity (the buying power of different countries’ currencies using a common “basket of goods”), Russia still marginally outspends the EU. But not if non-EU Nato members such as the UK and Norway are factored into the equation.
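As a rough illustration of the PPP adjustment being described, the sketch below applies an assumed conversion factor to the nominal figures quoted above. The factor of 2.8 is a hypothetical value for demonstration, not an official statistic.

```python
# Rough sketch of a purchasing-power-parity adjustment. A PPP factor
# says how much further a unit of spending goes domestically than the
# market exchange rate implies.
EU_SPEND_USD = 400e9   # EU defence spend, 2024 (figure from the article)
RU_SPEND_USD = 145e9   # Russian defence spend (figure from the article)
RU_PPP_FACTOR = 2.8    # assumed: each nominal dollar buys ~2.8x at home

ru_spend_ppp = RU_SPEND_USD * RU_PPP_FACTOR
print(f"Russia, PPP-adjusted: ${ru_spend_ppp/1e9:.0f}bn vs EU ${EU_SPEND_USD/1e9:.0f}bn")
```

Under this assumed factor, Russia marginally outspends the EU alone – but, as the article notes, adding non-EU Nato members such as the UK and Norway tips the balance back.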

So far, Russia has not been able to decisively outperform Ukraine’s military on the battlefield. With the transatlantic alliance – and hence US support – still by and large intact and a more assertive coalition of European allies backing Kyiv emerging, this is unlikely to change soon.

That is why Russia employs its wide range of hybrid warfare tools against European societies. It needs to sow doubt over their ability to prevail, to cause perceived hardship that makes supporting Ukraine unattractive, and to support populist allies who promote pro-Russian narratives, be they government parties in Hungary or Slovakia or opposition parties in Germany and elsewhere.

Permanent state of war

From the Kremlin’s perspective, the logic is probably very simple. Using the full spectrum of hybrid warfare signals that Russia has the capability and the will to make the costs for supporting Ukraine unacceptable for Europe.

With European support for Kyiv ebbing away, Russia will either defeat Ukraine outright on the battlefield or force the country into humiliating concessions at the negotiation table. Either outcome will damage European credibility and morale and allow Moscow to set the terms of a reshaping of the continent’s security order along the lines of one of the Kremlin’s favourite current talking points – “indivisible security”.

Indivisible security was one of the themes of Vladimir Putin’s speech at the annual meeting of the Valdai discussion club – a gathering of Russian and pro-Russian foreign and security policy analysts. By this he simply means a prioritisation of Russian interests over those of its neighbours – in other words a western recognition of a Russian sphere of influence.

But it would be a mistake to assume that recognising such a Russian sphere of influence would satisfy the Kremlin today in the same way as it may have satisfied Soviet rulers during the cold war. On the contrary, a Russian victory in and beyond Ukraine would most likely encourage dreams of further expansion.

The 2025 annual report of the Valdai club, written by some of Russia’s leading foreign policy thinkers, is instructive in this respect. Titled “Dr Chaos or how to stop worrying and love the disorder”, the report posits that the very purpose of war may have changed from victory to “maintaining a balance necessary for a period of relative peaceful development”.

If turned into actual policy, the kind of hybrid warfare the Kremlin has pursued against Europe for more than a decade becomes a permanent feature of Russia’s relations with Europe. This is a vision that exposes the limits of Russia’s aspirations – managing chaos and loving disorder – and the dangers they imply for the rest of the world.

The Conversation

Stefan Wolff is a past recipient of grant funding from the Natural Environment Research Council of the UK, the United States Institute of Peace, the Economic and Social Research Council of the UK, the British Academy, the NATO Science for Peace Programme, the EU Framework Programmes 6 and 7 and Horizon 2020, as well as the EU’s Jean Monnet Programme. He is a Trustee and Honorary Treasurer of the Political Studies Association of the UK and a Senior Research Fellow at the Foreign Policy Centre in London.

ref. Russia now has a strategy for a permanent state of hybrid war – https://theconversation.com/russia-now-has-a-strategy-for-a-permanent-state-of-hybrid-war-266936

Can Netanyahu survive peace?

Source: The Conversation – UK – By John Strawson, Emeritus Professor of Law, University of East London

Now that a ceasefire has come into effect in Gaza, Israel’s long-serving prime minister, Benjamin Netanyahu, faces the dilemma of how to campaign ahead of the next national elections. These elections must be held, at the latest, in one year’s time.

In a meeting at the Knesset in Jerusalem on October 13, both Netanyahu and opposition leader Yair Lapid made speeches that seemed to open the election campaign. Netanyahu chose to cast himself as war victor, while Lapid emphasised the liberal values contained in Israel’s declaration of independence.

Donald Trump also addressed Israeli lawmakers at the Knesset and, in his speech, paid many compliments to Netanyahu. He even directed a request to Israel’s president, Isaac Herzog, to pardon Netanyahu over longstanding fraud and bribery charges – something Herzog has already suggested.

But the US president also issued Netanyahu with a warning that Israel could not fight the world. Netanyahu has received a lesson in big power politics over the past month that will not have been welcomed.

It came after his miscalculation in attacking Qatar on September 9, where Hamas representatives were discussing the possibility of a plan to end the war in Gaza. Netanyahu was called to the White House and made to apologise to the Qatari government.

He was then pressured into signing up to Trump’s 20-point peace plan, which includes a “realistic pathway” to Palestinian self-determination and statehood. This is something Netanyahu has long opposed and puts him in a difficult position with his electoral base, which is vociferously against a Palestinian state.

The question now is whether Netanyahu can turn Trump’s plan to his advantage and win the next election.

Some commentators, such as Middle Eastern affairs expert Shira Efron, think Netanyahu has not realised that the Gaza deal represents a defeat for his government. Efron says the agreement contradicts what Netanyahu has sold Israelis for two years: the promise of total victory and the destruction of Hamas.

However, I think this underestimates a politician who has made a career out of turning obstacles into opportunities. His first election as Israel’s prime minister in 1996, for example, came despite trailing his rival Shimon Peres by a substantial margin in opinion polls at the start of the election campaign.

He has also learned to build coalitions across the spectrum: with figures on the left, such as former Israeli prime minister Ehud Barak; in the centre, such as Benny Gantz; and, of course, with the far-right politicians Bezalel Smotrich and Itamar Ben Gvir.




Read more:
Itamar Ben-Gvir and Bezalel Smotrich: the Netanyahu government extremists sanctioned by the UK


His speech in the Knesset during Trump’s visit was vintage Netanyahu. He spun the peace deal, which he was forced to sign, into a massive victory for Israel’s war aims in Gaza. Only weeks before he had been saying that Hamas could only be crushed by conquering Gaza City.

It is true that the living hostages have now been freed. But, in deploying 7,000 armed men to control areas in Gaza vacated by Israeli forces, Hamas hardly seems destroyed. Netanyahu has nonetheless convinced himself – and will now try to convince the electorate – that he has led Israel to total victory.

Prospects of success

Opinion polls since the October 7 Hamas attacks in 2023 have not made good reading for Netanyahu. However, despite this disaster taking place on his watch, Netanyahu’s polling has never been disastrous.

Current polls suggest that, if elections were held today, his Likud party would be the biggest single party in the Knesset. However, his ruling coalition would be unlikely to return to power. The same polling gives the Netanyahu bloc 51 seats compared to 55 for the opposition, with the balance held by Arab parties.

The opposition bloc ranges from the right, led by former Israeli prime minister Naftali Bennett, to the dovish Democrats. They are united in opposition to Netanyahu’s style of government and his judicial reforms, but they have not yet found a convincing narrative of what they stand for.

Whereas Netanyahu unites his bloc, the opposition is divided between several strong personalities. The leaders that make up this so-called “change bloc” – Naftali Bennett, Avigdor Lieberman, Benny Gantz, Yair Lapid and Yair Golan – all think they should be Israel’s next prime minister.

But unlike the last election in 2022, where these parties fought as divided incumbents after a short period in office, they have begun coordinating well in advance of the elections.

The election will be held in the wake of the still palpable trauma after October 7 and exhaustion from two years of war fought on many fronts. Despite this, Israel’s civil society remains healthy.

This has been best exemplified by the Hostages and Missing Family Forum, a body that has not only campaigned publicly for the hostages’ return but also provided vital services to the families and released hostages. The big question will be the effect of such movements on the way Israelis vote.

Much, of course, will also depend on how Trump’s peace plan develops on the ground. If the US and its allies can deploy an international stabilisation force and create a semblance of a governing authority in Gaza to take over from Hamas, then calm may be maintained. This could boost Netanyahu’s reelection chances if he can spin it as a win for Israel.

Looming over the plan is the decommissioning of Hamas – not just guns and rockets but also dismantling its network of tunnels beneath Gaza. This process is unlikely to be smooth. It is possible that Trump’s plan will still allow Netanyahu some more opportunities to demonstrate military prowess ahead of the election. This might help mobilise support for his coalition, particularly among the far right.

Since the start of the war in Gaza, the Israeli prime minister has compared himself to Winston Churchill, Britain’s leader during the second world war. Churchill did indeed win the war, but went on to lose elections in 1945. Netanyahu will be working hard to prove that part of the comparison wrong.

The Conversation

John Strawson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Can Netanyahu survive peace? – https://theconversation.com/can-netanyahu-survive-peace-267383

The BBC is a partisan battleground – why does Japan’s public broadcaster escape the same fate?

Source: The Conversation – UK – By Steven David Pickering, Honorary Professor, International Relations, Brunel University of London

William Barton/Shutterstock

Public service broadcasters are supposed to be the most trusted news outlets in democratic societies. Funded through models like licence fees and free from advertising, they are meant to stand apart from commercial media.

But our new study of trust in the BBC in the UK and NHK in Japan shows that reality is more complicated. Politics and ideology divide trust in public broadcasters in very different ways.

Like the BBC, NHK is a nationwide broadcaster with a mandate to serve the public interest and a fee-based funding model – making it a useful comparison. But there are differences: NHK’s budget is approved every year by parliament, and its style is often seen as cautious or technocratic. Unlike the BBC, it has not become a lightning rod for partisan battles.

In the UK, the BBC commands middling levels of trust overall, but those levels are deeply polarised. And the results of a recent survey reveal that the public is worried about political interference in the BBC.

As part of our TrustTracker research, we asked people how much they trust the BBC on a scale from one (“not at all”) to seven (“completely”). We ran this survey every month for 19 months, from December 2022 to June 2024. We also asked them how they voted at the 2019 general election.

On that 1-7 scale, average trust in the BBC was 3.6. As for political affiliation, the people most trusting of the BBC were Lib Dem voters, giving an average of 4.5 on the 1-7 scale. Next up were Labour voters, coming in around 3.9. Conservative voters were nearer 3.2, while people who voted for the Brexit party, the predecessor of Reform, averaged just 2.2.

This is a divide that we don’t really see in Japan: there is much less polarisation around NHK than there is for the BBC.

In Japan, trust in NHK clusters in the mid-3s across the major parties. Supporters of the ruling Liberal Democratic party (LDP, centre-right) tend to rate NHK a little lower than average, while backers of the Constitutional Democratic party (CDP, centre-left) rate it slightly higher. Supporters of the Japan Innovation party (JIP, right-leaning) and the Japanese Communist party (JCP, left) fall in between, but the differences are modest.

In Japan, NHK isn’t the partisan lightning rod that the BBC has become. Trust levels are modest and remarkably uniform, suggesting that while it may be seen as dull or technocratic, it is not a site of political polarisation.

Partisan battleground

The UK’s partisan divide around media becomes even more apparent when we look beyond the Beeb. Many people have little trust in the news they read on social media. But people who voted for the Brexit party in 2019 trust the BBC even less than social media.

That tells us something important. The BBC has become the focal point for wider partisan scepticism about the media. In our study, Conservative voters show lower trust than Labour or Lib Dem voters, but it is Brexit party supporters who are the most hostile, rating the BBC even less trustworthy than social media news.

This division has been baked in for years. When we look back to the 2016 referendum, Leave voters average about 3.0 on our BBC trust scale, compared with 4.1 among Remain voters. That gap highlights how Brexit now functions as a shorthand for a wider political divide in the UK, one that still shapes how people view certain issues, including the BBC.

The BBC has become a political lightning rod: rejected by anti-establishment voters on the right, and strongly embraced by liberal-centrist voters.

Why the difference?

The contrast is not just about governance models or funding arrangements. The BBC’s licence fee and NHK’s parliamentary budget oversight play a role, but the bigger story is political culture.

In the UK, criticism of the BBC has become part of political identity on the right. Conservative politicians and sections of the press have long accused it of bias, not only in its coverage of Europe, immigration and culture, but also in its metropolitan outlook.

These debates escalated after Brexit, when the BBC came under fire from both Conservative MPs and the Brexit Party, which cast the broadcaster as out of touch with “ordinary people”.

The BBC is not simply one broadcaster among many; it has become a symbol around which partisan distrust gathers.

In Japan, NHK attracts none of this polarisation. Across supporters of all mainstream parties, trust in NHK sits at a broadly similar, middling level. Small anti-NHK protest parties exist, but they remain fringe. Mainstream parties have not made hostility to NHK a defining issue.

An NHK cameraman films a reporter with an umbrella on a rainy day
Trust levels in Japan’s NHK broadcaster are modest yet steady across the political spectrum.
Ned Snowman/Shutterstock

As a result, NHK is seen less as a political symbol and more as a technocratic background institution: rarely inspiring enthusiasm, but also not a target of partisan attack or polarisation.

One of the BBC’s founding principles is that it needs to entertain. But this opens it to both loyalty and attack. Supporters value the passion it inspires, but this comes at the cost of deep political polarisation.

NHK, by contrast, avoids such extremes by being more technocratic. This also means it struggles to command the same public enthusiasm or cultural weight as the BBC. Whether broadcasters should aspire to one model or the other depends on whether stability or symbolism is more important in sustaining public trust.

At a time when misinformation spreads rapidly and public trust in institutions is under strain, public broadcasters are supposed to provide a shared ground of reliable information. Our findings suggest they still do, but in different ways.

The Conversation

Steven David Pickering received funding from the UK Research and Innovation’s Economic and Social Research Council (UKRI-ESRC, grant reference ES/W011913/1).

Yosuke Sunahara receives funding from Japan Society for the Promotion of Science (JSPS, grant reference JPJSJRP 20211704).

Martin Ejnar Hansen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The BBC is a partisan battleground – why does Japan’s public broadcaster escape the same fate? – https://theconversation.com/the-bbc-is-a-partisan-battleground-why-does-japans-public-broadcaster-escape-the-same-fate-266315

African languages for AI: the project that’s gathering a huge new dataset

Source: The Conversation – Africa – By Vukosi Marivate, Chair of Data Science, Professor of Computer Science, Director AfriDSAI, University of Pretoria

The African Next Voices project has started out with sites in Kenya, Nigeria and South Africa. Iuliia Anisimova/iStock

Artificial intelligence (AI) tools like ChatGPT, DeepSeek, Siri or Google Assistant are developed in the global north and trained on English, Chinese or European languages. In comparison, African languages are largely missing from the internet.

A team of African computer scientists, linguists, language specialists and others have been working on precisely this problem for two years already. The African Next Voices project, primarily funded by the Gates Foundation (with other funding from Meta) and involving a network of African universities and organisations, recently released what’s thought to be the largest dataset of African languages for AI so far. We asked them about their project, with sites in Kenya, Nigeria and South Africa.


Why is language so important to AI?

Language is how we interact, ask for help, and hold meaning in community. We use it to organise complex thoughts and share ideas. It’s the medium we use to tell an AI what we want – and to judge whether it understood us.

We are seeing an upsurge of applications that rely on AI, from education to health to agriculture. These models are trained on large volumes of (mostly) language data and are called large language models, or LLMs – but they exist for only a few of the world’s languages.




Read more:
AI in Africa: 5 issues that must be tackled for digital equality


Languages also carry culture, values and local wisdom. If AI doesn’t speak our languages, it can’t reliably understand our intent, and we can’t trust or verify its answers. In short: without language, AI can’t communicate with us – and we can’t communicate with it. Building AI in our languages is therefore the only way for AI to work for people.

If we limit whose language gets modelled, we risk missing out on the majority of human cultures, history and knowledge.

Why are African languages missing and what are the consequences for AI?

The development of language is intertwined with the histories of people. Many of those who experienced colonialism and empire have seen their own languages being marginalised and not developed to the same extent as colonial languages. African languages are not as often recorded, including on the internet.

So there isn’t enough high-quality, digitised text and speech to train and evaluate robust AI models. That scarcity is the result of decades of policy choices that privilege colonial languages in schools, media and government.




Read more:
AI chatbots can boost public health in Africa – why language inclusion matters


Language data is just one of the things that’s missing. Do we have dictionaries, terminologies, glossaries? Basic tools are scarce, and many other issues raise the cost of building datasets. These include African language keyboards, fonts, spell-checkers, tokenisers (which break text into smaller pieces so a language model can process it), orthographic variation (differences in how words are spelled across regions), tone marking and rich dialect diversity.
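To see why tokenisation in particular matters, here is a toy sketch. It is purely illustrative: the greedy longest-match rule is a simplified stand-in for how BPE or WordPiece tokenisers split words, and the tiny vocabulary is invented for the example. A subword vocabulary learned mostly from English splits an English word into a few meaningful pieces, but shatters an isiZulu word into single characters – inflating sequence lengths and degrading model quality for under-represented languages.

```python
def greedy_tokenise(word, vocab):
    """Greedy longest-match subword tokenisation: at each position, take
    the longest vocabulary entry, falling back to a single character."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

# A subword vocabulary skewed towards English, as in most commercial models
vocab = {"inter", "est", "ing", "tion", "er", "ed"}

print(greedy_tokenise("interesting", vocab))    # 3 meaningful subwords
print(greedy_tokenise("ngiyakuthanda", vocab))  # isiZulu "I love you": 13 single characters
```

The isiZulu word costs more than four times as many tokens as the English one, which is one concrete way an English-centric vocabulary makes African-language text more expensive to model.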

The result is AI that performs poorly and sometimes unsafely: mistranslations, poor transcription, and systems that barely understand African languages.

In practice this denies many Africans access – in their own languages – to global news, educational materials, healthcare information, and the productivity gains AI can deliver.

When a language isn’t in the data, its speakers aren’t in the product, and AI cannot be safe, useful or fair for them. They end up missing the necessary language technology tools that could support service delivery. This marginalises millions of people and increases the technology divide.

What is your project doing about it – and how?

Our main objective is to collect speech data for automatic speech recognition (ASR). ASR is an important tool for languages that are primarily spoken rather than written. This technology converts spoken language into written text.

The bigger ambition of our project is to explore how data for ASR is collected and how much of it is needed to create ASR tools. We aim to share our experiences across different geographic regions.

The data we collect is diverse by design: spontaneous and read speech; in various domains – everyday conversations, healthcare, financial inclusion and agriculture. We are collecting data from people of diverse ages, gender and educational backgrounds.

Every recording is collected with informed consent, fair compensation and clear data-rights terms. We transcribe with language-specific guidelines and a large range of other technical checks.

In Kenya, through Maseno Centre for Applied AI, we are collecting voice data for five languages. We’re capturing the three main language groups: Nilotic (Dholuo, Maasai and Kalenjin), Cushitic (Somali) and Bantu (Kikuyu).




Read more:
What do Nigerian children think about computers? Our study found out


Through Data Science Nigeria, we are collecting speech in five widely spoken languages – Bambara, Hausa, Igbo, Nigerian Pidgin and Yoruba. The dataset aims to accurately reflect authentic language use within these communities.

In South Africa, working through the Data Science for Social Impact lab and its collaborators, we have been recording seven South African languages. The aim is to reflect the country’s rich linguistic diversity: isiZulu, isiXhosa, Sesotho, Sepedi, Setswana, isiNdebele and Tshivenda.

Importantly, this work does not happen in isolation. We are building on the momentum and ideas from the Masakhane Research Foundation network, Lelapa AI, Mozilla Common Voice, EqualyzAI, and many other organisations and individuals who have been pioneering African language models, data and tooling.

Each project strengthens the others, and together they form a growing ecosystem committed to making African languages visible and usable in the age of AI.

How can this be put to use?

The data and models will be useful for captioning local-language media, voice assistants for agriculture and health, and call-centre support in local languages. The data will also be archived for cultural preservation.




Read more:
Hype and western values are shaping AI reporting in Africa: what needs to change


Larger, balanced, publicly available African language datasets will allow us to connect text and speech resources. Models will not just be experimental, but useful in chatbots, education tools and local service delivery. The opportunity is there to go beyond datasets into ecosystems of tools (spell-checkers, dictionaries, translation systems, summarisation engines) that make African languages a living presence in digital spaces.

In short, we are pairing ethically collected, high-quality speech at scale with models. The aim is for people to be able to speak naturally, be understood accurately, and access AI in the languages they live their lives in.

What happens next for the project?

This project only collected voice data for certain languages. What of the remaining languages? What of other tools like machine translation or grammar checkers?

We will continue to work on multiple languages, ensuring that we build data and models that reflect how Africans use their languages. We prioritise building smaller language models that are both energy efficient and accurate for the African context.

The challenge now is integration: making these pieces work together so that African languages are not just represented in isolated demos, but in real-world platforms.

One of the lessons from this project, and others like it, is that collecting data is only step one. What matters is making sure that the data is benchmarked, reusable, and linked to communities of practice. For us, the “next” is to ensure that the ASR benchmarks we build can connect with other ongoing African efforts.




Read more:
Does AI pose an existential risk? We asked 5 experts


We also need to ensure sustainability: that students, researchers, and innovators have continued access to compute (computer resources and processing power), training materials and licensing frameworks (like NOODL or Esethu). The long-term vision is to enable choice: so that a farmer, a teacher, or a local business can use AI in isiZulu, Hausa, or Kikuyu, not just in English or French.

If we succeed, AI built in African languages won’t just be catching up. It will be setting new standards for inclusive, responsible AI worldwide.

The Conversation

Vukosi Marivate is a Co-Founder of Lelapa AI. DSFSI is funded by the Gates Foundation, Meta, Google.org, ABSA (for the ABSA UP Chair of Data Science). Vukosi is a co-founder of the Deep Learning Indaba and Masakhane Research Foundation. Vukosi is a board member of the Partnership on AI and the Council for Higher Education in South Africa.

Ife Adebara is a Co-Founder and Chief Technology Officer of EqualyzAI. She receives funding from Gates Foundation, Lacuna and the University of British Columbia and she is affiliated with Data Science Nigeria.

Lilian Wanzare receives funding from the Gates Foundation. She is affiliated with Maseno University and the Utavu AI Foundation.

ref. African languages for AI: the project that’s gathering a huge new dataset – https://theconversation.com/african-languages-for-ai-the-project-thats-gathering-a-huge-new-dataset-266371

The hidden sources of forever chemicals leaking into rivers – and what to do about them

Source: The Conversation – UK – By Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

Phil Silverman/Shutterstock

As one of the birthplaces of the industrial revolution, the River Mersey in northern England is no stranger to pollution flowing into its waters.

“It’s gone through periods of extremely bad river water quality where the river was just raw sewage”, explains Patrick Byrne, a water scientist at Liverpool John Moores University. “During the heyday of manufacturing and the industrial revolution, you would’ve had a lot of toxic metals as well from different manufacturing processes.”

Despite a perception that the water quality is better than it used to be, Byrne’s research found that the river now has a new kind of pollution problem: the amount of forever chemicals entering the Mersey catchment area is among the highest in the world.

Per- and polyfluoroalkyl substances (PFAS) are a class of human-made chemicals used in waterproofing, food packaging and many industrial processes. They’re known as forever chemicals because they persist and are hard to destroy. PFAS have been found in almost every environment on the planet. They accumulate in wildlife and humans and some have been linked to cancer.

In this episode of The Conversation Weekly podcast, we talk to Byrne about why rivers are the “canary in the coalmine” for wider contamination of a landscape, and how so much PFAS continues to end up in them.

Byrne recently published a study of the amount of PFAS making it into the Mersey that was able to pinpoint some of the biggest sources, including types of PFAS that are now banned in the UK. To his surprise, it wasn’t big factories churning out lots of effluent. Instead, the PFAS were mostly coming from old, buried landfills, airports and recycling facilities.

Listen to the conversation with Patrick Byrne on The Conversation Weekly podcast to find out why monitoring PFAS in this way can help environmental regulators prioritise the areas needed to clean up first.

This episode of The Conversation Weekly was written and produced by Katie Flood, Mend Mariwany and Gemma Ware. Mixing and sound design by Michelle Macklem and theme music by Neeta Sarl.

Newsclips in this episode from Sunrise, France24 English and ABC News Australia.

Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here. A transcript of this episode is available on Apple Podcasts or Spotify.

The Conversation

Patrick Byrne receives funding from the Natural Environment Research Council.

ref. The hidden sources of forever chemicals leaking into rivers – and what to do about them – https://theconversation.com/the-hidden-sources-of-forever-chemicals-leaking-into-rivers-and-what-to-do-about-them-267465

Stethoscope, meet AI – helping doctors hear hidden sounds to better diagnose disease

Source: The Conversation – USA – By Valentina Dargam, Research Assistant Professor of Biomedical Engineering, Florida International University

The basic premise of the stethoscope has been around for centuries, largely unchanged. Jonathan Kitchen/DigitalVision via Getty Images

When someone opens the door and enters a hospital room, a stethoscope around their neck is a telltale sign that they’re a clinician. This medical device has been around for over 200 years and remains a staple in the clinic despite significant advances in medical diagnostics and technologies.

The stethoscope is a medical instrument used to listen to and amplify the internal sounds produced by the body. Physicians still use the sounds they hear through stethoscopes as initial indicators of heart or lung diseases. For example, a heart murmur or crackling lungs often signify an issue is present. Although there have been significant advances in imaging and monitoring technologies, the stethoscope remains a quick, accessible and cost-effective tool for assessing a patient’s health.

Though stethoscopes remain useful today, audible symptoms of disease often appear only at later stages of illness. At that point, treatments are less likely to work and outcomes are often poor. This is especially the case for heart disease, where changes in heart sounds are not always clearly defined and may be difficult to hear.

We are scientists and engineers who are exploring ways to use heart sounds to detect disease earlier and more accurately. Our research suggests that combining stethoscopes with artificial intelligence could help doctors be less reliant on the human ear to diagnose heart disease, leading to more timely and effective treatment.

History of the stethoscope

The invention of the stethoscope is widely credited to the 19th-century French physician René Theophile Hyacinthe Laënnec. Before the stethoscope, physicians often placed their ear directly on a patient’s chest to listen for abnormalities in breathing and heart sounds.

In 1816, a young girl showing symptoms of heart disease sought consultation with Laënnec. Placing his ear on her chest, however, was considered socially inappropriate. Inspired by children transmitting sounds through a long wooden stick, he instead rolled a sheet of paper to listen to her heart. He was surprised by the sudden clarity of the heart sounds, and the first stethoscope was born.

Wooden tube with writing wrapped around one side
One of René Laënnec’s original wooden stethoscopes.
Science Museum London/Science and Society Picture Library, CC BY-NC-SA

Over the next couple of decades, researchers modified the shape of this early stethoscope to improve its comfort, portability and sound transmission. This includes the addition of a thin, flat membrane called a diaphragm that vibrates and amplifies sound.

The next major breakthrough occurred in the mid-1850s, when Irish physician Arthur Leared and American physician George Philip Cammann developed stethoscopes that could transmit sounds to both ears. These binaural stethoscopes use two flexible tubes connected to separate earpieces, allowing clearer and more balanced sound by reducing outside noise.

These early models are remarkably similar to the stethoscopes medical doctors use today, with only slight modifications mainly designed for user comfort.

Listening to the heart

Medical schools continue to teach the art of auscultation – the use of sound to assess the function of the heart, lungs and other organs. Digital models of stethoscopes, which have been commercially available since the early 2000s, offer new tools like sound amplification and recording – yet the basic principle that Laënnec introduced endures.

When listening to the heart, doctors pay close attention to the familiar “lub-dub” rhythm of each heartbeat. The first sound – the lub – happens when the valves between the upper and lower chambers of the heart close as it contracts and pushes blood out to the body. The second sound – the dub – occurs when the valves leading out of the heart close as the heart relaxes and refills with blood.

Diagram of stethoscope
The diaphragm and bell of a stethoscope transmit different sound frequencies to the listener.
Jarould/Wikimedia Commons, CC BY-SA

Along with these two normal sounds, doctors also listen for unusual noises – such as murmurs, extra beats or clicks – that can point to problems with how blood is flowing or whether the heart valves are working properly.

Heart sounds can vary greatly depending on the type of heart disease present. Sometimes, different diseases produce the same abnormal sound. For example, a systolic murmur – an extra sound between first and second heart sounds – may be heard with narrowing of either the aortic or pulmonary valve. Yet the very same murmur can also appear when the heart is structurally normal and healthy. This overlap makes it challenging to diagnose disease based solely on the presence of murmurs.

Teaching AI to hear what people can’t

AI technology can identify the hidden differences in the sounds of healthy and damaged hearts and use them to diagnose disease before traditional acoustic changes like murmurs even appear. Instead of relying on the presence of extra or abnormal sounds to diagnose disease, AI can detect differences in sound that are too faint or subtle for the human ear to pick up.

To build these algorithms, researchers record heart sounds using digital stethoscopes. These stethoscopes convert sound into electronic signals that can be amplified, stored and analyzed using computers. Researchers then label each recording as normal or abnormal and use those labels to train an algorithm to recognize the distinguishing patterns, which it can then apply to classify new, unheard sounds.
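The record–label–train–predict workflow can be sketched as a toy in a few lines of Python. This is purely illustrative and is not the authors’ method or data: it synthesises fake “heart sounds” (a clean lub-dub versus one with a faint added high-frequency component standing in for a murmur), extracts two spectral-band energies as features, and classifies unseen recordings with a simple nearest-centroid rule. All signal parameters, frequencies and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 2000  # sample rate in Hz, chosen arbitrarily for the toy

def heartbeat(murmur: bool) -> np.ndarray:
    """One second of synthetic heart sound: two low-frequency thumps
    ('lub' and 'dub'), plus a faint 300 Hz band when `murmur` is True."""
    t = np.arange(FS) / FS
    s = np.zeros(FS)
    for centre in (0.2, 0.5):
        s += np.exp(-((t - centre) ** 2) / 2e-4) * np.sin(2 * np.pi * 40 * t)
    if murmur:
        s += 0.1 * np.sin(2 * np.pi * 300 * t) * (t > 0.2) * (t < 0.5)
    return s + 0.01 * rng.standard_normal(FS)  # sensor noise

def features(sig: np.ndarray) -> np.ndarray:
    """Summed FFT magnitude in a low and a high frequency band."""
    mag = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1 / FS)
    low = mag[(freqs > 20) & (freqs < 100)].sum()
    high = mag[(freqs > 200) & (freqs < 400)].sum()
    return np.array([low, high])

# "Label which sounds are normal or abnormal" -> a labelled training set
X = np.array([features(heartbeat(murmur=m)) for m in [False] * 50 + [True] * 50])
y = np.array([0] * 50 + [1] * 50)

# Train: one mean feature vector (centroid) per class
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(sig: np.ndarray) -> int:
    """Classify a recording by its nearest class centroid."""
    return int(np.argmin(np.linalg.norm(features(sig) - centroids, axis=1)))

# Predict on new, unseen recordings
acc = np.mean([predict(heartbeat(murmur=m)) == m for m in [False, True] * 25])
print(f"held-out accuracy: {acc:.2f}")
```

A real system would use clinically recorded audio, richer features (such as mel-frequency cepstral coefficients) and a model validated against diagnostic ground truth, but the pipeline shape – digitise, featurise, label, train, predict – is the same one the paragraph describes.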

Doctor holding stethoscope to patient's chest
Stethoscopes can capture diagnostic information the human ear alone cannot hear.
Drs Producoes/E+ via Getty Images

Researchers are developing algorithms that can analyze digitally recorded heart sounds in combination with digital stethoscopes as a low-cost, noninvasive and accessible tool to screen for heart disease. However, many of these algorithms are built on datasets of moderate-to-severe heart disease. Because it is difficult to find patients at early stages of disease, before symptoms begin to show, the algorithms have little information about what hearts in the earliest stages of disease sound like.

To bridge this gap, our team is using animal models to teach the algorithms to analyze heart sounds to find early signs of disease. After training the algorithms on these sounds, we assess their accuracy by comparing their predictions with image scans of calcium buildup in the heart. Our research suggests that an AI-based algorithm can classify healthy heart sounds correctly over 95% of the time and can even differentiate between types of heart disease with nearly 85% accuracy. Most importantly, our algorithm is able to detect early stages of disease, before cardiac murmurs or structural changes appear.

We believe teaching AI to hear what humans can’t could transform how doctors diagnose and respond to heart disease.

The Conversation

Valentina Dargam receives funding from Florida Heart Research Foundation and National Institute of Health.

Joshua Hutcheson receives funding from the Florida Heart Research Foundation, the American Heart Association, and the National Heart, Lung, and Blood Institute of the National Institutes of Health.

ref. Stethoscope, meet AI – helping doctors hear hidden sounds to better diagnose disease – https://theconversation.com/stethoscope-meet-ai-helping-doctors-hear-hidden-sounds-to-better-diagnose-disease-267373