Some new drugs aren’t actually ‘new’ – pharmaceutical companies exploit patents and raise prices for patients, but data transparency can help protect innovation

Source: The Conversation – USA – By Lucy Xiaolu Wang, Assistant Professor, Department of Resource Economics, UMass Amherst

When companies file hundreds of patents for a single drug, affordable versions can remain out of reach for years. pilli/iStock via Getty Images Plus

Pharmaceutical innovation saves lives. But not every “new” drug is truly new.

Patents are designed to reward breakthrough inventions by granting the inventors temporary monopoly rights to recoup the costs of research and development and to encourage future innovation. But firms may also exploit the system in ways that make drugs more expensive and less accessible to patients. A 2023 study found that 78% of drugs associated with new patents weren’t actually new drugs but minor modifications of existing ones.

After obtaining a drug’s primary patent, pharmaceutical companies often file additional ones to extend their monopoly rights. This practice – called evergreening – may cover new dosages, delivery methods, drug combinations and conditions. Though some of these secondary patents improve the effectiveness or convenience of treatment, many have little effect on health outcomes. More often, these subsequent changes are mainly used to strategically prolong market exclusivity, delay competition from generics and keep drug prices high.

Such practices raise concerns about drug access and affordability, especially when companies use minor tweaks to block cheaper alternatives, with little benefit to patients. Yet distinguishing between truly innovative improvements and low-value extensions has been challenging for regulators and courts.

I am an economist studying innovation and digitization in health care markets. My colleague Dennis Byrski and I have focused on how regulatory transparency plays a role in curbing weak patents. Our recently published research found that when clinical trial data become public, this disclosure makes it harder for firms to obtain patents for incremental changes that add little therapeutic benefit for patients.

What makes a drug patentable?

According to the World Intellectual Property Organization, a patentable invention needs to be novel and non-obvious.

Novelty means the invention hasn’t been previously documented in publicly available information – such as patents, publications or products – in fields related to the invention before the filing date. This information is often referred to as prior art.

Non-obviousness means the invention wouldn’t be obvious – an easy tweak or routine step in the process – to a skilled person in the field based on existing knowledge. For example, if prior art reveals that a new combination therapy improves treatment outcomes, officials may deem subsequent patents using the same drug cocktail as obvious and refuse to grant or enforce the patent.

Pharmaceutical companies game the patent system to maintain their monopoly on a drug.

For drugs, these two concepts are deeply intertwined with safety and efficacy. If a company reformulates a drug – say, by changing an inactive ingredient or tweaking the dose – it is not always easy to determine whether such changes improve patient health without further testing in the clinic.

According to guidelines from the European Patent Office, clinical trial results can be critical to prior art, particularly when revealing unexpected or previously undisclosed therapeutic benefits. Patent advisers have also noted that evidence from trials can play a decisive role in assessing novelty and non-obviousness.

However, comprehensive clinical trial results are often either unavailable or not disclosed until the start of the marketing authorization process, when a company submits a full application to regulators to formally approve a drug for sale.

In fact, while European drug regulators strongly encourage companies to disclose clinical trial data early in the process, firms can defer the release of study data for up to seven years after trial completion or until the drug goes on the market – whichever occurs first. The latter is more binding for firms wishing to delay the release of critical data points to avoid competition.

Marketing authorization changes the game

Given the lengthy drug development process, most firms file the primary patent of a drug early on, often before starting clinical trials and obtaining data on treatment safety and efficacy.

This information is required when applying for marketing authorization and is usually disclosed through detailed Phase 3 clinical trial results. That data can then become prior art to evaluate subsequent patent applications, making it harder to obtain low-value patents. But does marketing authorization actually affect whether drug companies pursue follow-on patents?

Timeline of drug development and patenting process in Europe, extending over 25 years
The drug development and marketing process can be lengthy.
Dennis Byrski and Lucy Xiaolu Wang, CC BY-NC-ND

To investigate how patenting behaviors change after marketing authorization, we used data from the German Patent and Trade Mark Office and the European Patent Office’s Worldwide Patent Statistical Database. Legal and innovation scholars worldwide often view the European agency as the gold standard for patent quality and use European drug patents as high-quality benchmarks when evaluating U.S. drug patents.

Furthermore, the U.S. saw four major Supreme Court cases involving patent eligibility between 2010 and 2014, including two focused on the pharmaceutical sector. The European setting allowed us to study changes in patenting behavior in the absence of direct legal changes to the patent system.

Identifying primary patents isn’t easy. Because they often aren’t labeled in drug patent databases, researchers typically need to manually review lengthy patent texts for U.S. drugs. We overcame this difficulty by tracking supplementary protection certificates granted by the European patent term extension system. This system requires companies to specify which main drug patent to extend after marketing authorization and before patent expiration.

We found that disclosing prior art – such as existing knowledge from clinical trial data – during marketing authorization makes it harder to obtain low-value, follow-on patents afterward. This was reflected by a sharp drop in self-citations from subsequent patents for that drug and other patents with similar disease targets.

In contrast, subsequent self-citations from substantive product patents – such as those for new drug derivatives – and patents targeting different disease areas continue at roughly the same pace as before marketing authorization.

These findings suggest that transparency in the authorization process effectively deters companies from obtaining low-value patent extensions without discouraging further research and development.
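As a toy illustration of the self-citation measure described above, the sketch below counts how many follow-on patents filed by the primary patent’s owner cite that patent before and after the marketing authorization date. The firm names, dates and records are invented for illustration only; this is not the study’s data or code.

```python
# Minimal sketch, assuming a simple list of follow-on patent records:
# count self-citations of a drug's primary patent before and after
# marketing authorization. Toy data, not the study's dataset.
from datetime import date

# Each follow-on patent: (filing date, owner, cites_primary_patent)
follow_on_patents = [
    (date(2004, 5, 1), "FirmA", True),
    (date(2005, 2, 1), "FirmA", True),
    (date(2007, 9, 1), "FirmA", True),   # filed after authorization
    (date(2008, 3, 1), "FirmB", False),  # unrelated third-party filing
]

PRIMARY_OWNER = "FirmA"                  # hypothetical patent holder
AUTHORIZATION_DATE = date(2006, 6, 30)   # hypothetical authorization date

def self_citation_counts(patents, owner, cutoff):
    """Split self-citations of the primary patent into pre/post cutoff."""
    pre = sum(1 for d, o, cites in patents if cites and o == owner and d < cutoff)
    post = sum(1 for d, o, cites in patents if cites and o == owner and d >= cutoff)
    return pre, post

pre, post = self_citation_counts(follow_on_patents, PRIMARY_OWNER, AUTHORIZATION_DATE)
print(f"Self-citations before authorization: {pre}, after: {post}")
```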

Importantly, we saw similar patenting adjustments among the patent owner’s competitors, collaborators and generic manufacturers. This pattern suggests that changes in patenting behaviors may not be driven by reduced profit-seeking after drug approval, as other firms would have a higher motivation to obtain related weak patents after seeing a drug’s market potential. Once clinical trial data are public, the effect appears to be systemwide, reducing low-value, follow-on patents, likely because of a higher bar for novelty.

Interestingly, we didn’t see similar declines in patent filings after earlier milestones in the drug development process, such as the end of Phase 2 clinical trials. These milestones provide information on drug quality but involve less data disclosure, so they’re less likely to provide usable prior art for patent examiners.

In other words, it’s the full clinical transparency at marketing authorization that makes a big difference.

What this means for patients and policymakers

Drug patent quality matters. Weak patents can drive up drug costs and delay access by blocking competition from generics long after the market has rewarded a company for its main innovation. The results can be costly for patients, insurers and public health systems, and weak patents risk steering R&D toward marginal tweaks instead of breakthrough therapies.

Our findings suggest that integrating regulatory information, including clinical trial data, into patent assessments can indirectly improve patent quality. Doing so can reduce the number of weak drug patents filed more for strategic reasons than to improve patient health.

Better aligning patents with genuine innovation is not just a legal concern but a public health imperative. Transparency, paired with smarter review systems, can help raise the bar for drug development and reward the kinds of innovations that truly improve health.

The Conversation

Lucy Xiaolu Wang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Some new drugs aren’t actually ‘new’ – pharmaceutical companies exploit patents and raise prices for patients, but data transparency can help protect innovation – https://theconversation.com/some-new-drugs-arent-actually-new-pharmaceutical-companies-exploit-patents-and-raise-prices-for-patients-but-data-transparency-can-help-protect-innovation-258989

The Conversation sponsors Vitae’s 2025 Three Minute Thesis competition – register to vote for your winner

Source: The Conversation – UK – By Jo Adetunji, Executive Editor – Partnerships, The Conversation

The 2025 Vitae 3MT finalists, clockwise L-R: Miranda Qianyu Wang, Yuxuan Wu, Abubakar Yunusa, Caitlin Campbell, Vic Pickup, Cesar Portillo. CC BY

You have three minutes to present your big research idea, which will be viewed by thousands of people. Go! That is the challenge put to doctoral researchers in the 2025 Vitae Three Minute Thesis competition, sponsored by The Conversation. The candidates must deliver a compelling spoken presentation on their research topic to non-experts in those three minutes. The competition is fierce, and this year is no exception.

Research doesn’t exist in a vacuum, and we all benefit from more open knowledge and understanding. The Conversation and the Vitae Three Minute Thesis (3MT®) competition aim to provide engaging, exciting and accessible insights into academic research, and to inspire new thinking.

There are three prizes up for grabs: the Judges Choice, selected by a panel of judges; the Editor’s Choice, selected by The Conversation; and the People’s Choice – chosen by you.

Six finalists will be vying for your vote, and the online final will be broadcast on Wednesday 1 October at 12 noon (GMT+1). Register to watch and vote for your favourite. All winners will be announced on Friday October 3.

The six finalists are:

Miranda Qianyu Wang, Durham University. Miranda’s research examines the legal and ethical implications of neuroscientific evidence within the criminal justice system. By exploring the intersection of law, neuroscience, and criminal behaviour, she provides insightful comparative analyses on how justice systems might better integrate neuroscientific findings.

Abubakar Yunusa, Robert Gordon University. Abubakar specialises in hydrogen injection and mixing optimisation in natural gas pipelines. His research supports safer and more efficient transitions to low-carbon energy systems. Abubakar bridges technical expertise with real-world challenges in the energy transition.

Caitlin Campbell, Ulster University. Caitlin is an optometrist and PhD researcher who is passionate about preventing avoidable vision loss and supporting those with visual impairment. Through the development of a new vision test, her work aims to enable earlier detection of glaucoma-related vision loss, the leading cause of permanent blindness worldwide, and to accelerate access to treatment.

Yuxuan Wu, University of Birmingham. Yuxuan has a general interest in the intersection of new technologies and work and employment. Her research focuses on artificial intelligence (AI) and the future of work.

Vic Pickup, University of Reading. Vic is the author of three poetry books and a long-time lover of romance novels. She is currently working in the Mills & Boon archives held in the university’s special collections.

Cesar Portillo, University of West London. Cesar is a sound engineer and PhD candidate specialising in immersive audio and accessibility for visually impaired audiences. His research investigates how spatial sound and haptic feedback can transform virtual environments into inclusive narrative spaces.

The six above have already battled it out to win the 3MT® competition within their own institution, and have progressed through the national semi-finals. Who will get your vote?

Rachel Eastwood Cox, director of business operations at CRAC/Vitae, said: “The Three Minute Thesis (3MT®), launched in 2008 by the University of Queensland, Australia, dares doctoral candidates to do the impossible: explain years of complex research in a clear, captivating, and concise three-minute spoken presentation.

“Since 2014, Vitae has proudly hosted the UK’s national 3MT® competition, supporting Vitae member institutions in fostering a culture of effective research communication and public engagement.

“3MT® presentations have reached tens of thousands of viewers on YouTube, making cutting-edge research accessible and engaging to the wider public. It’s become a powerful way to spark curiosity and inspire the next generation of researchers.”

To find out more about the finalists, the semi-finalists and the competition click here.

Vitae and its membership programme are managed by the Careers Research and Advisory Centre (CRAC) Limited, an independent registered charity.

The Conversation

ref. The Conversation sponsors Vitae’s 2025 Three Minute Thesis competition – register to vote for your winner – https://theconversation.com/the-conversation-sponsors-vitaes-2025-three-minute-thesis-competition-register-to-vote-for-your-winner-265004

Biosphere 2’s latest mission: Learning how life first emerged on Earth – and how to make barren worlds habitable

Source: The Conversation – USA – By Scott Saleska, Professor of Ecology & Evolutionary Biology, University of Arizona

Biosphere 2 is a research facility located near Tucson, Ariz. Katja Schulz/Flickr, CC BY

From a distance, Biosphere 2 emerges from the cacti and creosote of the Sonoran desert like a gleaming oasis, a colony of glass and bright white structures. Despite being just outside Tucson, Arizona, it looks almost like a colony on another planet.

When one of the facility’s 100,000 annual visitors steps inside, they see a whole world – from a tropical rainforest, glistening in 50 shades of green and teeming with life, to a miniature, experimental ocean. Toward the end of the tour, the visitor comes to a comparatively barren-looking experiment called the Landscape Evolution Observatory, where life is struggling to establish itself on crushed volcanic rock originally spewed from an ancient Arizonan volcano.

It is these rock slopes, where life is colonizing and transforming a tough landscape, that our team thinks are the key to humanity’s future – both on Earth and, eventually, on other worlds.

Biosphere 2 first became famous as the human experiment of the 1990s that sealed a group of eight researchers inside its 3 acres of diverse ecosystems for two long years. The goal was to experiment with the viability of a closed ecological system to maintain human life in outer space. Today, we – a global change ecologist, an astronomer and a doctoral student specializing in microbial biogeochemistry, along with our team of colleagues – have made Biosphere 2 into a test bed for understanding how life transforms landscapes, from local areas to whole planets.

We hope to use what we learn to help preserve biodiversity, access to fresh water and food security. To address these issues, we must understand how soil, rocks, water and microbes together drive the transformation of landscapes, from local to planetary scales.

Beyond Earth, these same principles apply to the challenge of terraformation: the science of rendering other worlds habitable.

How life on Earth affects the Earth

Life doesn’t just sit on the Earth’s surface. Organisms profoundly affect the planet’s geology, as well as the atmosphere’s composition. Biology can transform barren environments into habitable ecosystems.

This happened with the evolution of cyanobacteria, the first microscopic organisms to use oxygen-producing photosynthesis. Cyanobacteria pumped oxygen into the atmosphere 2 billion to 3 billion years ago.

Atmospheric oxygen, in turn, enabled a new supercharged metabolism of life called aerobic, or oxygen-using, respiration. Aerobic respiration produced so much energy that it became the dominant way for organisms to make the energy needed for life, eventually making multicellular life possible.

Cyanobacteria allowed organisms to take in oxygen and produce energy, which made more complex life possible.

In addition, the oxygen produced by photosynthesizing cyanobacteria also made its way to the upper atmosphere, forming another kind of oxygen known as ozone, which, by shielding the Earth’s surface from sterilizing ultraviolet radiation, allowed life to expand onto land.

Biology again transformed the planet when the life that expanded onto land 400 million years ago gave a biological boost to the chemical and geological process known as weathering. Weathering occurs when carbon dioxide in the atmosphere chemically reacts with material on Earth’s surface – such as rocks, minerals and water – to create soils imbued with nutrients that can support plants and other living organisms.

On Earth, weathering was first driven by purely physical and chemical processes. Once plants expanded from the oceans onto land, however, their roots injected carbon dioxide directly into the soil where weathering reactions were strongest. This process sucked carbon dioxide out of the atmosphere. Lower carbon dioxide levels in the atmosphere then cooled the Earth, turning a hothouse planet into one with a more temperate climate, like the one enjoyed by life today.

How organisms colonize new landscapes

When life colonizes a new, previously barren landscape, it starts up the process of primary succession. In this process, the first biological organisms – simple microbes – expand into interacting communities made of different kinds of organisms, which increase in complexity and biodiversity as they change and adapt to fit their new environment.

These microbes react with the air and rock through photosynthesis and respiration to produce organic molecules called metabolites. The metabolites can alter the soil, allowing it to support larger plants. The larger plants that then emerge have complex structures such as roots and leaves that regulate the flow of water – and contribute to weathering. Eventually, humans can domesticate some of these plants for food crops.

Biosphere 2’s Landscape Evolution Observatory is ideal for the careful study of how weathering and primary succession work together. Those processes both happen at the small, molecular scale but emerge as important only over large areas.

A glass dome with a sloped floor of dark rock.
The Landscape Evolution Observatory at Biosphere 2 contains crushed basalt rock extracted from a volcanic crater.
Daniel Oberhaus/Wikimedia Commons, CC BY-SA

The Landscape Evolution Observatory has both hillslopes larger than any experiment in the world and crushed rock soils that are simpler and more uniform than almost any natural setting. These characteristics mean the molecular measurements are consistent and understandable, even in different places across the larger hillslope.

The observatory is made up of three hillslopes covering 300 square yards that look like three giant tray-shaped, inclined planters made of steel, filled with crushed rock instead of fertile soil. The rain that falls on them soaks into the surface and flows down the incline to dribble out along the lower edge, where it is captured and carefully measured for its chemical and biological content.

We are using biological tools to understand how microbes and simple plants end up spreading across the larger, originally bare, crushed-rock hillslopes. These techniques include metagenomics, which can identify all the microbial life forms in a hillslope, and metabolomics, which can look at the organic molecules that microbes and plants produce and use in their interactions with each other and their surroundings.

Putting this all together, we see that colonies of photosynthesizing bacteria initiate succession on the Landscape Evolution Observatory. Critically, these cyanobacteria – descendants of those same organisms that gave Earth oxygen – capture the essential nutrient, nitrogen, from the air. Nitrogen buildup paves the way for mosses – simple plants without roots – to join them.

These bacteria-moss communities are now gradually spreading across the observatory’s hillslopes, preparing the way for the next phase: colonization by larger plants with roots.

By learning how life establishes itself and then thrives on lifeless landscapes, we will gain insights for addressing key problems scientists face today. For example, when life-forms in a new landscape successfully spread and diversify, they tell us how biodiversity is preserved.

When those spreading organisms transform the way a landscape uses water, they give us lessons on how we should use water. And when plants find a way to be productive under stressful conditions, they give us examples for increasing our own plant-dependent food security.

Implications for Mars

Earth isn’t the only planet where we can apply our findings. Today, Mars, unlike Earth, is a barren, lifeless desert. But it was once warmer, wetter and, like the early Earth, it may have hosted primitive living organisms several billion years ago.

While the rock in the Landscape Evolution Observatory comes from an Arizona volcano, basalt is the same kind of rock found on the surface of the Moon and Mars.

Countries such as the United States and China plan to land humans on Mars, and the company SpaceX has grandiose plans to send a million colonists there. If humans ever hope to grow plants on the red planet’s surface, learning how to create early succession there will prove crucial.

Before Mars colonization can happen at a large, sustainable scale, the first step is to grow plants and create food for human life. That is, we must solve what might be called the “Matt Damon problem,” after the actor in the movie “The Martian.” In order to survive, his character had to quickly learn to grow food crops – potatoes – on Mars.

'The Martian' protagonist Mark Watney, donning a space suit, overlooks a Mars-scape.
In ‘The Martian,’ Matt Damon’s character Mark Watney had to figure out how to grow food and survive the red planet’s barren, inhospitable environment.
20th Century Fox

Matt Damon’s character would probably not have survived on the real Mars of today, because its rocklike surface, called regolith, is too full of salts and toxic chemicals such as perchlorate for potatoes, or most Earth-like plants, to grow.

At the Landscape Evolution Observatory, we are focusing on experiments in chambers that simulate Martian environments to ask what it will take to detoxify Mars-like soils so that microbes and plants can live there.

One initial approach is to use perchlorate-reducing bacteria, recruited from extreme environments on Earth, to convert the perchlorate into harmless chloride.

In this way, experiments at Biosphere 2 are informing the science of terraforming Mars. Together with progress made in other areas, such as finding ways of making Mars warm enough to sustain liquid water, restoring barren environments on Earth could be a key to one day living on Mars.

The Conversation

Scott Saleska receives funding from National Science Foundation, NASA, and U.S. Department of Energy.

Ghiwa Makke receives funding from National Science Foundation and U.S. Department of Energy.

Chris Impey does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Biosphere 2’s latest mission: Learning how life first emerged on Earth – and how to make barren worlds habitable – https://theconversation.com/biosphere-2s-latest-mission-learning-how-life-first-emerged-on-earth-and-how-to-make-barren-worlds-habitable-262293

What happens when AI comes to the cotton fields

Source: The Conversation – USA – By Debra Lam, Founding Director of the Partnership for Inclusive Innovation, Enterprise Innovation Institute, Georgia Institute of Technology

A researcher works in a cotton field in Jenkins County, Georgia, as part of a project on AI and pesticide use. Dorothy Seybold

Precision agriculture uses tools and technologies such as GPS and sensors to monitor, measure and respond to changes within a farm field in real time. This includes using artificial intelligence technologies for tasks such as helping farmers apply pesticides only where and when they are needed.

However, precision agriculture has not been widely implemented in many rural areas of the United States.

We study smart communities, environmental health sciences and health policy and community health, and we participated in a research project on AI and pesticide use in a rural Georgia agricultural community.

Our team, led by Georgia Southern University and the City of Millen, with support from University of Georgia Cooperative Extension, local high schools and agriculture technology company FarmSense, is piloting AI-powered sensors to help cotton farmers optimize pesticide use. Georgia is one of the top cotton-producing states in the U.S., with cotton contributing nearly US$1 billion to the state’s economy in 2024. But only 13% of Georgia farmers use precision agriculture practices.

Public-private-academic partnership

Innovation drives economic growth, but access to it often stops at major city limits. Smaller and rural communities are frequently left out, lacking the funding, partnerships and technical resources that fuel progress elsewhere.

At the same time, 75% of generative AI’s projected economic impact is concentrated in customer operations, marketing, software engineering and research and development, according to a 2023 McKinsey report. In contrast, applications of AI that improve infrastructure, food systems, safety and health remain underexplored.

Yet smaller and rural communities are rich in potential — home to anchor institutions like small businesses, civic groups and schools that are deeply invested in their communities. And that potential could be tapped to develop AI applications that fall outside of traditional corporate domains.

The Partnership for Inclusive Innovation, a coalition of people and organizations from academia, government and industry, helps bridge that gap. Since its launch almost five years ago, the Partnership for Inclusive Innovation has supported 220 projects across Georgia, South Carolina, Kentucky, Tennessee, Virginia, Texas and Alabama, partnering with more than 300 communities on challenges from energy poverty to river safety.

One Partnership for Inclusive Innovation program provides seed funding and technical support for community research teams. This support enables local problem-solving that strengthens both research scholarship and community outcomes. The program has recently focused on the role of civic artificial intelligence – AI that supports communities and local governments. Our project on cotton field pesticide use is part of this program.

Cotton pests and pesticides

Our project in Jenkins County, Georgia, is testing that potential. Jenkins County, with a population of around 8,700, is among the top 25 cotton-growing counties in the state. In 2024, approximately 1.1 million acres of land in Georgia were planted with cotton, and based on the 2022 agricultural county profiles census, Jenkins County ranked 173rd out of the 765 counties producing cotton in the United States.

a hand holding a white puffy object with leafy plants in the background
Cotton is a major part of Georgia’s agriculture industry.
Daeshjea Mcgee

The state benefits from fertile soils, a subtropical-to-temperate climate, and abundant natural resources, all of which support a thriving agricultural industry. But these same conditions also foster pests and diseases.

Farmers in Jenkins County, like many farmers, face numerous insect infestations, including stink bugs, cotton bollworms, corn earworms, tarnished plant bugs and aphids. Farmers make heavy use of pesticides. Without precise data on the bugs, farmers end up using more pesticides than they likely need, risking residents’ health and adding costs.

While there are some existing tools for integrated pest management, such as the Georgia Cotton Insect Advisor app, they are not widely adopted and are limited to certain bugs. Other methods, such as traditional manual scouting and using sticky traps, are labor-intensive and time-consuming, particularly in the hot summer climate.

Our research team set out to combine AI-based early pest detection methods with existing integrated pest management practices and the insect advisor app. The goal was to significantly improve pest detection, decrease pesticide exposure levels and reduce insecticide use on cotton farms in Jenkins County. The work compares different insect monitoring methods and assesses pesticide levels in both the fields and nearby semi-urban areas.

We selected eight large cotton fields operated by local farmers in Millen, four active and four control sites, to collect environmental samples before farmers began planting cotton and applying pesticides.

a triangular open-sided structure
Pest insects are identified by AI as they fly through a light sensor inside this trap.
Daeshjea Mcgee

The team was aided by a new AI-based insect monitoring system called the FlightSensor by FarmSense. The system uses a machine learning algorithm that was trained to recognize the unique wingbeats of each pest insect species. The specialized trap is equipped with infrared optical sensors that project an invisible infrared light beam – called a light curtain – across the entrance of a triangular tunnel. A sensor monitors the light curtain and uses the machine learning algorithm to identify each pest species as insects fly into the trap.
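The paragraph above describes the general principle behind optical wingbeat sensors: an insect crossing the light curtain produces a signal whose dominant frequency differs by species. As a rough illustration of that idea only, the sketch below classifies a simulated sensor signal by its strongest frequency component. The species names and frequency bands are hypothetical, and this is not FarmSense’s algorithm or code.

```python
# Illustrative sketch only: classify an insect by its dominant wingbeat
# frequency, the general principle behind optical insect sensors.
# The species names and frequency bands are hypothetical examples.
import numpy as np

# Hypothetical wingbeat-frequency bands (Hz) for pests mentioned in the article
SPECIES_BANDS = {
    "stink bug": (60, 90),
    "cotton bollworm (moth)": (30, 50),
    "aphid": (100, 140),
}

def dominant_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Return the strongest frequency component of a sensor signal."""
    signal = signal - signal.mean()              # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def classify(signal: np.ndarray, sample_rate: float) -> str:
    """Match the dominant wingbeat frequency to the nearest species band."""
    f = dominant_frequency(signal, sample_rate)
    for species, (lo, hi) in SPECIES_BANDS.items():
        if lo <= f <= hi:
            return f"{species} (~{f:.0f} Hz)"
    return f"unknown (~{f:.0f} Hz)"

if __name__ == "__main__":
    # Simulate one second of a 75 Hz wingbeat crossing the light curtain
    rate = 2000.0
    t = np.arange(0, 1.0, 1.0 / rate)
    simulated = np.sin(2 * np.pi * 75 * t) + 0.1 * np.random.randn(t.size)
    print(classify(simulated, rate))   # expected: stink bug (~75 Hz)
```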

FlightSensor provides information on the prevalence of targeted insects, giving farmers an alternative to traditional manual insect scouting. The information enables the farmers to adjust their pesticide-spraying frequency to match the need.

What we’ve learned

Here are three things we have learned so far:

1. Predictive pest control potential – AI tools can help farmers pinpoint exactly where pest outbreaks are likely – before they happen. That means they can treat only the areas that need it, saving time, labor and pesticide costs. It’s a shift from blanket spraying to precision farming – and it’s a skill farmers can use season after season.

2. Stronger decision-making for farmers – The preliminary results indicate that the proposed sensors can effectively monitor insect populations specific to cotton farms. Even after the sensors are gone, farmers who used them get better at spotting pests. That’s because the AI dashboards and mobile apps help them see how pest populations grow over time and respond to different field conditions. Researchers can also access this data remotely through satellite-based monitoring platforms, further enhancing collaboration and learning.

3. Building local agtech talent – Training students and farmers on AI pest detection is doing more than protecting cotton crops. It’s building digital literacy, opening doors to agtech careers and preparing communities for future innovation. The same tools could help local governments manage mosquitoes and ticks and open up more agtech innovations.

Blueprint for rural innovation

By using AI to detect pests early and reduce pesticide use, the project aims to lower harmful residues in local soil and air while supporting more sustainable farming. This pilot project could be a blueprint for how rural communities use AI generally to boost agriculture, reduce public health risks and build local expertise.

Just as important, this work encourages more civic AI applications – grounded in real community needs – that others can adopt and adapt elsewhere. AI and innovation do not need to be urban or corporate to have a significant effect, nor do you need advanced technology degrees to be innovative. With the right partnerships, small towns, too, can harness innovations for economic and community growth.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. What happens when AI comes to the cotton fields – https://theconversation.com/what-happens-when-ai-comes-to-the-cotton-fields-261526

Your age shouldn’t put you off learning a new language – what the research says

Source: The Conversation – UK – By Karen Roehr-Brackin, Reader, Department of Language and Linguistics, University of Essex

If you’ve always wanted to learn a new language, don’t let age put you off. People aged over 60 can be independent and flexible in how they learn a language – and successful, too.

There is ample evidence from decades of research that, as we age, some of our perceptual and cognitive abilities gradually decline. Our hearing and vision are not as sharp as they used to be, we process information less speedily, and our memory may not be as good as it was when we were younger. These are all known corollaries of healthy ageing which do not normally have a major adverse impact on daily life.

What is noted less frequently is the possibility that these effects need not be deficits in themselves, but may arise from a lifetime of accumulated knowledge and experience. The older we get, the more information we have to sort through, and this may slow us down. In this context, it is also important to highlight the fact that general and especially verbal knowledge can actually grow with increasing age.

In line with this, researchers have investigated language learning in late adulthood and shown that there is no age limit to our ability to learn a new language – we can do it at any point in our lives. However, it is less clear which approach to language learning and teaching works best later in life.

Research with younger adults indicates that an explicit approach which includes explanations of the target language and spells out grammar rules, for instance, is most effective.

At first glance, we may assume that this should apply to older adults too, or indeed that it should be even more true for them, given that it reflects a traditional approach to language instruction. Older adults may well have experienced exactly such an approach during their schooling and may therefore favour it.

To date, there is surprisingly little research that has put this assumption to the test. A recent study in the Netherlands found no evidence that late-life language learners would do better with an explicit approach.

Indeed, it did not matter whether instruction was explicit or implicit, that is, with or without any grammatical explanations. The senior volunteers did equally well, regardless of how they were taught.

Comparing approaches

In my new study with colleague Renato Pavlekovic, we compared an explicit with an incidental approach to language learning. In a small set of online lessons, 80 English-speaking volunteers aged between 60 and 83 learned the beginnings of Croatian – a language they were completely unfamiliar with.

In the explicit approach, a full explanation of the grammatical structure we targeted was given. In the incidental approach, there was no explanation, but additional practice exercises were available instead.

Woman with headphones and laptop taking notes
Older learners were successful with different learning methods.
fizkes/Shutterstock

We found that learners did equally well regardless of the instructional approach they experienced. They first learned a set of vocabulary items and subsequently the targeted grammatical structure to a high level of success, achieving average scores of around 80% accuracy. This suggests that the teaching approach did not matter to these late-life learners – they could find their own way independently of how the learning materials were presented.

In this new study, we also explored the role of cognitive and perceptual factors as well as our volunteers’ self-concepts: that is, how they felt about their own health, happiness and abilities. In addition, we asked questions about their (former) occupations and prior language learning experience. Interestingly, we found a connection between the ability to learn implicitly (that is, picking things up from context without being aware of it), occupational status (whether someone was retired or still working) and self-concepts.

Specifically, people who reported a more positive self-concept showed better implicit learning abilities. Moreover, people who were still working at the time of the study showed better implicit learning abilities than individuals who were retired – something we had observed in a previous study too. Importantly, this effect was independent of age.

Superficially, a link between employment status, implicit learning ability and self-concept may not make much sense. There is arguably a common denominator, though: confidence could be at the centre of a self-reinforcing cycle. A person with strong implicit learning ability remains in the workforce for longer. This boosts their self-concept, which in turn makes them continue with their occupation for longer.

While in work, they need to take the rough with the smooth; they cannot only engage in activities they enjoy. This means that they continue drawing on their implicit learning ability, and so forth.

Taken together, the results of our study show that late-life language learners can be very successful. They seem to be sufficiently independent to choose the path that works best for them, so it does not matter so much which teaching approach is used. In addition, confidence is important; it appears to arise from a combination of ability and social status.

The Conversation

Karen Roehr-Brackin received funding from the British Academy/Leverhulme Trust (grant reference SRG23230787) which supported the research project reported here.

ref. Your age shouldn’t put you off learning a new language – what the research says – https://theconversation.com/your-age-shouldnt-put-you-off-learning-a-new-language-what-the-research-says-263581

Deadly drug-resistant fungus spreading rapidly through European hospitals

Source: The Conversation – UK – By Joni Wildman, PhD Candidate in Mycology, University of Bath

TommyStockProject/Shutterstock.com

A new European health survey shows that Candidozyma auris – a dangerous drug-resistant fungus – is spreading rapidly in hospitals across the continent. Cases and outbreaks are increasing, with some countries now seeing ongoing local transmission.

Here’s what you need to know about this deadly fungus.

What is C auris?

Scientists first isolated C auris from the ear of a Japanese patient in 2009. It has since spread to hospitals in over 40 countries.

C auris is a yeast species – single-celled microorganisms from the fungi kingdom. While yeasts contribute to a healthy microbiome and many people experience only mild yeast infections when microbial balance becomes disrupted, C auris is far more dangerous. The fungus usually causes only mild infections in healthy people, but in patients with weakened immune systems, it can prove deadly, particularly when it enters the bloodstream and vital organs.

The fungus primarily affects severely ill patients, spreading from the skin into the bloodstream and organs.

Why is it dangerous?

C auris causes severe organ infections when it breaches the body’s natural defences. Between 30% and 60% of patients with invasive C auris infections die. And patients who carry the fungus risk developing infections themselves and spreading it to others.

The fungus can be very difficult to treat because some strains are resistant to nearly all available drugs. C auris appears to evolve rapidly, with new drug-resistant strains emerging regularly.

An illustration of C auris.
C auris was first discovered in 2009. It is now on every continent bar Antarctica.
peterschreiber.media/Shutterstock.com

How does it spread?

C auris spreads mainly in hospitals through direct contact with infected people or contaminated surfaces. The fungus produces proteins called adhesins that help it stick to surfaces, making it very hard to remove.

Why is it spreading so quickly?

C auris spreads quickly because hospitals struggle to detect and eliminate the fungus. People can carry it on their skin without symptoms, unknowingly bringing it into hospitals. And diagnosis is difficult. Standard laboratory tests misidentify C auris as more common yeasts. Hospitals need specialised methods to correctly identify it, so early cases go unidentified without access to these tools.

The fungus grows well at higher temperatures (optimally at 37-40°C), thriving on warm bodies. It also withstands routine disinfection. C auris forms biofilms – layers of microbial growth that prove extremely difficult to eliminate.

How common is it in Europe?

C auris has spread fast across Europe. Once limited to isolated cases, it now causes sustained hospital outbreaks. Between 2013 and 2023, there were over 4,000 cases, including 1,300 in 2023 alone.

The UK recorded 134 cases between November 2024 and April 2025 – a 23% increase compared with the previous six months.

In some European countries, the fungus has become endemic in hospitals, and true numbers may be higher because of limited testing.

Globally, C auris has reached every continent except Antarctica.

Scientists have identified distinct genetic groups that dominate in different regions, each varying in how easily they spread and how resistant they are to treatment, making control more difficult.

What are health authorities doing about it?

Health authorities recognise that they need to contain C auris and are taking action. The European Centre for Disease Prevention and Control has called for stronger surveillance, and the World Health Organization has placed C auris on its list of priority fungal pathogens.

In the UK, new guidance sets out practical steps for hospitals, highlighting the careful and responsible use of antifungal drugs as crucial for controlling the disease.

Can it be stopped?

Hospitals can stop or at least control C auris. Those acting quickly have successfully contained outbreaks. Experts stress that a critical window exists when rigorous measures can stamp out a single case or small outbreak. However, once C auris spreads widely in a hospital or region, it becomes extremely difficult to stop.

What’s being done about it?

Hospitals and governments need to act swiftly. Hospitals must strengthen their infection-control practices, while governments should mandate that every case of C auris is reported to health agencies so its spread can be tracked. Public health authorities can help by issuing clear guidance and expanding access to reliable tests, and specialised response teams should be ready to support hospitals during outbreaks.

What happens if it’s not contained?

If authorities allow C auris to spread unchecked, it could become a permanent healthcare menace, causing frequent outbreaks that mean higher costs, strained hospital capacity, and more illness and deaths.

We might also see C auris evolve even greater drug resistance through continued circulation. Scientists have already found some strains that resist all major antifungal drugs. This is why health authorities stress the need for immediate action while containing and limiting C auris remains possible. Without urgent action, this fungus could become a permanent fixture in hospitals, driving up infections, costs and deaths.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Deadly drug-resistant fungus spreading rapidly through European hospitals – https://theconversation.com/deadly-drug-resistant-fungus-spreading-rapidly-through-european-hospitals-265328

Russian incursions into Nato airspace show Ukraine’s allied coalition needs to be ready as well as willing

Source: The Conversation – UK – By Stefan Wolff, Professor of International Security, University of Birmingham

While the air and ground war in Ukraine grinds on, Moscow is increasing pressure on Kyiv’s western allies. Russian drone incursions into Poland in the early hours of September 10, and Romania a few days later, were followed by three Russian fighter jets breaching Estonian airspace on September 19.

And there has been speculation that drones which forced the temporary closure of Copenhagen and Oslo airports overnight are connected to the Kremlin as well.

While this might suggest a deliberate strategy of escalation on the part of the Russian president, Vladimir Putin, it is more likely an attempt to disguise the fact that the Kremlin’s narrative of inevitable victory is beginning to look shakier than ever.

A failed summer offensive that has been extremely costly in human lives is hardly something to cheer about. Estimates of Russian combat deaths now stand at just under 220,000. What’s more, this loss of life has produced little in territorial advances.

Since the start of the full-scale invasion in February 2022, Russia has gained some 70,000 sq km. This means that Moscow has nearly tripled the amount of territory it illegally occupies. But during its most recent summer offensive, it gained fewer than 2,000 sq km. On September 1, 2022, Russia controlled just over 20% of Ukrainian territory; three years later it was 19% (up from 18.5% at the beginning of 2025).

Perhaps the clearest sign that the Russian narrative of inevitable victory is hollow is that Russian forces were unable to convert a supposed breakthrough around Pokrovsk in the Donbas area of Ukraine in August into any solid gains after a successful Ukrainian counterattack.

That Russia is not winning, however, is hardly of comfort to Ukraine. Moscow still has the ability to attack night after night, exposing weaknesses in Ukraine’s air defence system and targeting critical infrastructure.

The western response, too, has been slow so far and has yet to send a clear signal to the Kremlin about where Nato’s and the EU’s red lines lie. While Nato swiftly launched Eastern Sentry in response to the Russian drone incursion into Poland, the operation’s deterrent effect appears rather limited given subsequent Russian incursions into Estonia and undeclared flights in neutral airspace near Poland and Germany.

ISW map showing the status of the war in Ukraine, September 22, 2025.
The status of the war in Ukraine, September 22, 2025.
ISW

Donald Tusk, the Polish prime minister, subsequently threatened to “shoot down flying objects when they violate our territory and fly over Poland”. He also cautioned that it was important “to think twice before deciding on actions that could trigger a very acute phase of conflict.”

On the other side of the Atlantic, Donald Trump, the US president, has said little about Russia ratcheting up pressure on Nato’s eastern flank. Regarding the Russian drone incursion into Poland, he mused that it could have been a mistake, before pledging to defend Nato allies in the event of a Russian attack.

This is certainly an improvement on his earlier threats to Nato solidarity, but it is at best a backstop against a full-blown Russian escalation. What it is not is a decisive step to ending the war against Ukraine. In fact, any such US steps seem ever further off the agenda. The deadline that Trump gave Putin after their Alaska summit to start direct peace talks with Ukraine came and went without anything happening.

Europe scrambles to replace US guarantees

As for Trump’s phase-two sanctions on Russia and its enablers, these have now been made conditional on all Nato and G7 countries imposing such sanctions first.

Meanwhile, US arms sales to Europe, meant to be channelled to strengthen Ukraine’s defences, have been scaled down by the Pentagon to replenish its own arsenals.

At the same time, a longstanding US support programme for the Baltic states – the Baltic security initiative – is under threat from cuts. There are justified worries that it could be discontinued as of next year.

As has been clear for some time, support for Ukraine – and ultimately the defence of Europe – is no longer a primary concern for the US under Trump. Yet European efforts to step into the gaping hole in the continent’s security left by US retrenchment are painfully slow. The defence budgets of the EU’s five biggest military spenders – France, Germany, Poland, Italy and the Netherlands – combined are less than one-quarter of what the US spends annually.

Even if money were not the issue, Europe has serious problems with its defence-industrial base. The EU’s flagship Security Action for Europe programme has faced months of delays over the participation of non-EU members – including the UK and Canada, two countries which have significant defence-industrial capacity.

European defence cooperation, including the flagship Future Combat Air System, is threatened by national quarrels, including between the EU’s two largest defence players, France and Germany.

Thus far, muddling through has worked for Ukraine’s western allies. This is mostly because Kyiv has held the line against the Russian onslaught. It has done so by making do with whatever the west provided while rapidly innovating its own defence sector.

It has also worked because Trump has not (yet) completely abandoned his European allies. There is enough life – or perhaps just enough ambiguity – left in the idea of Nato as a collective defence alliance to give Putin pause for thought. For now, he is merely testing boundaries. But if unchallenged, he might keep pushing further into uncharted territory – with unpredictable consequences.

Western stop-gap measures may be fine for now. But the west’s responses to Putin’s challenges – which are likely to become more frequent and more severe in the future – will require the European coalition of the willing to focus on the here and now and raise its level of preparedness.

The Conversation

Stefan Wolff is a past recipient of grant funding from the Natural Environment Research Council of the UK, the United States Institute of Peace, the Economic and Social Research Council of the UK, the British Academy, the NATO Science for Peace Programme, the EU Framework Programmes 6 and 7 and Horizon 2020, as well as the EU’s Jean Monnet Programme. He is a Trustee and Honorary Treasurer of the Political Studies Association of the UK and a Senior Research Fellow at the Foreign Policy Centre in London.

ref. Russian incursions into Nato airspace show Ukraine’s allied coalition needs to be ready as well as willing – https://theconversation.com/russian-incursions-into-nato-airspace-show-ukraines-allied-coalition-needs-to-be-ready-as-well-as-willing-265776

The UK, France, Canada and Australia have recognised Palestine – what does that mean? Expert Q+A

Source: The Conversation – UK – By George Kyris, Associate Professor in International Politics, University of Birmingham

The UK, France, Canada and Australia are among a group of nations that are moving to formally recognise the state of Palestine, as most other states have done over the years. This move is a major diplomatic shift and turning point in one of the world’s most intractable conflicts. Here’s what it means.

What does it mean to recognise Palestine?

Recognising Palestine means acknowledging the existence of a state that represents the Palestinian people. Following from that, it also means that the recogniser can develop full diplomatic relations with representatives of this state – which would include exchanging embassies or negotiating government-level agreements.

Why have these countries moved together – and why now?

Diplomatic recognition, when done in concert, carries more heft than isolated gestures – and governments know this. A year or so ago, Spain tried to get European Union members to recognise Palestine together and, when this was not possible, opted to coordinate its recognition with Norway and Ireland only. Further away, a cluster of Caribbean countries (Barbados, Jamaica, Trinidad and Tobago, the Bahamas) also recognised Palestine around the same time.

By acting together, countries amplify the message that Palestinian statehood is not a fringe idea, but a legitimate aspiration backed by a growing international consensus. This collective recognition also serves to shield individual governments from accusations of unilateralism or political opportunism.

This wave of recognition comes now because of concern that Palestinian statehood is under threat, perhaps more than ever before. In their recognition statements, the UK and Canada cited Israel’s settlements in the West Bank in their reasoning.

The Israeli government has also revealed plans that amount to annexing Gaza, the other area that ought to belong to Palestinians. This is after months of assault on its people, which the UN commission of inquiry on the occupied Palestinian Territories and Israel found amounts to genocide. Public sentiment has also shifted dramatically in support of Palestine, adding to the pressure on governments.

Why do some say recognition isn’t legal?

Israel and some of its allies argue that the recognition is illegal because Palestine lacks the attributes of a functioning state, such as full control of its territory or a centralised government. Legal opinion on whether Palestine meets the criteria of statehood is divided. But, regardless, these criteria are not consistently used to recognise states.

In fact, many states have been recognised well before they had complete control over their borders or institutions. Ironically, the US recognised Israel in 1948, dismissing critics who argued that recognition was premature due to the lack of clear borders. Recognition has, therefore, always been political.

But even if we take a more legal perspective, the international community, through numerous UN and other texts has long recognised the right of Palestinians to have a state of their own.

Does recognition ‘reward Hamas’, as Israel claims?

Recognising a state does not mean you recognise those who govern it. At the moment, for example, many states do not recognise Taliban rule, but this doesn’t mean they have stopped recognising the existence of Afghanistan as a state.

Similarly, the fact that Netanyahu is subject to an International Criminal Court arrest warrant for war crimes and crimes against humanity has not resulted in states withdrawing their recognition of the state of Israel and its people. Recognising a state is not the same as endorsing a specific government.

Not only that but all of the states that recently recognised Palestine have explicitly said that Hamas must play no role in a future government. France said that although it recognises the state of Palestine it won’t open an embassy until Hamas releases the hostages.

Will recognition make a difference?

The past few years have laid bare the limits of diplomacy in stopping the horrific human catastrophe unfolding in Gaza. This doesn’t leave much room for optimism. And, in a way, states taking brave diplomatic steps are, at the same time, exposing their reluctance to take more concrete action, such as sanctions, to press the government of Israel to end its war.

Still, the recognition brings the potential for snowball effects that would enhance the Palestinians’ international standing. They will be able to work more substantively with those governments who now recognise their state. More states may now also recognise Palestine, motivated by the fact others did the same.

Starmer preparing to announce UK recognition of Palestine.
Number 10/Flickr, CC BY-NC-ND

And more recognition means better access to international forums, aid and legal instruments. For example, the UN’s recognition of Palestine as a non-member observer state in 2012 allowed the International Court of Justice to hear South Africa’s case accusing Israel of genocide and the International Criminal Court to issue an arrest warrant for Netanyahu.

The implications for the Israeli government and some of its allies could also be significant. The US will now be isolated as the only permanent member of the UN Security Council not recognising Palestine. States that do not recognise Palestine will be in a dissenting minority and more exposed to critiques in international forums and public opinion.

This growing isolation may not force immediate changes and may not bother the current US administration, which often does not follow the logic of traditional diplomacy. Still, over time, the pressure on Israel and its allies to engage with a peace process may grow.

In the end, recognition from some of the world’s biggest players breaks their longstanding alignment with consecutive Israeli governments. It shows how strongly their publics and governments feel about Israel’s threat to Palestinian statehood through annexation and occupation. For Palestinians, recognition strengthens their political and moral standing. For the government of Israel, it does the opposite.

But recognition alone is not enough. It must be accompanied by sustained efforts to end the war in Gaza, hold perpetrators of violence accountable, and revive peace efforts towards ending the occupation and allowing Palestinians their rightful sovereignty alongside Israel.

The Conversation

George Kyris does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The UK, France, Canada and Australia have recognised Palestine – what does that mean? Expert Q+A – https://theconversation.com/the-uk-france-canada-and-australia-have-recognised-palestine-what-does-that-mean-expert-q-a-265790

It’s OK to use paracetamol in pregnancy. Here’s what the science says about the link with autism

Source: The Conversation – Global Perspectives – By Nicholas Wood, Professor, The Children’s Hospital at Westmead Clinical School, University of Sydney

United States President Donald Trump has urged pregnant women to avoid paracetamol except in cases of extremely high fever, because of a possible link to autism.

Paracetamol – known as acetaminophen or by the brand name Tylenol in the US – is commonly used to relieve pain, such as back pain and headaches, and to reduce fever during pregnancy.

Australia’s Therapeutic Goods Administration today reaffirmed existing medical guidelines that it’s safe for pregnant women to take paracetamol at any stage of pregnancy.

Paracetamol is classified as a Category A drug. This means it has been taken by many pregnant women and women of childbearing age without any proven increase in birth defects or other harmful effects on the fetus.

It’s important to treat fevers in pregnancy. Untreated high fever in early pregnancy is linked to miscarriage, neural tube defects, cleft lip and palate, and heart defects. Infections in pregnancy have also been linked to greater risks of autism.

How has the research evolved in recent years?

In 2021 an international panel of experts looked at evidence from human and animal studies of paracetamol use in pregnancy. Their consensus statement warned that paracetamol use during pregnancy may alter fetal development, with negative effects on child health.

Read more: Take care with paracetamol when pregnant — but don’t let pain or fever go unchecked

Last month a group of researchers from Harvard University examined the association between paracetamol and neurodevelopmental disorders, including autism and attention-deficit hyperactivity disorder (ADHD), in existing research.

They identified 46 studies: 27 reported links between taking paracetamol in pregnancy and neurodevelopmental disorders in the offspring, nine showed no significant link, and four indicated paracetamol use was associated with a lower risk.

The most notable study in their review, due to its sophisticated statistical analysis, covered almost 2.5 million children born in Sweden between 1995 and 2019, and was published in 2024.

The authors found a marginally increased risk of autism and ADHD associated with paracetamol use during pregnancy. However, when they analysed matched full-sibling pairs, to account for genetic and environmental influences the siblings shared, they found no evidence of an increased risk of autism, ADHD or intellectual disability associated with paracetamol use.

Siblings of autistic children have a 20% chance of also being autistic. Environmental factors within a home can also affect the risk of autism. To account for these influences, the researchers compared the outcomes of siblings where one child was exposed to paracetamol in utero and the other wasn’t, or when the siblings had different levels of exposure.

The authors of the 2024 study concluded that associations found in other studies may be attributable to “confounding” factors: influences that can distort research findings.

A further review published in February examined the strengths and limitations of the published literature on the effect of paracetamol use in pregnancy on the child’s risk of developing ADHD and autism. The authors noted most studies were difficult to interpret because they had biases, including in how participants were selected, and didn’t account for confounding factors.

When confounding factors among siblings were accounted for, they found any associations weakened substantially. This suggests shared genetic and environmental factors may have caused bias in the original observations.

Working out what causes or increases the risk of autism

A key consideration when assessing the risk of paracetamol and any link to neurodevelopmental disorders is how best to account for the many other factors that may be relevant.

We still don’t know all the causes of autism, but several genetic and non-genetic factors have been implicated: the mother’s medication use, illnesses, body mass index, alcohol consumption, smoking status, pregnancy complications including pre-eclampsia and fetal growth restriction, the mother and father’s ages, whether the child is an older or younger sibling, the newborn’s Apgar scores to determine their state of health, breastfeeding, genetics, socioeconomic status, and societal characteristics.

It’s particularly hard to measure the last three characteristics, so they are often not appropriately taken into account in studies.

In other cases, it may not be the use of paracetamol that matters, but rather the mother’s underlying illness or the reason paracetamol is being taken, such as the fever associated with an infection, that influences child development.

Read more: Autism is not a scare story: What parents need to know about medications in pregnancy, genetic risk and misleading headlines

I’m pregnant, what does this mean for me?

There is no clear evidence that paracetamol has any harmful effects on an unborn baby.

But as with any medicine taken during pregnancy, paracetamol should be used at the lowest effective dose for the shortest possible time.

If you’re pregnant and develop a fever, it’s important to treat this fever, including with paracetamol.

If the recommended dose of paracetamol doesn’t control your symptoms or you’re in pain, contact your doctor, midwife or maternity hospital for further medical advice.

Remember, the advice for taking ibuprofen and other NSAIDs (non-steroidal anti-inflammatory drugs) when you’re pregnant is different. Ibuprofen (sold under the brand name Nurofen) should not be taken during pregnancy.

The Conversation

Nicholas Wood previously received funding from the NHMRC and has held a Churchill fellowship.

Debra Kennedy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. It’s OK to use paracetamol in pregnancy. Here’s what the science says about the link with autism – https://theconversation.com/its-ok-to-use-paracetamol-in-pregnancy-heres-what-the-science-says-about-the-link-with-autism-265768

The thousand-year story of how the fork crossed Europe, and onto your plate today

Source: The Conversation – Global Perspectives – By Darius von Guttner Sporzynski, Historian, Australian Catholic University

John of Gaunt dining with the King of Portugal, Chronique d’Angleterre, vol 3, late 14th century. Wikimedia Commons

In today’s world, we barely think about picking up a fork. It is part of a standard cutlery set, as essential as the plate itself. But not that long ago, this now-ordinary utensil was viewed with suspicion, derision and even moral outrage.

It took centuries, royal marriages and a bit of cultural rebellion to get the fork from the kitchens of Constantinople (today’s Istanbul) onto the dining tables of Europe.

A scandalous utensil

Early versions of forks have been found in Bronze Age China and Ancient Egypt, though they were likely used for cooking and serving.

The Romans had elegant forks made of bronze and silver, but again, mainly for food preparation.

Bronze serving fork from Ancient Rome, c 2nd–3rd century CE.
Metropolitan Museum of Art

Eating with a fork – especially a small, personal one – was rare.

By the 10th century, Byzantine elites used them freely, shocking guests from western Europe. And by around the 11th century, the table fork began to make regular appearances at mealtimes across the Byzantine empire.

Bronze forks made in Persia during the 8th or 9th century.
Wikimedia Commons

In 1004, the Byzantine Maria Argyropoulina (985–1007), sister of Emperor Romanos III Argyros, married the son of the Doge of Venice and scandalised the city by refusing to eat with her fingers. She used a golden fork instead.

Later, the theologian Peter Damian (1007–72) declared Maria’s vanity in eating with “artificial metal forks” instead of using the fingers God had given her was what brought about divine punishment in the form of her premature death in her 20s.

Yet by the 14th century, forks had become common in Italy, thanks in part to the rise of pasta.

It was far easier to eat slippery strands with a pronged instrument than with a spoon or knife. Italian etiquette soon embraced the fork, especially among the wealthy merchant classes.

And it was through this wealthy class that the fork would be introduced to the rest of Europe in the 16th century by two women.

Enter Bona Sforza

Born into the powerful Sforza family of Milan and the Aragon family of Naples, Bona Sforza (1494–1557) grew up in a world where forks were not only in use; they were in fashion.

Her family was used to the refinements of Renaissance Italy: court etiquette, art patronage, ostentatious dress for women and men, and elegant dining.

When she married Sigismund I, king of Poland and grand duke of Lithuania, in 1518, she became queen and arrived in a region where dining customs were different. The use of forks was largely unknown.

Bowls, forks and a spoon made in Venice in the 16th century.
© The Trustees of the British Museum, CC BY-NC-SA

At courts in Lithuania and Poland, cutlery use was practical and limited. Spoons and knives were common for eating soups and stews, and the cutting of meat, but most food was eaten with the hands, using bread or trenchers – thick slices of stale bread that soaked up the juices from the food – for assistance.

This method was not only economical but also deeply embedded in courtly and noble dining traditions, reflecting a social etiquette in which communal dishes and shared eating were the norm.

Bona’s court brought Italian manners to the region, introducing more vegetables, Italian wine and, most unusually, the table fork.

Though her use of it was likely restricted at first to formal or court settings, it made an impression. Over time, especially from the 17th century onwards, forks became more common among the nobility of Lithuania and Poland.

Catherine de’ Medici comes to France

Catherine de’ Medici (1519–89) was born into the powerful Florentine Medici family, niece of Pope Clement VII. In 1533, aged 14, she married the future King Henry II of France as part of a political alliance between France and the Papacy, bringing her from Italy to France.

Catherine de’ Medici introduced silver forks and Italian dining customs to the French court.

As with Bona Sforza, these arrived in Catherine’s trousseau. Her retinue also included chefs, pastry cooks and perfumers, along with artichokes, truffles and elegant tableware.

Her culinary flair helped turn court meals into theatre.

While legends exaggerate her influence, many dishes now claimed as French trace their roots to her Italian table: onion soup, duck à l’orange and even sorbet.

An Italian 15th century fork.
The Met

The ‘right’ way to eat

Like many travellers, the curious Englishman Thomas Coryat (1577–1617) brought tales of fork-using Italians back home in the early 1600s, where the idea still seemed laughably affected.

In England, using a fork in the early 1600s was a sign of pretension. Even into the 18th century, it was considered more masculine and more honest to eat with a knife and fingers.

But across Europe, change was underway. Forks began to be seen not just as tools of convenience, but as symbols of cleanliness and refinement.

In France, they came to reflect courtly civility. In Germany, specialised forks multiplied in the 18th and 19th centuries: for bread, pickles, ice cream and fish.

And in England, the fork’s use eventually became a class marker: the “right” way to hold it distinguished the polite from the uncouth.

An etching of an old man and a fork from 1888.
Rijksmuseum

As mass production took off in the 19th century, stainless steel made cutlery affordable, and the fork became ubiquitous. By then, the battle had shifted from whether to use a fork to how to use it properly.

Manuals on table manners now offered guidance on fork etiquette: no scooping, no stabbing, and always hold it tines down.

It took scandal, royal taste, and centuries of resistance for the fork to win its place at the table. Now it’s hard to imagine eating without it.

The Conversation

Darius von Guttner Sporzynski receives funding from the National Science Centre, Poland as a partner investigator in the grant “Polish queen consorts in the 15th and 16th centuries as wives and mothers” (2021/43/B/HS3/01490).

ref. The thousand-year story of how the fork crossed Europe, and onto your plate today – https://theconversation.com/the-thousand-year-story-of-how-the-fork-crossed-europe-and-onto-your-plate-today-260704