Africa’s traditional fermented foods – and why we should keep consuming them

Source: By Florence Malongane, Senior lecturer, University of South Africa

Fermentation is a process where microorganisms like bacteria and yeast work together to break down complex carbohydrates and protein into simpler, more digestible forms.

The fermentation process not only extends the shelf life of food but also enhances its nutritional content. During fermentation, beneficial microorganisms produce essential vitamins and minerals.

Fermented foods have many benefits and have been shown to reduce inflammation and infections.

As nutrition researchers we undertook an in-depth assessment of fermented African foods and their potential to improve human health cost-effectively.

By gaining a deeper understanding of the diverse microbiomes present in various fermented indigenous African foods, we aim to enhance human health through targeted dietary interventions.

Going back in history

Fermentation as a preservation method can be traced back a long way.

In the Middle East, between 10,000 and 15,000 years ago, people moved from foraging and hunting to organised food cultivation and production.

Evidence of the alcoholic fermentation of barley into beer and grapes into wine dates back to between 4000 and 2000 BC.

In the Middle East and the Indian subcontinent milk was fermented to create yoghurt and other sweet and savoury fermented milks. White cabbage pickles and fermented olives are very popular in the Middle East.

In India and the Philippines, rice flour was fermented to produce products like noodles.

Africa’s traditions

In Africa, fermented foods hold great cultural significance and health benefits, yet this topic has not been thoroughly researched.

Foods are mostly fermented at home and trends vary by region.

The primary ingredients in African fermented foods are mainly cereals, tubers and milk.

Many of the fermented plant foods grow on their own in the wild and are often considered weeds in cropped and cultivated land. These include amaranths, Bidens pilosa, cleome and Corchorus species. The increased availability of African indigenous foods could expand the range of commercially available fermented African foods.

While some products like marula beer have entered the commercial market, the overall consumption of fermented foods among Africans has declined.

This drop is largely due to the widespread availability of refrigeration systems and a growing loss of interest in traditional African foods.

Improving health in Africa

Fermented root plants such as cassava and yam have been shown to decrease creatinine levels, which may indicate enhanced renal function and kidney health. This suggests that the fermentation process not only enriches these root plants with probiotics, but also promotes better physiological responses in the body.

Among the diverse array of fruits native to Africa, baobab and marula are the most popular fermented fruits. Fermenting them enhances their protein and fibre content. Consuming fermented baobab fruits has been shown to reduce the activity of α-amylase, an enzyme that may have implications for regulating blood sugar.

Millet, maize, African rice and sorghum are the most commonly fermented grains in Africa. When these foods are fermented, they can help reduce blood glucose levels, serum triglycerides and cholesterol.

Amahewu is a traditional beverage produced through the fermentation of sorghum or maize, mostly enjoyed in South Africa and Zimbabwe for its tangy flavour and smooth texture.

In Kenya, a similar fermented cereal beverage known as uji is made of millet and flavoured with milk, adding to its rich and nutritious profile.

Ghana boasts its own version called akasa, which is prepared from a combination of sorghum, corn and millet and is known for its unique taste and cultural significance.

In Sudan, the beverage referred to as abreh varies in preparation but shares the same essence of fermentation. In Nigeria, ogi is another fermented cereal paste, made from similar small grains such as sorghum and millet, which produces a creamy beverage.

Fermenting sorghum and millet provides essential nutrients and supports metabolic health and gut function.

In Nigeria, fermented cereal beverages are widely used to control diarrhoea in young children.

Sour milk is the most widely consumed fermented food in Africa, celebrated for its rich flavour and numerous health benefits.

During the fermentation process, bacteria convert the milk sugar, called lactose, into lactic acid.

Kulenaoto, a traditional fermented milk drink enjoyed in Kenya, is known for its creamy texture and slightly tangy flavour. South Africa produces sour milk known as amasi. Nigeria and Togo share a common fermented product known as wara, which is made from soybeans and is often served as a snack.

In Ghana, nyamie is a rich, thick yogurt-like product. In Cameroon, pendidam is a unique fermented milk product that is cherished for its distinctive taste and nutritional benefits, making it a staple in many households.

Regular consumption of fermented sour milk can play a significant role in weight management, decreasing visceral (gut) fat, which is a risk factor for cardiovascular diseases.

Moreover, fermented milk offers valuable protection against folate deficiency.

Looking forward

African fermented foods could be the easiest and least expensive way of introducing beneficial microbes to the gastrointestinal tract, replacing expensive pharmaceutical probiotics.

These processes should be encouraged, and younger generations need to be exposed to the benefits of these traditions.

Vanishing plants could be preserved and distributed through seed banks.

The tradition of fermentation should be encouraged at both household and commercial levels to promote overall health.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Africa’s traditional fermented foods – and why we should keep consuming them – https://theconversation.com/africas-traditional-fermented-foods-and-why-we-should-keep-consuming-them-243287

Beating malaria: what can be done with shrinking funds and rising threats

Source: By Taneshka Kruger, UP ISMC: Project Manager and Coordinator, University of Pretoria

Healthcare in Africa faces a perfect storm: high rates of infectious diseases like malaria and HIV, a rise in non-communicable diseases, and dwindling foreign aid.

In 2021, nearly half of the sub-Saharan African countries relied on external financing for more than a third of their health expenditure. But donor fatigue and competing global priorities, such as climate change and geopolitical instability, have placed malaria control programmes under immense pressure. These funding gaps now threaten hard-won progress and ultimately malaria eradication.

The continent’s healthcare funding crisis isn’t new. But its consequences are becoming more severe. As financial contributions shrink, Africa’s ability to respond to deadly diseases like malaria is being tested like never before.

Malaria remains one of the world’s most pressing public health threats. According to the World Health Organization there were an estimated 263 million malaria cases and 597,000 deaths globally in 2023 – an increase of 11 million cases from the previous year.

The WHO African region bore the brunt, with 94% of cases and 95% of deaths. It is now estimated that a child under the age of five dies roughly every 90 seconds due to malaria.

Yet, malaria control efforts since 2000 have averted over 2 billion cases and saved nearly 13 million lives globally. Breakthroughs in diagnostics, treatment and prevention have been critical to this progress. They include insecticide-treated nets, rapid diagnostic tests, artemisinin-based combination therapies (drug combinations to prevent resistance) and malaria vaccines.

Since 2017, progress has been flat. If the funding gap widens, the risk is not just stagnation; it's backsliding. Several emerging threats, such as climate change and funding shortfalls, could undo the gains of the early 2000s to mid-2010s.

New challenges

Resistance to drugs and insecticides, and strains of the malaria parasite Plasmodium falciparum that standard diagnostics can't detect, have emerged as challenges. There have also been changes in mosquito behaviour, with vectors increasingly biting outdoors, making bed nets less effective.

Climate change is shifting malaria transmission patterns. And the invasive Asian mosquito species Anopheles stephensi is spreading across Africa, particularly in urban areas.

Add to this the persistent issue of cross-border transmission, and growing funding shortfalls and aid cuts, and it’s clear that the fight against malaria is at a critical point.

As the world observes World Malaria Day 2025 under the theme “Malaria ends with us: reinvest, reimagine, reignite”, the call to action is urgent. Africa must lead the charge against malaria through renewed investment, bold innovation, and revitalised political will.

Reinvest: Prevention is the most cost-effective intervention

We – researchers, policymakers, health workers and communities – need to think smarter about funding. The economic logic of prevention is simple. It’s far cheaper to prevent malaria than to treat it. The total cost of procuring and delivering long-lasting insecticidal nets typically ranges between US$4 and US$7 each and the nets protect families for years. In contrast, treating a single case of severe malaria may cost hundreds of dollars and involve hospitalisation.

In high-burden countries, malaria can consume up to 40% of public health spending.

In Tanzania, for instance, malaria contributes to 30% of the country’s total disease burden. The broader economic toll – lost productivity, work and school absenteeism, and healthcare costs – is staggering. Prevention through long-lasting insecticidal nets, chemoprevention and health education isn’t only humane; it’s fiscally responsible.

Reimagine: New tools, local solutions

We cannot fight tomorrow’s malaria with yesterday’s tools. Resistance, climate-driven shifts in transmission, and urbanisation are changing malaria’s patterns.

This is why re-imagining our approach is urgent.

African countries must scale up innovations like the RTS,S/AS01 vaccine and next-generation mosquito nets. But more importantly, they must build their own capacity to develop, test and produce these tools.

This requires investing in research and development, regional regulatory harmonisation, and local manufacturing.

There is also a need to build leadership capacity within malaria control programmes to manage this adaptive disease with agility and evidence-based decision-making.

Reignite: Community and collaboration matter

Reigniting the malaria fight means shifting power to those on the frontlines. Community health workers remain one of Africa’s greatest untapped resources. Already delivering malaria testing, treatment and health education in remote areas, they can also be trained to manage other health challenges.

Integrating malaria prevention into broader community health services makes sense. It builds resilience, reduces duplication, and ensures continuity even when external funding fluctuates.

Every malaria intervention delivered by a trusted, local health worker is a step towards community ownership of health.

Strengthened collaboration between partners, governments, cross-border nations, and local communities is also needed.

The cost of inaction is unaffordable

Africa’s malaria challenge is part of a deeper health systems crisis. By 2030, the continent will require an additional US$371 billion annually to deliver basic primary healthcare – about US$58 per person.

For malaria in 2023 alone, US$8.3 billion was required to meet global control and elimination targets, yet only US$4 billion was mobilised. This gap has grown consistently, increasing from US$2.6 billion in 2019 to US$4.3 billion in 2023.

The shortfall has led to major gaps in the coverage of essential malaria interventions.

The solution does not lie in simply spending more, but in spending smarter by focusing on prevention, building local innovation, and strengthening primary healthcare systems.

The responsibility is collective. African governments must invest boldly and reform policies to prioritise prevention.

Global partners must support without dominating. And communities must be empowered to take ownership of their health.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Beating malaria: what can be done with shrinking funds and rising threats – https://theconversation.com/beating-malaria-what-can-be-done-with-shrinking-funds-and-rising-threats-255126

Malaria scorecard: battles have been won and advances made, but the war isn’t over

Source: By Shüné Oliver, Medical scientist, National Institute for Communicable Diseases

Sub-Saharan Africa continues to bear the brunt of malaria cases in the world. In this region 11 countries account for two-thirds of the global burden.

World Malaria Day is marked on 25 April. What progress has been made against the disease, where are the gaps and what’s being done to plug them?

As scientists who research malaria in Africa, we believe that the continent can defeat the disease. New, effective tools have been added to the malaria toolbox.

Researchers and malaria programmes, however, must strengthen collaborations. This will ensure the limited resources are used in ways that make the most impact.

The numbers

Some progress has been made, but in some cases there have been reverses.

  • Between 2000 and 2015 there was an 18% reduction in new cases from 262 million in 2000 to 214 million in 2015. Since then, progress has stalled.

  • The World Health Organization estimates that approximately 2.2 billion cases have been prevented between 2000 and 2023. Additionally, 12.7 million deaths have been avoided. In 2025, 45 countries are certified as malaria free. Only nine of those countries are in Africa. These include Egypt, Seychelles and Lesotho.

  • The global target set by the WHO was to reduce new cases by 75% compared to cases in 2015. Africa should have reported approximately 47,000 cases in 2023. Instead there were 246 million.

  • Almost every African country with ongoing malaria transmission experienced an increase in malaria cases in 2023. Exceptions to this were Rwanda and Liberia.

So why is progress stagnating and in many cases reversing?

The setbacks

Effective malaria control is extremely challenging. Malaria parasite and mosquito populations evolve rapidly. This makes them difficult to control.

Africa is home to malaria mosquitoes that prefer biting humans to other animals. These mosquitoes have also adapted to avoid insecticide-treated surfaces.

It has been shown in South Africa that mosquitoes may feed on people inside their homes, but will avoid resting on the sprayed walls.

Mosquitoes have also developed mechanisms to resist the effects of insecticides. Malaria vector resistance to certain insecticides used in malaria control is widespread in endemic areas. Resistance levels vary around Africa.

Resistance to the pyrethroid class is most common. Organophosphate resistance is rare, but present in west Africa. As mosquitoes become resistant to the chemicals used for mosquito control, both the spraying of houses and insecticide-treated nets become less effective. However, in regions with high malaria cases, nets still provide physical protection despite resistance.

An additional challenge is that malaria parasites continue to develop resistance to anti-malarial drugs. In 2007 the first evidence began to emerge in south-east Asia that parasites were developing resistance to artemisinins. These are key drugs in the fight against malaria.

Recently this has been shown to be happening in some African countries too. Artemisinin resistance has been confirmed in Eritrea, Rwanda, Tanzania and Uganda. Molecular markers of artemisinin resistance were recently detected in parasites from Namibia and Zambia.

Malaria parasites have also developed mutations that prevent them from being detected by the most widely used rapid diagnostic test in Africa.

Countries in the Horn of Africa, where parasites with these mutations are common, have changed the malaria rapid diagnostic tests used to ensure early diagnosis.

The progress

Nevertheless, the fight against malaria has been strengthened by novel control strategies.

Firstly, after more than 30 years of research, two malaria vaccines – RTS,S and R21 – have finally been approved by the WHO. These are being deployed in 19 African countries.

These vaccines have reduced disease cases and deaths in the high-risk under-five-years-old age group. They have reduced cases of severe malaria by approximately 30% and deaths by 17%.

Secondly, effectiveness of long-lasting insecticide-treated nets has been improved.

New insecticides have been approved for use. Chemical components that help to manage resistance have also been included in the nets.

Thirdly, novel tools are showing promise. One option is attractive toxic sugar baits. This is because sugar is what mosquitoes naturally eat. Biocontrol by altering the native gut bacteria of mosquitoes may also prove effective.

Fourthly, reducing mosquito populations by releasing sterilised male or genetically modified mosquitoes into wild mosquito populations is also showing promise. Trials are currently happening in Burkina Faso. Genetically sterilised males have been released on a small scale. This strategy has shown promise in reducing the population.

Fifthly, two new antimalarials are expected to be available in the next year or two. Artemisinin-based combination therapies are standard treatment for malaria. An improvement to this is triple artemisinin-based combination therapy. This is a combination of this drug with an additional antimalarial. Studies in Africa and Asia have shown these triple combinations to be very effective in controlling malaria.

The second new antimalarial is the first non-artemisinin-based drug to be developed in over 20 years. Ganaplacide-lumefantrine has been shown to be effective in young children. Once available, it can be used to treat parasites that are resistant to artemisinin. This is because it has a completely different mechanism of action.

The end game

It has been several years since the malaria control toolbox has been strengthened with novel tools and strategies that target both the vector and the parasite. This makes it an ideal time to double down in the fight against this deadly disease.

In 2020, the WHO identified 25 countries with the potential to stop malaria transmission within their borders by 2025. While none of these countries eliminated malaria, some have made significant progress. Costa Rica and Nepal reported fewer than 100 cases. Timor-Leste reported only one case in recent years.

Three southern African countries are included in this group: Botswana, Eswatini and South Africa. Unfortunately, all these countries showed increases in cases in 2023.

With the new tools, these and other countries can eliminate malaria, getting us closer to the dream of a malaria-free world.

The Conversation

Shüné Oliver receives funding from the National Research Foundation of South Africa and the South African Medical Research Council. She is associated with both the National Institute for Communicable Diseases and the Wits Research Institute for Malaria.

Jaishree Raman receives funding from the Gates Foundation, Global Fund, Wellcome Trust, National Research Foundation, National Institute for Communicable Diseases, South African Medical Research Council, and the Research Trust. She is affiliated with the National Institute for Communicable Diseases, the Wits Institute for Malaria Research, University of Witwatersrand, and the Institute for Sustainable Malaria Control, University of Pretoria.

ref. Malaria scorecard: battles have been won and advances made, but the war isn’t over – https://theconversation.com/malaria-scorecard-battles-have-been-won-and-advances-made-but-the-war-isnt-over-255230

African women at higher risk of pre-eclampsia – a dangerous pregnancy complication

Source: By Annettee Nakimuli, Associate Professor of Obstetrics and Gynecology, Makerere University

Pre-eclampsia is a danger to pregnant women. It’s a complication characterised by high blood pressure and organ damage, arising during the second half of pregnancy, in labour or in the first week after delivery.

It plays a major role in about 16% of the deaths of pregnant women in sub-Saharan Africa.

And it’s on the rise: between 2010 and 2018, the incidence of pre-eclampsia in Africa jumped by around 20%.

Pre-eclampsia usually occurs in young mothers during a first pregnancy. Girls under the age of 18 years are most at risk. The probability that a 15-year-old girl will die from complications of pregnancy is one in 150 in developing countries, versus one in 3,800 in developed countries, according to the World Health Organization.

Not only does pre-eclampsia pose a serious health threat to women, it also harms babies. It contributes to stillbirth, preterm birth and low birth weight.

Yet we still do not know enough about pre-eclampsia. This gap has driven my research into the disease.

I conducted the first genetic case-control study on pre-eclampsia among African women in comparison to European women over a decade ago for my PhD research.

My work revealed that both African and European populations have a gene (KIR AA genotype) that increases the chance of pre-eclampsia. However, African women are at greater risk of pre-eclampsia than other racial groups. This is because they’re more at risk of carrying a fetus with a C2-type HLA-C gene from the father. African populations have a higher frequency of this gene, which raises the likelihood of risky mother-fetus combinations.

An additional finding from my research is that genetic protection from pre-eclampsia works differently across populations – and African populations carry unique protective genes. However, even with these additional protections, African women are at greater risk of developing severe pre-eclampsia due to the other challenges, like access to healthcare and socio-economic constraints.

There’s inequality in the treatment of the condition too. In my experience, wealthier and better-educated African women often receive the necessary diagnosis and treatment. Poorer and less-educated African women too often do not.

Pre-eclampsia research, especially in Africa, requires a lot more funding, as does broader research related to the maternal health of African women.

Pre-eclampsia in Uganda

Around 287,000 women worldwide die during pregnancy and childbirth every year. Shockingly, 70% of these are African women.

Most of these deaths are preventable. For example, around 10% are the result of high blood pressure-related conditions during pregnancy.

Uganda’s Ministry of Health recorded in 2023 that out of 1,276 maternal deaths reported, 16% were associated with high blood pressure.

Hospitals are being overwhelmed by patients with the illness. For example, Kawempe National Referral Hospital in Kampala receives around 150 patients with the condition every month. It has set up a special ward to treat them.

The maternal mortality rate (death due to complications from pregnancy or childbirth) in Uganda is 284 per 100,000 live births. In Australia it is 2.94. The neonatal mortality rate (death during the first 28 completed days of life) is 19 per 1,000 live births in Uganda against 2.37 in Australia. Infant mortality (death before a child turns one) is 31 per 1,000 live births in Uganda versus 3.7 in Australia, according to the WHO’s Global Health Observatory.

This stark contrast highlights an enormous gap in care that the two countries’ pregnant mothers and babies receive.

Part of the problem in Uganda, as in many developing countries, is persistent challenges in healthcare infrastructure. There are shortages of healthcare workers, medical supplies and facilities, particularly in the rural areas.

Early detection is key

As a clinician and researcher working at the centre of Uganda’s healthcare system, I witness mothers arriving at hospitals already in a critical condition, with limited options to treat the complications associated with pre-eclampsia. It is heartbreaking.

The condition is both preventable and treatable if caught early. My research focuses on identifying biological signs of the likelihood of complications during pregnancy, using data analysis informed by artificial intelligence.

These predictive biomarkers, as they are called, enable us to categorise patients based on their risk levels and identify those most likely to benefit from specific treatments or preventive measures.

The precise causes of pre-eclampsia are not certain, but factors beyond genetics are thought to include problems with the immune system and inadequate development of the placenta. Much of what researchers know, however, comes from work done in high-income countries, often with a limited sample size of African women.

Consequently, the findings may not apply directly to the genetics of sub-Saharan African women. My research addresses this knowledge gap.

Building on my findings about genetic determinants, I am leading a research team at Makerere University to design interventions tailored to specific prevention and treatment strategies for African populations.

Raising pre-eclampsia awareness

Research alone is not enough. There is an urgent need to bridge the gap between research and practice.

During my fieldwork, I have witnessed first-hand how many Ugandan women are not aware of pre-eclampsia's warning signs and miss out on vital prenatal care. These warning signs often include headache, disturbances with vision, pain in the upper right side of the abdomen, and swelling of the legs.

But we can develop screening algorithms so that healthcare professionals can rapidly diagnose women at higher risk early in their pregnancy. Timely intervention, including specific treatment and plans for delivery, would reduce the risk of adverse outcomes for both mother and baby.

In my capacity as a national pre-eclampsia champion appointed by Uganda’s Ministry of Health, I am spearheading initiatives to raise awareness and improve access to maternal healthcare services.

Through community outreach programmes and educational campaigns, we want to empower all women, rich and poor, with knowledge about the condition and encourage them to seek medical assistance at an early stage.

More resources must be allocated to genetics research to realise our goals of prevention, early detection, diagnosis and treatment of pre-eclampsia and its associated complications.

This investment will drive the development of predictive technology for precise diagnosis, and enable timely intervention for at-risk mothers.

Moreover, investigating the genetic roots of pre-eclampsia could lead to novel therapies that reduce the need for costly medical procedures or prolonged care for those affected.

This would reduce the strain on already overburdened African healthcare systems.

The Conversation

Annettee Nakimuli receives funding from the Gates Foundation, GSK and the Royal Society.

ref. African women at higher risk of pre-eclampsia – a dangerous pregnancy complication – https://theconversation.com/african-women-at-higher-risk-of-pre-eclampsia-a-dangerous-pregnancy-complication-249222

There’s gold trapped in your iPhone – and chemists have found a safe new way to extract it

Source: The Conversation – Canada – By Justin M. Chalker, Professor of Chemistry, Flinders University

A sample of refined gold recovered from mining and e-waste recycling trials. Justin Chalker

In 2022, humans produced an estimated 62 million tonnes of electronic waste – enough to fill more than 1.5 million garbage trucks. This was up 82% from 2010 and is expected to rise to 82 million tonnes in 2030.

This e-waste includes old laptops and phones, which contain precious materials such as gold. Less than one quarter of it is properly collected and recycled. But a new technique colleagues and I have developed to safely and sustainably extract gold from e-waste could help change that.

Our new gold-extraction technique, which we describe in a new paper published today in Nature Sustainability, could also make small-scale gold mining less poisonous for people – and the planet.

Soaring global demand

Gold has long played a crucial role in human life. It has been a form of currency and a medium for art and fashion for centuries. Gold is also essential in modern industries including the electronics, chemical manufacture and aerospace sectors.

But while global demand for this precious metal is soaring, mining it is harmful to the environment.

Deforestation and use of toxic chemicals are two such problems. In formal, large-scale mining, highly toxic cyanide is widely used to extract gold from ore. While cyanide can be degraded, its use can cause harm to wildlife, and tailings dams which store the toxic byproducts of mining operations pose a risk to the wider environment.

In small-scale and artisanal mining, mercury is used extensively to extract gold. In this practice, the gold reacts with mercury to form a dense amalgam that can be easily isolated. The gold is then recovered by heating the amalgam to vaporise the mercury.

Small-scale and artisanal mining is the largest source of mercury pollution on Earth, and the mercury emissions are dangerous to the miners and pollute the environment. New methods are required to reduce the impacts of gold mining.

A bucket full of telephone circuit board parts. DAMRONG RATTANAPONG/Shutterstock

A safer alternative

Our interdisciplinary team of scientists and engineers has developed a new technique to extract gold from ore and e-waste. The aim was to provide a safer alternative to mercury and cyanide and reduce the health and environmental impacts of gold mining.

Many techniques have previously been reported for extracting gold from ore or e-waste, including mercury- and cyanide-free methods. However, most are limited in rate, yield, scale and cost. These methods also tend to consider only one step in the entire gold recovery process, neglecting recycling and waste management.

In contrast, our approach considered sustainability throughout the whole process of gold extraction, recovery and refining. Our new leaching technology uses a chemical commonly used in water sanitation and pool chlorination: trichloroisocyanuric acid.

When this widely available and low-cost chemical is activated with salt water, it can react with gold and convert it into a water-soluble form.

To recover the gold from the solution, we invented a sulphur-rich polymer sorbent. Polymer sorbents isolate a certain substance from a liquid or gas; ours is made by joining many copies of a key building block (a monomer) through a chain reaction.

Our polymer sorbent is interesting because it is derived from elemental sulphur: a low-cost and highly abundant feedstock. The petroleum sector generates more sulphur than it can use or sell, so our polymer synthesis is a new use for this underused resource.

Our polymer could selectively bind and remove gold from the solution, even when many other types of metals were present in the mixture.

The simple leaching and recovery methods were demonstrated on ore, circuit boards from obsolete computers and scientific waste. Importantly, we also developed methods to regenerate and recycle both the leaching chemical and the polymer sorbent. We also established methods to purify and recycle the water used in the process.

In developing the recyclable polymer sorbent, we invented some exciting new chemistry to make the polymer using light, and then “un-make” the sorbent after it bound gold. This recycling method converted the polymer back to its original monomer building block and separated it from the gold.

The recovered monomer could then be re-made into the gold-binding polymer: an important demonstration of how the process is aligned with a circular economy.

A long and complex road ahead

In future work, we plan to collaborate with industry, government and not-for-profit groups to test our method in small-scale mining operations. Our long-term aim is to provide a robust and safe method for extracting gold, eliminating the need for highly toxic chemicals such as cyanide and mercury.

There will be many challenges to overcome including scaling up the production of the polymer sorbent and the chemical recycling processes. For uptake, we also need to ensure that the rate, yield and cost are competitive with more traditional methods of gold mining. Our preliminary results are encouraging. But there is still a long and complex road ahead before our new techniques replace cyanide and mercury.

Our broader motivation is to support the livelihoods of the millions of artisanal and small-scale miners who rely on mercury to recover gold.

They typically operate in remote and rural regions with few other economic opportunities. Our goal is to support these miners economically while offering safer alternatives to mercury. Likewise, the rise of “urban mining” and e-waste recycling would benefit from safer and operationally simple methods for precious metal recovery.

Success in recovering gold from e-waste will also reduce the need for primary mining and therefore lessen its environmental impact.

The Conversation

Justin M. Chalker is an inventor on patents associated with the gold leaching and recovery technology. Both patents are wholly owned by Flinders University. This research was supported financially by the Australian Research Council and Flinders University. He has an ongoing collaboration with Mercury Free Mining and Adelaide Control Engineering: organisations that supported the developments and trials reported in this study.

ref. There’s gold trapped in your iPhone – and chemists have found a safe new way to extract it – https://theconversation.com/theres-gold-trapped-in-your-iphone-and-chemists-have-found-a-safe-new-way-to-extract-it-259817

How old are you really? Are the latest ‘biological age’ tests all they’re cracked up to be?

Source: The Conversation – Canada – By Hassan Vally, Associate Professor, Epidemiology, Deakin University

We all like to imagine we’re ageing well. Now a simple blood or saliva test promises to tell us by measuring our “biological age”. And then, as many have done, we can share how “young” we really are on social media, along with our secrets to success.

While chronological age is how long you have been alive, measures of biological age aim to indicate how old your body actually is, purporting to measure “wear and tear” at a molecular level.

The appeal of these tests is undeniable. Health-conscious consumers may see their results as reinforcing their anti-ageing efforts, or a way to show their journey to better health is paying off.

But how good are these tests? Do they actually offer useful insights? Or are they just clever marketing dressed up to look like science?

How do these tests work?

Over time, the chemical processes that allow our body to function, known as our “metabolic activity”, lead to damage and a decline in the activity of our cells, tissues and organs.

Biological age tests aim to capture some of these changes, offering a snapshot of how well, or how poorly, we are ageing on a cellular level.

Our DNA is also affected by the ageing process. In particular, chemical tags (methyl groups) attach to our DNA and affect gene expression. These changes occur in predictable ways with age and environmental exposures, in a process called methylation.

Research studies have used “epigenetic clocks”, which measure the methylation of our genes, to estimate biological age. By analysing methylation levels at specific sites in the genome from participant samples, researchers apply predictive models to estimate the cumulative wear and tear on the body.
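At its simplest, such a clock is a weighted sum of methylation levels. The sketch below illustrates the idea with made-up CpG sites and coefficients; real clocks (such as Horvath's) use hundreds of sites, with weights fitted by penalised regression on large cohorts.

```python
# Minimal sketch of how an epigenetic clock estimates age. The CpG sites
# and coefficients here are hypothetical, purely for illustration.

intercept = 20.0  # baseline age contribution (made-up figure)
weights = {       # hypothetical CpG site -> years per unit of methylation
    "cg0001": 35.0,
    "cg0002": -12.0,
    "cg0003": 18.0,
}

def estimate_biological_age(methylation):
    """Weighted sum of methylation beta values (each between 0 and 1)."""
    return intercept + sum(w * methylation[site] for site, w in weights.items())

sample = {"cg0001": 0.6, "cg0002": 0.3, "cg0003": 0.5}
print(round(estimate_biological_age(sample), 1))
```

The real statistical work lies in choosing which sites to measure and fitting the weights against large reference cohorts; the final prediction step is essentially this simple.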

What does the research say about their use?

Although the science is rapidly evolving, the evidence underpinning the use of epigenetic clocks to measure biological ageing in research studies is strong.

Studies have shown epigenetic biological age estimation is a better predictor of the risk of death and ageing-related diseases than chronological age.

Epigenetic clocks also have been found to correlate strongly with lifestyle and environmental exposures, such as smoking status and diet quality.

In addition, they have been found to be able to predict the risk of conditions such as cardiovascular disease, which can lead to heart attacks and strokes.

Taken together, a growing body of research indicates that at a population level, epigenetic clocks are robust measures of biological ageing and are strongly linked to the risk of disease and death.

But how good are these tests for individuals?

While these tests are valuable when studying populations in research settings, using epigenetic clocks to measure the biological age of individuals is a different matter and requires scrutiny.

For testing at an individual level, perhaps the most important consideration is the “signal to noise ratio” (or precision) of these tests. That is, can repeated tests of a single sample from an individual yield widely differing results?

A study from 2022 found results for identical samples deviated by up to nine years. So an identical sample from a 40-year-old may indicate a biological age as low as 35 years (a cause for celebration) or as high as 44 years (a cause for anxiety).

While these tests have improved significantly over the years, their precision still varies considerably between commercial providers. So depending on who you send your sample to, your estimated biological age may vary widely.
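The effect of imperfect precision is easy to illustrate with a toy simulation. Here the “true” biological age and the measurement noise are made-up illustrative figures, not properties of any real provider's test:

```python
import random

random.seed(1)

# Simulate repeated tests of the SAME sample: a fixed true biological age
# plus provider measurement noise (both values are illustrative).
true_age = 40.0
noise_sd = 3.0  # standard deviation of the test error, in years

readings = [random.gauss(true_age, noise_sd) for _ in range(10)]
print(f"min {min(readings):.1f}, max {max(readings):.1f}, "
      f"spread {max(readings) - min(readings):.1f} years")
```

Even modest noise produces a multi-year spread between repeat results of the same sample, consistent with the deviations of up to nine years reported in the 2022 study.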

Another limitation is that there is currently no standardisation of methods for this testing. Commercial providers perform these tests in different ways and use different algorithms to estimate biological age from the data.

As you would expect for commercial operators, providers don’t disclose their methods. So it’s difficult to compare companies and determine who provides the most accurate results – and what you’re getting for your money.

A third limitation is that while epigenetic clocks correlate well with ageing, they are simply a “proxy” and are not a diagnostic tool.

In other words, they may provide a general indication of ageing at a cellular level. But they don’t offer any specific insights about what the issue may be if someone is found to be “ageing faster” than they would like, or what they’re doing right if they are “ageing well”.

So regardless of the result of your test, all you’re likely to get from the commercial provider of an epigenetic test is generic advice about what the science says is healthy behaviour.

Are they worth it? Or what should I do instead?

While companies offering these tests may have good intentions, remember their ultimate goal is to sell you these tests and make a profit. And at a cost of around A$500, they’re not cheap.

While the idea of using these tests as a personalised health tool has potential, it is clear that we are not there yet.

For this to become a reality, tests will need to become more reproducible, standardised across providers, and validated through long-term studies that link changes in biological age to specific behaviours.

So while one-off tests of biological age make for impressive social media posts, for most people they represent a significant cost and offer limited real value.

The good news is we already know what we need to do to increase our chances of living longer and healthier lives. These include:

  • improving our diet
  • increasing physical activity
  • getting enough sleep
  • quitting smoking
  • reducing stress
  • prioritising social connection.

We don’t need to know our biological age in order to implement changes in our lives right now to improve our health.

The Conversation

Hassan Vally does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How old are you really? Are the latest ‘biological age’ tests all they’re cracked up to be? – https://theconversation.com/how-old-are-you-really-are-the-latest-biological-age-tests-all-theyre-cracked-up-to-be-257710

Bats get fat to survive hard times. But climate change is threatening their survival strategy

Source: The Conversation – Canada – By Nicholas Wu, Lecturer in Wildlife Ecology, Murdoch University

Rudmer Zwerver/Shutterstock

Bats are often cast as the unseen night-time stewards of nature, flitting through the dark to control pest insects, pollinate plants and disperse seeds. But behind their silent contributions lies a remarkable and underappreciated survival strategy: seasonal fattening.

Much like bears and squirrels, bats around the world bulk up to get through hard times – even in places where you might not expect it.

In a paper published today in Ecology Letters, we analysed data from bat studies around the world to understand how bats use body fat to survive seasonal challenges, whether it’s a freezing winter or a dry spell.

The surprising conclusion? Seasonal fattening is a global phenomenon in bats, not just limited to those in cold climates.

Even bats in the tropics, where it’s warm all year, store fat in anticipation of dry seasons when food becomes scarce. That’s a survival strategy that’s been largely overlooked. But it may be faltering as the climate changes, putting entire food webs at risk.

Climate shapes fattening strategies

We found bats in colder regions predictably gain more weight before winter.

But in warmer regions with highly seasonal rainfall, such as tropical savannas or monsoonal forests, bats also fatten up. In tropical areas, it’s not cold that’s the enemy, but the dry season, when flowers wither, insects vanish and energy is hard to come by.

The extent of fattening is impressive. Some species increased their body weight by more than 50%, which is a huge burden for flying animals that already use a lot of energy to move around. This highlights the delicate balancing act bats perform between storing energy and staying nimble in the air.

Sex matters, especially in the cold

The results also support the “thrifty females, frisky males” hypothesis.

In colder climates, female bats used their fat reserves more sparingly than males – a likely adaptation to ensure they have enough energy left to raise young when spring returns. Since females typically emerge from hibernation to raise their young, conserving fat through winter can directly benefit their reproductive success.

Interestingly, this sex-based difference vanished in warmer climates, where fat use by males and females was more similar, likely because more food is available in warmer climates. It’s another clue that climate patterns intricately shape behaviour and physiology.

Climate change is shifting the rules

Beyond the biology, our study points to a more sobering trend. Bats in warm regions appear to be increasing their fat stores over time. This could be an early warning sign of how climate change is affecting their survival.

Climate change isn’t just about rising temperatures. It’s also making seasons more unpredictable.

Bats may be storing more energy in advance of dry seasons that are becoming longer or harder to predict. That’s risky, because it means more foraging, more exposure to predators and potentially greater mortality.

The implications can ripple outward. Bats help regulate insect populations, fertilise crops and maintain healthy ecosystems. If their survival strategies falter, entire food webs could feel the effects.

Fat bats, fragile futures

Our study changes how we think about bats. They are not just passive victims of environmental change but active strategists, finely tuned to seasonal rhythms. Yet their ability to adapt has limits, and those limits are being tested by a rapidly changing world.

By understanding how bats respond to climate, we gain insights into broader ecosystem resilience. We also gain a deeper appreciation for one of nature’s quiet heroes – fattening up, flying through the night and holding ecosystems together, one wingbeat at a time.

The Conversation

Nicholas Wu was the lead author of a funded Australian Research Council Linkage Grant awarded to Christopher Turbill at Western Sydney University.

ref. Bats get fat to survive hard times. But climate change is threatening their survival strategy – https://theconversation.com/bats-get-fat-to-survive-hard-times-but-climate-change-is-threatening-their-survival-strategy-259560

What’s the difference between an eating disorder and disordered eating?

Source: The Conversation – Canada – By Gemma Sharp, Researcher in Body Image, Eating and Weight Disorders, Monash University

PIKSEL/Getty

Following a particular diet or exercising a great deal are common and even encouraged in our health and image-conscious culture. With increased awareness of food allergies and other dietary requirements, it’s also not uncommon for someone to restrict or eliminate certain foods.

But these behaviours may also be the sign of an unhealthy relationship with food. You can have a problematic pattern of eating without being diagnosed with an eating disorder.

So, where’s the line? What is disordered eating, and what is an eating disorder?

What is disordered eating?

Disordered eating describes negative attitudes and behaviours towards food and eating that can lead to a disturbed eating pattern.

It can involve:

  • dieting

  • skipping meals

  • avoiding certain food groups

  • binge eating

  • misusing laxatives and weight-loss medications

  • inducing vomiting (sometimes known as purging)

  • exercising compulsively.

Disordered eating is the term used when these behaviours are not frequent and/or severe enough to meet an eating disorder diagnosis.

Not everyone who engages in these behaviours will develop an eating disorder. But disordered eating – particularly dieting – usually precedes an eating disorder.

What is an eating disorder?

Eating disorders are complex psychiatric illnesses that can negatively affect a person’s body, mind and social life. They’re characterised by persistent disturbances in how someone thinks, feels and behaves around eating and their bodies.

To make a diagnosis, a qualified health professional will use a combination of standardised questionnaires, as well as more general questioning. These will determine how frequent and severe the behaviours are, and how they affect day-to-day functioning.

Examples of clinical diagnoses include anorexia nervosa, bulimia nervosa, binge eating disorder and avoidant/restrictive food intake disorder.

How common are eating disorders and disordered eating?

The answer can vary quite radically depending on the study and how it defines disordered behaviours and attitudes.

An estimated 8.4% of women and 2.2% of men will develop an eating disorder at some point in their lives. This is most common during adolescence.

Disordered eating is also particularly common in young people, with 30% of girls and 17% of boys aged 6–18 reporting these behaviours.

Although the research is still emerging, it appears disordered eating and eating disorders are even more common in gender diverse people.

Can we prevent eating disorders?

There is some evidence eating disorder prevention programs that target risk factors – such as dieting and concerns about shape and weight – can be effective to some extent in the short term.

The issue is most of these studies last only a few months. So we can’t determine whether the people involved went on to develop an eating disorder in the longer term.

In addition, most studies have involved girls or women in late high school and university. By this age, eating disorders have usually already emerged. So, this research cannot tell us as much about eating disorder prevention and it also neglects the wide range of people at risk of eating disorders.

Is orthorexia an eating disorder?

In defining the line between eating disorders and disordered eating, orthorexia nervosa is a contentious issue.

The name literally means “proper appetite” and involves a pathological obsession with proper nutrition, characterised by a restrictive diet and rigidly avoiding foods believed to be “unhealthy” or “impure”.

These disordered eating behaviours need to be taken seriously as they can lead to malnourishment, loss of relationships, and overall poor quality of life.

However, orthorexia nervosa is not an official eating disorder in any diagnostic manual.

Additionally, with the popularity of special diets (such as keto or paleo), time-restricted eating, and dietary requirements (for example, gluten-free) it can sometimes be hard to decipher when concerns about diet have become disordered, or may even be an eating disorder.

For example, around 6% of people have a food allergy. Emerging evidence suggests they are also more likely to have restrictive types of eating disorders, such as anorexia nervosa and avoidant/restrictive food intake disorder.

However, following a special diet such as veganism, or having a food allergy, does not automatically lead to disordered eating or an eating disorder.

It is important to recognise people’s different motivations for eating or avoiding certain foods. For example, a vegan may restrict certain food groups due to animal rights concerns, rather than disordered eating symptoms.

What to look out for

If you’re concerned about your own relationship with food or that of a loved one, here are some signs to look out for:

  • preoccupation with food and food preparation

  • cutting out food groups or skipping meals entirely

  • obsession with body weight or shape

  • large fluctuations in weight

  • compulsive exercise

  • mood changes and social withdrawal.

It’s always best to seek help early. But it is never too late to seek help.


In Australia, if you are experiencing difficulties in your relationships with food and your body, you can contact the Butterfly Foundation’s national helpline on 1800 33 4673 (or via their online chat).

For parents concerned their child might be developing concerning relationships with food, weight and body image, Feed Your Instinct highlights common warning signs, provides useful information about help seeking and can generate a personalised report to take to a health professional.

The Conversation

Gemma Sharp receives funding from an NHMRC Investigator Grant. She is a Professor and the Founding Director and Member of the Consortium for Research in Eating Disorders, a registered charity.

ref. What’s the difference between an eating disorder and disordered eating? – https://theconversation.com/whats-the-difference-between-an-eating-disorder-and-disordered-eating-256787

‘Do not eat’: what’s in those little desiccant sachets and how do they work?

Source: The Conversation – Canada – By Kamil Zuber, Senior Industry Research Fellow, Future Industries Institute, University of South Australia

towfiqu ahamed/Getty Images

When you buy a new electronic appliance, shoes, medicines or even some food items, you often find a small paper sachet with the warning: “silica gel, do not eat”.

What exactly is it, is it toxic, and can you use it for anything?

The importance of desiccants

That little sachet is a desiccant – a type of material that removes excess moisture from the air.

It’s important during the transport and storage of a wide range of products because we can’t always control the environment. Humid conditions can cause damage through corrosion, decay, and the growth of mould and microorganisms.

This is why manufacturers include sachets with desiccants to make sure you receive the goods in pristine condition.

The most common desiccant is silica gel. The small, hard and translucent beads are made of silicon dioxide (like most sands or quartz) – a hydrophilic or water-loving material. Importantly, the beads are porous on the nano-scale, with pore sizes only 15 times larger than the radius of their atoms.

Silica gel looks somewhat like a sponge when viewed with scanning electron microscopy.
Trabelsi et al. (2009), CC BY-NC-ND

These pores have a capillary effect, meaning they condense and draw moisture into the bead similar to how trees transport water through the channelled structures in wood.

In addition, sponge-like porosity makes their surface area very large. A single gram of silica gel can have an area of up to 700 square metres – almost four tennis courts – making them exceptionally efficient at capturing and storing water.
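As a quick sanity check on that comparison (using the dimensions of a singles tennis court, 23.77 m × 8.23 m):

```python
# Back-of-envelope check: 700 m^2 of internal surface per gram of silica
# gel, expressed in singles tennis courts (23.77 m x 8.23 m each).
court_area = 23.77 * 8.23      # ~195.6 m^2
gel_area_per_gram = 700.0      # m^2, the figure quoted above
print(round(gel_area_per_gram / court_area, 2))
```

This gives roughly 3.6 courts per gram of gel, i.e. “almost four”.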

Is silica gel toxic?

The “do not eat” warning is easily the most prominent text on silica gel sachets.

According to health professionals, most silica beads found in these sachets are non-toxic and don’t present the same risk as silica dust, for example. They mainly pose a choking hazard, which is good enough reason to keep them away from children and pets.

However, if silica gel is accidentally ingested, it’s still recommended to contact health professionals to determine the best course of action.

Some variants of silica gel contain a moisture-sensitive dye. One particular variant, based on cobalt chloride, is blue when the desiccant is dry and turns pink when saturated with moisture. While the dye is toxic, in desiccant pellets it is present only in a small amount – approximately 1% of the total weight.

Two plastic containers, one with blue translucent beads, one with pink.
Indicating silica gel with cobalt chloride – ‘fresh’ on the left, ‘used’ on the right.
Reza Rio/Shutterstock

Desiccants come in other forms, too

Apart from silica gel, a number of other materials are used as moisture absorbers and desiccants. These are zeolites, activated alumina and activated carbon – materials engineered to be highly porous.

Another desiccant type you’ll often see in moisture absorbers for larger areas like pantries or wardrobes is calcium chloride. It typically comes in a box filled with powder or crystals found in most hardware stores, and is a type of salt.

Kitchen salt – sodium chloride – attracts water and easily becomes lumpy. Calcium chloride works in the same way, but has an even stronger hygroscopic effect and “traps” the water through a hydration reaction. Once the salt is saturated, you’ll see liquid separating in the container.

A shelf in a wardrobe with a purple box with white powder inside in the corner.
Closet and pantry dehumidifiers like this one typically contain calcium chloride which binds water.
Healthy Happy/Shutterstock

I found something that doesn’t seem to be silica gel – what is it?

Some food items such as tortilla wraps, noodles, beef jerky, and some medicines and vitamins contain slightly different sachets, labelled “oxygen absorbers”.

These small packets don’t contain desiccants. Instead, they have chemical compounds that “scavenge” or bond oxygen.

Their purpose is similar to desiccants – they extend the shelf life of food products and sensitive chemicals such as medicines. But they do so by directly preventing oxidation. When some foods are exposed to oxygen, their chemical composition changes and can lead to decay (apples turning brown when cut is an example of oxidation).

There is a whole range of compounds used as oxygen absorbers. These chemicals have a stronger affinity for oxygen than the protected substance. They range from simple compounds such as iron, which “rusts” by using up oxygen, to more complex ones, such as plastic films that work when exposed to light.

A pile of various sachets and sheets found inside products.
Some of the sachets in your products are oxygen absorbers, not desiccants – but they may look similar.
Sergio Yoneda/Shutterstock

Can I reuse a desiccant?

Although desiccants and dehumidifiers are considered disposable, you can relatively easily reuse them.

To “recharge” or dehydrate silica gel, you can place it in an oven at approximately 115–125°C for 2–3 hours, although you shouldn’t do this if it’s in a plastic sachet that could melt in the heat.

Interestingly, due to how they bind water, some desiccants require temperatures well above the boiling point of water to dehydrate (for example, calcium chloride hydrates completely dehydrate at 200°C).

After dehydration, silica gel sachets may be useful for drying small electronic items (like your phone after you accidentally dropped it into water), keeping your camera dry, or preventing your family photos and old films from sticking to each other.

This is a good alternative to the questionable method of using uncooked rice, as silica gel doesn’t decompose and won’t leave starch residues on your things.

The Conversation

Kamil Zuber does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. ‘Do not eat’: what’s in those little desiccant sachets and how do they work? – https://theconversation.com/do-not-eat-whats-in-those-little-desiccant-sachets-and-how-do-they-work-258398

What is a payment for environmental services?

Source: The Conversation – Indonesia – By Alain Karsenty, Environmental economist, researcher and international consultant, Cirad

They pay people or companies for a particular use of land that is associated, rightly or wrongly, with an improvement in ecosystem services. IZE-5tyle/Shutterstock

Payment for environmental services (PES) is, by its official definition, an economic instrument for the environment that consists of paying a person or organisation that provides an environmental service. The French government has created a scheme of this kind, aimed in particular at farmers. The goal: to reward virtuous practices that help maintain and restore the ecosystem services from which all of society benefits.


By September 1 2026, all European Union member states must draw up their nature restoration plans. These aim to halt the erosion of biodiversity and revitalise ecosystems across their territories. This policy initiative puts the notion of payment for environmental services in the spotlight.

Environmental services are rendered to other humans – wherever they are in time and space – through an intentional action to preserve, restore or enhance an ecosystem service. A farmer who stops using pesticides or creates patches of nectar-producing plants provides an environmental service that supports the pollination ecosystem service rendered by bees.

Ecosystem services, in turn, are defined by the Millennium Ecosystem Assessment as “the direct and indirect benefits that people derive from nature”. Some insects provide a pollination service; plant cover provides benefits in terms of regulating runoff and fixing carbon.

So should we speak of payments for ecosystem services, or payments for environmental services?

Ecosystem service versus environmental service

According to the French assessment of ecosystems and ecosystem services (EFESE), it is more logical to speak of payments for environmental services. The EFESE clearly distinguishes two notions:

  • Ecosystem service: a function of an ecosystem whose use yields a benefit – for the farmer, or more broadly for society.

  • Environmental service: an action or management practice by an actor, such as a farmer, that improves the state of the environment by increasing an ecosystem service.

How a typical payment for environmental services works.
Banque des territoires

Muradian and colleagues view payments for environmental services as “transfers of resources between social actors, with the aim of creating incentives to align individual and/or collective land-use decisions with the social interest in the management of natural resources”. They pay people or companies for a particular use of land that is associated, rightly or wrongly, with an improvement in ecosystem services.

Seven key features

Payments for environmental services can be characterised by seven key features:

  • Direct payments: payments are made directly to the providers of environmental services, often through an intermediary such as a programme. The recipients are the land and resource users whose practices have a direct impact on ecosystem services. Payments can be in cash, or take the form of investments or services, such as securing land tenure. The amounts can be negotiated or, more often, set by a scale fixed by the programme.

  • “Beneficiary pays”: the direct or indirect beneficiaries of the environmental services – individuals, communities, companies, or institutions acting in the public interest – pay for those services.

  • Voluntary and contractual nature: recipients of the payments join the programme freely through contractual agreements that commit them for a certain period. The funding of PES, however, is not necessarily voluntary and can be compulsory (higher water bills, levies on various products or services, taxes).

  • Conditionality: payments depend on the continued provision of the environmental services, in the sense of compliance with contractual obligations. Payments are generally based on the implementation of management practices that the parties contractually agree are likely to deliver these benefits.

  • Targeting: PES are generally concentrated on areas of ecological importance and/or under threat (geographical targeting). Social targeting – reserving PES for low-income producers or Indigenous communities – is less common. However, when payments are made per unit of area, the amounts paid are often capped or tapered above a certain area threshold.

  • Additionality: payments are made for actions that go beyond what land managers would have done even without payment. This condition is controversial, because it risks excluding from PES the communities or producers whose practices are already ecologically virtuous. In practice it is rarely enforced, given the cost of verification against a business-as-usual scenario. The same goes for legal additionality: in principle, no one should be paid to refrain from practices that regulations already prohibit, yet many PES programmes do so anyway to ease the enforcement of the law.

  • Land tenure security: a property title is not a sine qua non. Recipients of payments must at minimum hold a “right of exclusion” and management rights over the land they use.


The series “L’envers des mots” (“The other side of words”) is produced with the support of the Délégation générale à la langue française et aux langues de France of the French Ministry of Culture

The Conversation

Alain Karsenty does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.

ref. What is a payment for environmental services? – https://theconversation.com/quest-ce-que-le-paiement-pour-service-environnemental-256607