From sovereignty to sustainability: a brief history of ocean governance

Source: The Conversation – By Kevin Parthenay, Professor of Political Science, member of the Institut Universitaire de France (IUF), Université de Tours

The United Nations Ocean Conference (UNOC 3) will open in Nice, France, on June 9, 2025. It is the third conference of its kind, following events in New York in 2017 and Lisbon in 2022. Co-hosted by France and Costa Rica, the conference will bring together 150 countries and nearly 30,000 individuals to discuss the sustainable management of our planet’s oceans.

This event is presented as a pivotal moment, but it is actually part of a significant shift in marine governance that has been going on for decades. While ocean governance was once designed to protect the marine interests of states, nowadays it must also address the numerous climate and environmental challenges facing the oceans.

Media coverage of this “political moment”, however, should not overshadow the urgent need to reform the international law applicable to the oceans. Failing that, this summit will risk being nothing more than another platform for vacuous rhetoric.

To understand what is at stake, it is helpful to begin with a brief historical overview of marine governance.

The meaning of ocean governance

Ocean governance has changed radically over the past few decades. The focus has shifted from the interests of states and the corresponding body of international law, solidified in the 1980s, to a multilateral approach initiated at the end of the Cold War, involving a wide range of actors (international organizations, NGOs, businesses, etc.).

This governance has gradually moved from a system of obligations pertaining to different marine areas and the regimes of sovereignty associated with them (territorial seas, exclusive economic zones (EEZs), and the high seas) to a system that takes into consideration the “health of the oceans.” The aim of this new system is to manage the oceans in line with the sustainable development goals.

Understanding how this shift occurred can help us grasp what is at stake in Nice. The 1990s were marked by declarations, summits and other global initiatives. However, as evidenced below, the success of these numerous initiatives has so far been limited. This explains why we are now seeing a return to an approach more firmly rooted in international law, as evidenced by the negotiations on the international treaty on plastic pollution, for example.

The “Constitution of the Seas”

Efforts to codify the law of the sea began with the Hague Conference of 1930. However, the structure of marine governance was only truly defined in the 1980s, with the adoption of the United Nations Convention on the Law of the Sea (UNCLOS) in 1982.

UNOC 3 is a direct offshoot of this convention: discussions on sustainable ocean management stem from the limitations of this founding text, often referred to as the “Constitution of the Seas”.

UNCLOS was adopted in December 1982 in Montego Bay, Jamaica, and came into force in November 1994, after a lengthy process of international negotiations and once the required 60 states had ratified the text. At the outset, in the midst of a crisis of multilateralism, the discussions focused on the interests of developing countries, especially coastal ones. The United States managed to exert its influence in this arena without ever officially ratifying the Convention. Since then, the convention has been a pillar of marine governance.

It established new institutions, including the International Seabed Authority, entrusted with the responsibility of regulating the exploitation of mineral resources on the seabed in areas that fall outside the scope of national jurisdiction. UNCLOS is the source of nearly all international case law on the subject.

Although the convention did define maritime areas and regulate their exploitation, new challenges quickly emerged. On the one hand, the Convention was weakened by the nearly twelve-year delay between its adoption and its entry into force. On the other hand, the text also became outdated due to new developments in the use of the seas, particularly technological advances in fishing and seabed exploitation.

The early 1990s marked a turning point in the traditional maritime legal order. The management of the seas and oceans came to be viewed from an environmental perspective, a process driven by major international conferences and declarations such as the Rio Declaration (1992), the Millennium Declaration (2000), and the Rio+20 Summit (2012). These resulted in the 2030 Agenda and the Sustainable Development Goals (SDGs), the UN’s 17 goals aimed at protecting the planet and the world’s population by 2030 (with SDG 14, “Life Below Water”, directly addressing issues related to the oceans).


The United Nations Conference on Environment and Development (UNCED, or Earth Summit), held in Rio de Janeiro, Brazil, in 1992, ushered in the era of “sustainable development” and, thanks to scientific discoveries made in the previous decade, helped link environmental and maritime issues.

From 2008 to 2015, environmental issues grew in importance, as evidenced by the regular adoption of environmental and climate resolutions.

A shift in UN language

Biodiversity and the sustainable use of the oceans (SDG 14) have been recurring themes on the international agenda since 2015, with ocean-related issues now including acidification, plastic pollution and the decline of marine biodiversity.

The United Nations General Assembly resolution on oceans and the law of the sea (LOS) is a particularly useful tool for tracking this evolution: drafted annually since 1984, the resolution has covered all aspects of the United Nations maritime regime while reflecting new issues and concerns.

This evolution is also reflected in the choice of words. Environmental terms that were initially absent from the text have become more prevalent since the 2000s: while LOS resolutions from 1984 to 1995 focused mainly on the implementation of the treaty and the economic exploitation of marine resources, more recent resolutions have adopted the vocabulary of sustainability, ecosystems and the marine environment.

Toward a new law of the oceans?

As awareness of ocean issues and their link to climate change has grown, the oceans have gradually become a global “final frontier” in terms of knowledge.

The types of stakeholders involved in ocean issues have also changed. The expansion of the ocean agenda has been driven by a more “environmentalist” orientation, with scientific communities and environmental NGOs standing at the forefront of this battle. This approach, which represents a shift away from the monopoly once held by international law and legal practitioners, is clearly a positive development.

However, marine governance has so far relied mainly on non-binding declaratory measures (such as the SDGs) and remains ineffective. A cycle of legal consolidation toward a “new law of the oceans” therefore appears to be underway, and the challenge is now to supplement international maritime law with a new set of binding instruments.

Among these, the BBNJ agreement is arguably the most ambitious: since 2004, negotiators have been working to fill the gaps in the United Nations Convention on the Law of the Sea (UNCLOS) by creating an instrument on marine biodiversity in areas beyond national jurisdiction.

The agreement addresses two major concerns for states: sovereignty and the equitable distribution of resources.

Adopted in 2023, this historic agreement has yet to enter into force. Sixty ratifications are required, and to date only 29 states have ratified the treaty (France among them, in February 2025).

The BBNJ process is therefore at a crossroads. The priority today is not to make new commitments or waste time on complicated high-level declarations, but to address concrete and urgent issues of ocean management, such as the frantic quest for critical minerals launched in the context of the Sino-American rivalry. That quest is exemplified by Donald Trump’s signing of a presidential decree in April 2025 allowing seabed mining, a decision that violates the International Seabed Authority’s well-established rules on the exploitation of these deep-sea resources.

At a time when U.S. unilateralism is leading to a policy of fait accompli, the UNOC 3 should, more than anything and within the framework of multilateralism, consolidate the existing obligations regarding the protection and sustainability of the oceans.

The Conversation

Kevin Parthenay is a member of the Institut Universitaire de France (IUF).

Rafael Mesquita does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. From sovereignty to sustainability: a brief history of ocean governance – https://theconversation.com/from-sovereignty-to-sustainability-a-brief-history-of-ocean-governance-258200

How a postwar German literary classic helped eclipse painter Emil Nolde’s relationship to Nazism

Source: The Conversation – By Ombline Damy, Doctoral candidate in General and Comparative Literature, Sciences Po

Emil Nolde, Red Clouds, watercolour on handmade paper, 34.5 x 44.7 cm. Emil Nolde/Museo Nacional Thyssen-Bornemisza, Madrid, CC BY-NC-ND

Paintings by German artist Emil Nolde (1867-1956) were recently on display at the Musée Picasso in Paris as part of an exhibition on what the Nazis classified as “degenerate art”. At first glance, his works fit perfectly, but recent research shows that Nolde’s relationship to Nazism is much more nuanced than the exhibition revealed.

The German Lesson: a postwar literary classic

While Nolde was one of the many victims of the Third Reich’s repressive responses to “degenerate art”, he was also one of Nazism’s great admirers. The immense popularity of The German Lesson (1968) by author Siegfried Lenz, however, greatly contributed to creating the legend of Nolde as a martyr of the Nazi regime.

La Leçon d’allemand (The German Lesson), Siegfried Lenz, Pavillons Poche

The cover of the French edition, which was on sale in the Musée Picasso bookstore, subtly echoes one of Nolde’s works, Hülltoft Farm, which hung in the exhibition.

Set against the backdrop of Nazi policies on “degenerate art”, the novel is about a conflict between a father and son. It addresses in literary form the central postwar issue of Vergangenheitsbewältigung, a term referring to the individual and collective work of German society on coming to terms with its Nazi past.

The German Lesson was met with huge success upon publication. Since then, it has become a classic of postwar German literature. Over 2 million copies have been sold across the world, and the novel has been translated into more than 20 languages. It is still studied in Germany as part of the national school curriculum. Adding to its popularity, the book was adapted for the screen in 1971 and in 2019. More than 50 years after its publication, The German Lesson continues to shape the way we think about Nazi Germany.

Max Ludwig Nansen, a fictional painter turned martyr

Set in Germany in the 1950s, the novel is told through the eyes of Siggi, a young man incarcerated in a prison for delinquent youths. Asked to pen an essay on the “joys of duty”, he dives into his memories of a childhood in Nazi Germany as the son of a police officer.

He remembers that his father, Jens Ole Jepsen, was given an order to prevent his own childhood friend, Max Ludwig Nansen, from painting. As a sign of protest against the painting ban, Nansen created a secret collection of paintings titled “the invisible pictures”. Because he was young enough to appear innocent, Siggi was used by his father to spy on the painter.

Siggi found himself torn between the two men, who related to duty in radically opposite ways. While Jepsen thought it his duty to follow the orders given to him, Nansen saw art as his only duty. Throughout the novel, Siggi becomes increasingly close to the painter, whom he sees as a hero, all the while distancing himself from his father, who in turn is perceived as a fanatic.

The novel’s point of view, that of a child, demands of its reader that they complete Siggi’s omissions or partial understanding of the world around him with their adult knowledge. This deliberately allusive narrative style enables the author to elude the topic of Nazism – or at least to hint at it in a covert way, thus making the novel acceptable to a wide German audience at the time of its publication in 1968.

Nevertheless, the book leaves little room for doubt on the themes it tackles. While Nazism is never explicitly named, the reader will inevitably recognize the Gestapo (the political police of the regime) when Siggi speaks of the “leather coats” who arrest Nansen. Readers will also identify the ban on painting issued to Nansen as a part of Nazi policies on “degenerate art”. And, what’s more, they will undoubtedly perceive the real person hiding behind the fictional character of Max Ludwig Nansen: Emil Nolde, born Hans Emil Hansen.


Emil Nolde, a real painter turned legend

Much like his fictional counterpart Max Ludwig Nansen, the painter Emil Nolde fell victim to Nazi policies aimed at artists identified as “degenerate”. More than 1,000 of his artworks were confiscated, some of which were integrated into the 1937 travelling exhibition on “degenerate art” orchestrated by the regime. Nolde was banned from the German art academy, and he was forbidden to sell and exhibit his work.

A photograph of Nazi propagandist Joseph Goebbels’ visit to the exhibition titled Entartete Kunst (Degenerate Art) in Munich, 1937. At left, from top, two paintings by Emil Nolde: Christ and the Sinner (1926) and The Wise and the Foolish Virgins (1910), a painting that has since disappeared.
Wikimedia

After the collapse of the Nazi regime, the tide turned for this “degenerate” artist. Postwar German society glorified him as a victim and opponent of Nazi politics, an image which Nolde carefully fostered. In his memoirs, he claimed to have been forbidden to paint by the regime, and to have created a series of “unpainted pictures” in a clandestine act of resistance.

Countless exhibits on Nolde, in Germany and around the world, served to perpetuate the myth of a talented painter, fallen victim to the Nazi regime, who decided to fight back. His works even made it into the hallowed halls of the German chancellery. Helmut Schmidt, chancellor of the Federal Republic of Germany from 1974 to 1982, and Germany’s former chancellor Angela Merkel decorated their offices with his paintings.

The popularity of The German Lesson, inspired by Nolde’s life, further solidified the myth – until the real Nolde and the fictional Nansen became fully inseparable in Germany’s collective imagination.

Twilight of an idol

Yet, the historical figure and the fictional character could not be more different. Research conducted for exhibits on Nolde in Frankfurt in 2014 and in Berlin in 2019 revealed the artist’s true relationship to Nazism to the wider public.

Nolde was indeed forbidden from selling and exhibiting his works by the Nazi regime. But he was not forbidden from painting. The series of “unpainted pictures”, which he claimed to have created in secret, are in fact a collection of works put together after the war.

What’s more, Nolde joined the Nazi Party as early as 1934. To make matters worse, he also hoped to become an official artist of the regime, and he was profoundly antisemitic. He was convinced that his work was the expression of a “German soul” – with all the racist undertones that such an affirmation suggests. He relentlessly tried to convince Goebbels and Hitler that his paintings, unlike those of “the Jews”, were not “degenerate”.

Why, one might ask, did more than 70 years go by before the truth about Nolde came out?

Yes, the myth built by Nolde himself and solidified by The German Lesson served to eclipse historical truth. Yet this seems to be only part of the story. In Nolde’s case, like in many others that involve facing a fraught national past, it looks like fiction was a great deal more attractive than truth.

In Lenz’s book, the painter Nansen claims that “you will only start to see properly […] when you start creating what you need to see”. By seeing in Nolde the fictional character of Nansen, Germans created a myth they needed to overcome a painful past. A hero, who resisted Nazism. Beyond the myth, reality appears to be more complex.

The Conversation

Ombline Damy received funding from la Fondation Nationale des Sciences Politiques (National Foundation of Political Sciences, or FNSP) for her thesis.

ref. How a postwar German literary classic helped eclipse painter Emil Nolde’s relationship to Nazism – https://theconversation.com/how-a-postwar-german-literary-classic-helped-eclipse-painter-emil-noldes-relationship-to-nazism-258310

Defence firms must adopt a ‘flexible secrecy’ to innovate for European rearmament

Source: The Conversation – By Sihem BenMahmoud-Jouini, Associate Professor, HEC Paris Business School

In the face of US President Donald Trump’s wavering commitments and Russian President Vladimir Putin’s inscrutable ambitions, the talk in European capitals is all about rearmament.

To that end, the European Commission has put forward an €800 billion spending scheme designed to “quickly and significantly increase expenditures in defence capabilities”, in the words of Commission President Ursula von der Leyen.

But funding is only the first of many challenges involved when pursuing military innovation. Ramping up capabilities “quickly and significantly” will prove difficult for a sector that must keep pace with rapid technological change.

Of course, defence firms don’t have to do it alone: they can select from a wide variety of potential collaborators, ranging from small and medium-sized enterprises (SMEs) to agile start-ups. Innovative partnerships, however, require trust and a willingness to share vital information, qualities that appear incompatible with the need for military secrecy.

That is why rearming Europe requires a new approach to secrecy.

A paper I co-authored with Jonathan Langlois of HEC and Romaric Servajean-Hilst of KEDGE Business School examines the strategies used by one leading defence firm (which we, for our own secrecy-related reasons, renamed “Globaldef”) to balance open innovation with information security. The 43 professionals we interviewed – including R&D managers, start-up CEOs and innovation managers – were not consciously working from a common playbook. However, their nuanced and dynamic approaches could serve as a cohesive role model for Europe’s defence sector as it races to adapt to a changing world.

How flexible secrecy enables innovation

Our research took place between 2018 and 2020. At the time, defence firms were turning to open innovation to compensate for the withdrawal of key support, as government spending on military R&D fell markedly across OECD countries. Even though the current situation involves more funding, the need for external innovation remains, as a way to speed up access to knowledge.

When collaborating to innovate, firms face what open innovation scholars have termed “the paradox of openness”, wherein the value to be gained by collaborating must be weighed against the possible costs of information sharing. In the defence sector – unlike, say, in consumer products – being too liberal with information could not only lead to business losses but to grave security risks for entire nations, and even prosecution for the executives involved.

Although secrecy was a constant concern, Globaldef’s managers often found themselves in what one of our interviewees called a “blurred zone” where some material could be interpreted as secret, but sharing it was not strictly off-limits. In cases like these, opting for the standard mode in the defence industry – erring on the side of caution and remaining tight-lipped – would make open innovation impossible.


Practices that make collaboration work

Studying transcripts of more than 40 interviews along with a rich pool of complementary data (emails, PowerPoint presentations, crowdsourcing activity, etc.), we discerned that players at Globaldef had developed fine-grained practices for maintaining and modulating secrecy, even while actively collaborating with civilian companies.

Our research identifies these practices as either cognitive or relational. Cognitive practices acted as strategic screens, masking the most sensitive aspects of Globaldef’s knowledge without throttling information flow to the point of preventing collaboration.

Depending on the type of project, cognitive practices might consist of one or more of the following:

  • Encryption: relabelling knowledge components to hide their nature and purpose.

  • Obfuscation: selectively blurring project specifics to preserve secrecy while recruiting partners.

  • Simplification: paring a problem down to basic parameters to test the suitability of a partner without revealing true constraints.

  • Transposition: moving a problem from a military context to a civilian one.

Relational practices involved reframing the partnership itself, by selectively controlling the width of the aperture through which external parties could view Globaldef’s aims and project characteristics. These practices might include redirecting the focus of a collaboration away from core technologies, or introducing confidentiality agreements to expand information-sharing within the partnership while prohibiting communication to third parties.

When to shift strategy in defence projects

Using both cognitive and relational practices enabled Globaldef to skirt the pitfalls of its paradox. For example, in the early stages of open innovation, when the firm was scouting and testing potential partners, managers could widen the aperture (relational) while imposing strict limits on knowledge-sharing (cognitive). They could thereby freely engage with the crowd without violating Globaldef’s internal rules regarding secrecy.

As partnerships ripened and trust grew, Globaldef could gradually lift cognitive protections, giving partners access to more detailed and specific data. This could be counterbalanced by a tightening on the relational side, e.g. requiring paperwork and protocols designed to plug potential leaks.

As we retraced the firm’s careful steps through six real-life open innovation partnerships, we saw that the key to this approach was in knowing when to transition from one mode to the other. Each project had its own rhythm.

For one crowdsourcing project, the shift from low to high cognitive depth, and from high to low relational width, was quite sudden, occurring as soon as the partnership was formalised. This was because Globaldef’s partner needed accurate details and project parameters in order to solve the problem in question, so near-total openness and concomitant confidentiality had to be established at the outset.

In another case, Globaldef retained the cognitive blinders throughout the early phase of a partnership with a start-up. To test the start-up’s technological capacities, the firm presented its partner with a cognitively reframed problem. Only after the partner passed its initial trial was collaboration initiated on a fully transparent footing, driven by the need for the start-up to obtain defence clearance prior to co-developing technology with Globaldef.

How firms can lead with adaptive secrecy

Since we completed and published our research, much has changed geopolitically. But the high-stakes paradox of openness is still a pressing issue inside Europe’s defence firms. Managers and executives are no doubt grappling with the evident necessity for open innovation on the one hand and secrecy on the other.

Our research suggests that, like Globaldef, other actors in Europe’s defence sector can deftly navigate this paradox. Doing so, however, will require employing a more subtle, flexible and dynamic definition of secrecy rather than the absolutist, static one that normally prevails in the industry. The defence sector’s conception of secrecy must also progress from a primarily legal to a largely strategic framework.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and have disclosed no affiliations other than their research institutions.

ref. Defence firms must adopt a ‘flexible secrecy’ to innovate for European rearmament – https://theconversation.com/defence-firms-must-adopt-a-flexible-secrecy-to-innovate-for-european-rearmament-258302

No packaging, no problem? The potential drawbacks of bulk groceries

Source: The Conversation – By Fanny Reniou, Associate Professor (HDR), Université de Rennes

High-income professionals over the age of 50 make up 70% of all consumers of bulk products.
DCStudio/Shutterstock

The bulk distribution model has been in the news again lately, with well-known brands such as The Laughing Cow making their way into French supermarkets. Stakeholders in the bulk sector are seeking to introduce innovations in order to expand and democratise the concept. But is the bulk model such a clear-cut approach to consuming in a sustainable way?

Bulk can be described as a consumer practice with a lower impact on the environment, since it involves the sale of products with no packaging, plastic or unnecessary waste and the use of reusable containers by consumers. In this type of distribution, predetermined manufacturer packaging becomes a thing of the past.

In this model, distributors and consumers take on the task of packaging the product themselves to ensure the continuity of the multiple logistical and marketing functions that packaging usually fulfils. Unaccustomed to this new role, stakeholders in the bulk sector may make mistakes or act in ways that run counter to the environmental benefits that are generally expected to result from this practice.

Contrary to the usually positive discourse on bulk products, our research points to the perverse and harmful effects of bulk distribution. When bulk stakeholders are left to “cope with” this new task of packaging products, can bulk still be described as ecologically sound?

A new approach to packaging

Packaging has always played a key role. It performs multiple functions that are essential for product distribution and consumption:

  • Logistical functions to preserve, protect and store the product: packaging helps to limit damage and loss, particularly during transport.

  • Marketing functions for product or brand recognition, which is achieved by distinctive colours or shapes to create on-shelf appeal. Packaging also has a positioning function, visually conveying a particular range level, as well as an informative function, serving as a medium for communicating a number of key elements such as composition, best-before date, etc.

  • Environmental functions, such as limiting the size of packaging and promoting certain types of materials – in particular recycled and recyclable materials.

In the bulk market, it is up to consumers and distributors to fulfil these various functions in their own way: they may accord them greater or lesser importance, prioritizing some over others. Insofar as manufacturers no longer provide predetermined packaging for their products, consumers and distributors have to take on this task jointly.

Assimilation or accommodation

Our study of how consumers and retailers appropriate these packaging functions used a variety of data: 54 interviews with bulk aisle and store managers and consumers of bulk products, as well as 190 Instagram posts and 428 photos taken in people’s homes and in stores.

The study shows that there are two modes of appropriating packaging functions:

  • by assimilation – when individuals find ways to imitate typical packaging and its attributes

  • by accommodation – when they imagine new packaging and new ways of working with it

Bulk packaging can lead to hygiene problems if consumers reuse packaging for a new purpose.
GaldricPS/Shutterstock

Some consumers reuse industrial packaging, such as egg cartons and detergent cans, because of their proven practicality. But packaging may also mirror its owners’ identity. Some packaging is cobbled together, while other packaging is carefully chosen with an emphasis on certain materials like wax print, a fabric popular in West Africa and used here for reusable bags.


Once packaging disappears, so does relevant information

Appropriating the functions of packaging is not always easy. There is a “dark side” to bulk, with possible harmful effects on health or the environment, and social exclusion. Bulk can lead, for example, to hygiene-related problems or misinformation when consumers fail to label their jars correctly, or use packaging for another purpose. For example, using a glass juice bottle to store detergent can be hazardous if a household member is unaware of its contents.

Bulk shopping can also feel exclusionary to people with less culinary knowledge. (High-income professionals over the age of 50 make up 70% of all consumers of bulk products.) Once the packaging disappears, so does the relevant information: some consumers genuinely need packaging to recognize, store and know how to cook a product. Without this information, products may end up in the garbage can!

Our study also shows the ambivalence of the so-called “environmental function” of bulk shopping – the initial idea being that bulk should reduce the amount of waste generated by packaging. In fact, this function is not always fulfilled, as many consumers tend to buy a great many containers, along with other items such as labels and pens to customise them.

Some consumers’ priority is not so much to reuse old packaging, but to buy new storage containers, which are often manufactured in faraway lands! The result is the production of massive amounts of waste – the exact opposite of the original purpose of the bulk trade.

Lack of consumer guidance

After a period of strong growth, the bulk sector went through a difficult period during the Covid-19 pandemic, leading many specialist stores in France to close, according to a first survey on bulk sales and reuse. In supermarkets, though, some retailers invested in making their bulk aisles more attractive; in the absence of any effective guidance, however, consumers failed to make these aisles their own. Bulk aisles have become just one among a host of other aisles.

Things seem to be improving however, and innovation is on the rise. In France, 58% of the members of the “Bulk and Reuse Network” (réseau Vrac et réemploi) reported an increase in daily traffic between January and May 2023 compared with 2022.

Distributors need to adapt to changing regulations. These stipulate that, by 2030, stores of over 400 m² will have to devote 20% of their FMCG (fast-moving consumer goods) sales areas to bulk sales. Moreover, bulk sales made their official entry into French legislation with the law on the fight against waste and the circular economy (loi relative à la lutte contre le gaspillage et à l'économie circulaire), published in the French official gazette on February 11, 2020.

In this context, it is all the more necessary and urgent to support bulk stakeholders, so that they can successfully adopt the practice and develop it further.

The Conversation

Fanny Reniou has received funding from Biocoop as part of a research partnership.

Elisa Robert-Monnot has received funding from Biocoop as part of a research partnership and collaboration.

Sarah Lasri does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her research organisation.

ref. No packaging, no problem? The potential drawbacks of bulk groceries – https://theconversation.com/no-packaging-no-problem-the-potential-drawbacks-of-bulk-groceries-258305

Lower revenues, pricier loans: how flooding in Europe affects firms and the financial system they depend on

Source: The Conversation – (in Spanish) – By Serena Fatica, Principal Economist — Team Leader, Joint Research Centre (JRC)

In Europe, the fastest-warming continent, the intensification of extreme weather events and changes in precipitation patterns have led to widespread and catastrophic flooding. Last year, storms and flooding affected an estimated 413,000 people, resulting in the loss of at least 335 lives. Material damage is estimated to amount to at least €18 billion, according to the 2024 European State of the Climate report from the Copernicus Climate Change Service and the World Meteorological Organization.

The flooding in October that hit southeastern Spain and the Valencia province in particular took the heaviest toll. Intense and prolonged rainfall and river flooding led to 232 fatalities, and infrastructure damage and economic losses totalled around €16.5 billion. More than seven months later, the local economy has rebounded, thanks in part to public aid packages worth 0.5% of the country’s GDP. However, in early May, the same part of Spain found itself exposed again to the disruptive consequences of climate change when extreme weather hit.

The costs of flooding

The direct costs from the damage to public infrastructure and private assets are only part of the economic losses originating from flooding. The indirect costs might not be immediately visible, but they are no less significant. Business interruptions reduce firms' revenue and cash flows, straining liquidity and, in the worst cases, threatening their survival. In addition, the increasing likelihood of future flooding may be priced into the valuation of assets and real estate in areas exposed to these types of climate risks. Firms impacted by climate-related hazards might find it difficult to pay back loans or bonds, or to raise finance as physical assets that can be pledged as collateral for bank credit lose value. Ultimately, this can affect the stability of the financial system.

For these reasons, climate change is not just a long-term environmental issue, but a threat to our economy and financial systems now. Economists at the European Commission's Joint Research Centre (JRC) have been conducting research to better understand how the links between the business sector and the financial system amplify the impact of climate change.

A JRC study of flood events between 2007 and 2018 finds that flooding significantly worsened the performance of European firms. Manufacturers exposed to flooding experienced reductions in sales, number of employees and the value of their assets. These impacts occurred in the year following the flooding and tended to be persistent, with no clear signs of recovery seven years after the disaster. Some firms even went out of business. The study also finds that companies in flood-prone areas were better able to weather the shock than businesses exposed to less frequent flooding. This is consistent with the fact that adaptation and protection measures reduce the impacts of flooding.

Threats to smaller firms

Water damage is particularly disruptive for companies that are highly indebted. A second JRC study zooms in on the mechanisms whereby financing choices, and reliance on bank loans in particular, amplify the impact of climate change. This study focuses on loans extended to small and medium-sized enterprises (SMEs) in Italy, Spain and Belgium between 2008 and 2019. It was motivated by the idea that smaller firms, which are more financially fragile than larger ones, might also be more vulnerable to the localised impact of climate-related hazards, not least because of their limited capacity to geographically diversify their operations and access market-based finance. The study shows that the flood episodes under analysis strained SMEs' ability to meet their debt obligations. Flooded firms were more likely to incur delays in servicing their loans and eventually fail to repay them, even two years after the disaster.

In turn, this entails losses for the banks that finance these firms. In general, if banks anticipate the impact of flooding on business operations, they could be expected to divert lending toward safer borrowers or charge a higher interest rate on credit extended to at-risk firms. Indeed, the study finds evidence that prospective flood risk is priced into new loans. In the period under analysis, the “flood risk premium” was especially high for loans to smaller firms and for those granted by local, specialised banks, both of which tend to have geographically concentrated activities that are more exposed to disaster impacts. Loans to borrowers exposed to high flood risk were 12 percent more expensive, all things being equal.


Thus, flooding causes worse financial conditions for businesses and exposes the banking sector to losses on their loan portfolios. The numbers can be staggering: days after the October 2024 flooding, the Spanish Central Bank said that banks’ exposure in the affected areas would total €20 billion, with €13 billion in household loans and €7 billion in business loans (60% to SMEs), impacting 23,000 companies and 472,000 individuals.

With extreme weather events becoming more frequent and severe, the direct and indirect costs of climate change are projected to increase, unevenly affecting households, firms and territories across Europe. Increasing investments in adaptation, eg in flood defence, and closing the climate insurance protection gap – the uninsured portion of economic losses caused by natural hazards – are crucial to increase the resilience of our economies and financial systems and preserve the wellbeing of our societies. The complex structure of investment incentives calls for a multilayered approach, with a mix of private and public funding and risk-sharing mechanisms.

The Conversation

Serena Fatica does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her research organisation.

ref. Lower revenues, pricier loans: how flooding in Europe affects firms and the financial system they depend on – https://theconversation.com/lower-revenues-pricier-loans-how-flooding-in-europe-affects-firms-and-the-financial-system-they-depend-on-258755

Testing between intervals: a key to retaining information in long-term memory

Source: The Conversation – (in Spanish) – By Émilie Gerbier, Maîtresse de Conférence en Psychologie, Université Côte d’Azur

The proverb “practice makes perfect” highlights the importance of repetition to master a skill. This principle also applies to learning vocabulary and other material. In order to fight our natural tendency to forget information, it is essential to reactivate it in our memory. But, how often?

Research in cognitive psychology provides answers to this question. However, it is also important to understand the underlying principles of long-term learning in order to apply them in a useful and personalised way.

The ‘spacing effect’

There are two key principles for memorising information in the long term.

First, test yourself to learn and review content. It is much more effective to do this using question-and-answer cards than just rereading the material. After each attempt to recall pieces of information, review the ones that could not be retrieved.

The second principle is to space out reactivations over time. This phenomenon, known as the “spacing effect”, suggests that when reviews of specific content are limited to, for instance, three sessions, it is preferable to space them over relatively longer periods (eg every three days) rather than shorter ones (every day).

Reviewing material at long intervals requires more effort, because it is more difficult to recall information after three days than one. However, it is precisely this effort that reinforces memories and promotes long-term retention.

When it comes to learning, we must therefore be wary of effortlessness: easily remembering a lesson today does not indicate how likely we are to remember it in a month, even though this feeling of easiness can cause us to mistakenly believe that review is unnecessary.

Robert Bjork of the University of California coined the term “desirable difficulty” to describe an optimal level of difficulty between two extremes. The first extreme corresponds to learning that is too easy (and therefore ineffective in the long run), while the other extreme corresponds to learning that is too difficult (and therefore ineffective and discouraging).

Finding the right pace

There is a limit to how much time can pass between information retrievals. After a long delay, such as a year, information will have greatly declined in memory and will be difficult, if not impossible, to recall. This situation may generate negative emotions and force us to start learning from scratch, rendering our previous efforts useless.

The key is to identify the right interval between retrievals, ensuring it is not too long and not too short. The ideal interval varies depending on several factors, such as the type of information that needs to be learned or the history of that learning. Some learning software uses algorithms that take these factors into account to test each piece of information at the "ideal" time.

There are also paper-and-pencil methods. The simplest method is to follow an “expansive” schedule, which uses increasingly longer intervals between sessions. This technique is used in the “méthode des J” (method of days), which some students may be familiar with. The effectiveness of this method lies in a gradual strengthening of the memory.


When you first learn something, retention is fragile, and memorised content needs to be reactivated quickly if it is not to be forgotten. Each retrieval strengthens the memory, allowing the next retrieval opportunity to be delayed. Another consequence is that each retrieval is moderately difficult, which places the learner at a "desirable" level of difficulty.

Here is an example of an expansive schedule for a given piece of content: D1, D2, D5, D15, D44, D145, D415, and so on. In this schedule, the interval roughly triples from one session to the next: 24 hours between D1 and D2, then three days between D2 and D5, ten days between D5 and D15, and so on.
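A schedule like this can be sketched in a few lines of code. This is purely illustrative and not part of the research described here: the function name, the starting one-day gap and the strict threefold multiplier are assumptions, and a strict multiplier lands close to, but not exactly on, the days listed above (the article's example rounds its intervals).

```python
def expansive_schedule(first_interval=1, factor=3, sessions=7):
    """Generate review days for an expanding-interval schedule.

    Reviews start on day 1, and the gap between sessions is
    multiplied by `factor` each time (here, tripling).
    """
    day, gap, days = 1, first_interval, []
    for _ in range(sessions):
        days.append(day)
        day += gap
        gap *= factor
    return days

print(expansive_schedule())  # [1, 2, 5, 14, 41, 122, 365]
```

Varying `factor` between two and three gives schedules of similar shape; what matters, as the article notes, is the principle of gradually longer gaps, not the exact multiplier.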

Gradually incorporating new knowledge

There is no scientific consensus on the optimal interval schedule. However, carrying out the first retrieval on the day after the initial moment of learning (thus, on D2) seems beneficial, as a night’s sleep allows the brain to restructure and/or reinforce knowledge learned the previous day. The subsequent intervals can be adjusted according to individual constraints.

This method is flexible; if necessary, a session can be moved a few days earlier or later than the scheduled date without affecting long-term effectiveness. It is the principle of regular retrieval that is key here.

The expansive schedule also has a considerable practical advantage in that it allows new information to be gradually integrated. For instance, new content can be introduced on D3, because no session on the initial content is scheduled for that day. Adding content gradually makes it possible to memorise large amounts of information in a lasting way without spending more time studying it.

The other method is based on the Leitner box system. In this case, the length of interval before the next retrieval depends on the outcome of the attempt to retrieve information from memory. If the answer was easily retrieved, the next retrieval should happen in a week. If the answer was retrieved with difficulty, then three days need to elapse before the next test. If the answer could not be retrieved, the next test should take place the following day. With experience, you will be able to adjust these intervals and develop your own system.
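The rule described above can be written down as a small lookup. Again, this is an illustrative sketch: the function name and the labels for recall quality are assumptions, and the one-, three- and seven-day intervals are simply those given in the paragraph, which, as the article says, you can adjust with experience.

```python
def next_review_in_days(recall_quality):
    """Leitner-style scheduling: the next interval depends on how
    easily the answer was retrieved this time."""
    intervals = {
        "easy": 7,    # easily retrieved: review in a week
        "hard": 3,    # retrieved with difficulty: review in three days
        "failed": 1,  # not retrieved: review the following day
    }
    return intervals[recall_quality]

print(next_review_in_days("hard"))  # 3
```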

In short, effective and lasting learning not only requires that a certain amount of effort be made to retrieve information from memory, but a regular repetition of this process, at appropriate intervals, to thwart the process of forgetting.

The Conversation

Émilie Gerbier does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her research organisation.

ref. Testing between intervals: a key to retaining information in long-term memory – https://theconversation.com/testing-between-intervals-a-key-to-retaining-information-in-long-term-memory-246511

From Scrooge to science: how dairy might disrupt your sleep and dreams

Source: The Conversation – UK – By Timothy Hearn, Senior Lecturer in Bioinformatics, Anglia Ruskin University

New Africa/Shutterstock

Ebenezer Scrooge tried to wave away the ghost of Jacob Marley by blaming the apparition on “an undigested bit of beef … a crumb of cheese”. Charles Dickens might have been writing fiction, but the idea that late-night dairy can warp dreams has now gained scientific support.

Researchers in Canada surveyed 1,082 university students about their eating habits, sleep patterns and dreams. Remarkably, 40% reported that certain foods affected their sleep. Of that group, 20% blamed dairy – suggesting that Scrooge's midnight cheese might have had more of an impact than he realised.

Just 5.5% believed food changed their dreams, but among those respondents dairy again loomed large, second only to sugary desserts as a perceived trigger for bizarre or disturbing dreams.

The researchers asked about everything from nightmare frequency to food allergies and intolerances. A clear pattern emerged: participants who reported lactose intolerance were significantly more likely to have frequent nightmares. And the link was strongest in people who also experienced bloating or cramps.


Statistical modelling suggested the stomach distress partly explains the bad dreams. In other words, food that keeps the gut churning can also set the imagination spinning.

That gut–brain route makes physiological sense. Abdominal discomfort can jolt sleepers into lighter stages of sleep where vivid or negative dreams are most common. Inflammation and spikes in cortisol (a stress hormone) triggered by digestive upset may further shape the emotional tone of dreams, especially by amplifying anxiety or negativity.

Earlier work backs the idea. A 2015 survey of Canadian undergraduates found that nearly 18% linked what they ate to their dreams, with dairy the top suspect, while a 2022 online study of 436 dream enthusiasts reported that people who ate more sugary snacks remembered more nightmares.

The new study from Canada echoes a wider literature on diet and sleep. Diets rich in fibre, fruit and vegetables are associated with deeper, more refreshing sleep, whereas meals high in saturated fat and sugar predict lighter, more fragmented rest.

A man waking from a bad dream.
Stomach distress partly explains bad dreams.
Lysenko Andrii/Shutterstock

Eating late in the evening has been tied to poorer sleep quality and to an “evening chronotype” (that is, night owls), itself linked to nightmare frequency.

If future work confirms the cheese–nightmare connection, the implications could be practical. Nightmares affect about 4% of adults worldwide and are particularly common in post-traumatic stress disorder.

Drug treatments exist but carry side-effects. Adjusting the timing or composition of evening meals, or choosing low-lactose dairy options, would be a far cheaper, lower-risk intervention.

Gut-friendly diets such as the Mediterranean diet are already being explored for mood disorders; nightmares may be another frontier for nutritional psychiatry.

What the research can’t prove

That said, the new findings come with caveats. The sample was young, mostly healthy psychology students filling out online questionnaires. Food intake, lactose intolerance and nightmare frequency were all self-reported, so “recall biases” (inaccurate memory) or the power of suggestion could inflate the associations.

Only 59 participants believed food influenced their dreams, so small-number effects (unreliable results from too few data) are possible. And a survey can only reveal associations – it can't prove that cheese causes bad dreams.

Cheese keeps cropping up in nightmare stories, and people who struggle to digest dairy report the worst of it. Scientists still have to match meal diaries, gut clues and lab-monitored dreams to prove the link. In the meantime, try eating earlier or choosing low-lactose options. Your stomach – and your dreams – may calm down.

The Conversation

Timothy Hearn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. From Scrooge to science: how dairy might disrupt your sleep and dreams – https://theconversation.com/from-scrooge-to-science-how-dairy-might-disrupt-your-sleep-and-dreams-260328

Why the l-carnitine sport supplement is controversial

Source: The Conversation – UK – By Julia Haarhuis, PhD student – Food, Microbiomes and Health, Quadram Institute

Miljan Zivkovic/Shutterstock

Sport supplements are hard to get away from if you like to exercise regularly. Even if you’re not interested in them, there’s a good chance your gym will have posters extolling their virtues or your sporty friends will want to talk to you about them.

It can be hard to know what supplements to take as there is a lot of mixed information out there. L-carnitine is among the more controversial supplements. While there is evidence it supports muscle recovery and enhances exercise performance, research has also shown it can contribute to cardiovascular disease.

In a new study, my colleagues and I found it may be possible to counter the negative effects of l-carnitine by eating pomegranate with it.

First, it’s important to understand what l-carnitine is. Your body produces a small amount of l-carnitine naturally. This happens in the kidneys, liver and brain.

When l-carnitine was first identified in humans in 1952, it was thought to be a vitamin and it was referred to as vitamin BT. After years of research on this compound, l-carnitine is now considered a quasi-vitamin because for most people the human body can produce enough l-carnitine itself.


L-carnitine can be bought as a dietary supplement, but the nutrient is also added to energy drinks and some protein powders by manufacturers to try and enhance the value of their products. Manufacturers normally clearly state it on the product if it contains l-carnitine – it’s not something a company will try to hide.

Some foods naturally contain l-carnitine, such as meat and, in tiny amounts, dairy products. L-carnitine is not fed to livestock; it is naturally present in muscle tissue. It was first found in meat in 1905, which is why the name carnitine derives from the Latin word carnis, meaning "of the flesh".

Jar and scoop filled with powder against yellow background.
L-carnitine is sold in sport supplements.
9dream studio/Shutterstock

The harmful effects of l-carnitine supplements

L-carnitine itself is not thought to be intrinsically harmful. Your gut microbes are to blame for the risks associated with it.

Less than 20% of an l-carnitine supplement is absorbed by the human body. The unabsorbed l-carnitine travels down the gastrointestinal tract and reaches the colon. The colon is home to trillions of microbes, including bacteria, viruses and fungi.

When the remaining 80% of the l-carnitine supplement arrives in the colon, the microbes start absorbing the nutrient and they use it to produce something else: trimethylamine (TMA). TMA is a compound the human body can efficiently absorb, and that is where the potentially harmful effects of l-carnitine supplements arise.

Once the body absorbs TMA, it goes to the liver via the blood stream. The liver converts TMA to trimethylamine N-oxide (TMAO). Research has shown that high levels of TMAO in the blood can contribute to cardiovascular disease.

For example, a research group at the Cleveland Clinic in the US gave human participants a nutrient similar to l-carnitine that is also converted into TMA by gut microbes. The researchers found that the nutrient caused an increased risk of thrombosis (blood clots) in their participants.

L-carnitine itself is a beneficial nutrient. When it is produced by our bodies, which happens in the kidneys, brain and liver, it is not metabolised by the gut microbiota and isn't converted to TMAO. Your body can also absorb more l-carnitine from meat than from supplements, which makes dietary l-carnitine less harmful, since less of it ends up in the colon.

Dietary intervention can reduce harmful effects

In my team’s lab at the Quadram Institute in Norwich, England, we simulated what happens when the l-carnitine supplement reaches the microbes in the colon. We fed a culture of gut microbes with l-carnitine and measured the TMA that the microbes produced.

Then, we fed a culture of gut microbes with l-carnitine together with a pomegranate extract, which is rich in polyphenols. Polyphenols are plant compounds with antioxidant, antimicrobial, and anti-inflammatory properties that may help keep you healthy and protect you against diseases.

The main polyphenols in pomegranate belong to a group called ellagitannins, a type of polyphenol that can reach the colon almost entirely intact, where they can interact with the gut microbiota. When we measured the TMA that the gut microbes produced in the second experiment, we saw much less TMA.

Our experiments in the lab show that a polyphenol-rich pomegranate extract can reduce microbial TMA production and eliminate the potentially harmful effects of l-carnitine supplements.

Ellagitannins are also abundant in other fruits and nuts, such as raspberries and walnuts. So, if you take l-carnitine supplements, our research suggests that it may be a good idea to include ellagitannin-rich foods in your diet. Eating more fruit and nuts is good for your health, so including them in your diet will probably be beneficial anyway.

Our group is now moving the science outside of the lab. We are testing in human participants how effective the pomegranate extract is at reducing TMAO production from l-carnitine supplements. This study will tell us whether taking an l-carnitine supplement along with a pomegranate extract may be better than taking the supplement on its own.

The Conversation

Julia Haarhuis works at the Quadram Institute and receives funding from the Wellcome Trust.

ref. Why the l-carnitine sport supplement is controversial – https://theconversation.com/why-the-l-carnitine-sport-supplement-is-controversial-219520

Most plant-friendly fungi are a mystery to scientists

Source: The Conversation – UK – By Katie Field, Professor in Plant-Soil Processes, University of Sheffield

Fly agaric mushrooms partner with trees. Magnus Binnerstam/Shutterstock

If you walk through a forest and look down, you might think you’re stepping on dead leaves, twigs and soil. In reality, you’re walking over a vast underground patchwork of fungal filaments, supporting life above ground.

These are mycorrhizal fungi, which form partnerships with the roots of nearly all plants. Found everywhere from tropical rainforests to boreal forests and farmland, these underground fungi sustain life above ground, often without us realising they’re even there.

A recent academic review argues that up to 83% of ectomycorrhizal (ECM) fungi species, which form partnerships with trees, may be unknown to science.

Mycorrhizal fungi grow around root tips and form webs between root cells or penetrate root cells, then make structures inside them. They scavenge nutrients such as phosphorus and nitrogen from the soil and, in return, receive carbon from their host plants.

Traces of these unidentified fungi are often found in soil DNA. The researchers surveyed global DNA databases to see how many DNA traces that seemed to belong to ECM fungi matched to a species. Only 17% could. Scientists call these “dark taxa” – organisms that have been detected, but not formally described, named or studied.

Many of these fungi produce large fruiting bodies such as mushrooms and are foundational to forest ecosystems.

One example is the fly agaric (Amanita muscaria), which produces the iconic red-and-white-spotted toadstools of folklore and can partner with a range of host trees. It typically associates with birch, pine and spruce, especially in colder climates, helping trees survive in nutrient-poor soils.

Porcini fungi (for example, Boletus edulis), which produce mushrooms prized for their rich, nutty flavour, are ECM fungi too. They grow with pines, firs and oaks. The chanterelle, highly sought after by mushroom collectors, is often found near oaks, beech and conifers.

Chanterelles thrive in undisturbed, healthy forests. Their presence often signals a well-functioning forest ecosystem. They have a fruity, apricot-like scent that may attract insects to help spread spores.

Yellow mushrooms grow on forest floor.
Chanterelle mushrooms are highly sought after.
Nitr/Shutterstock

The new report shows how little we know about the world beneath our feet. This ignorance has important implications. Entire landscapes are being reshaped by deforestation and agriculture.

But reforestation efforts are happening without fully understanding how these changes affect the fungal life that underpins these ecosystems. For example, in the Amazon, deforestation for farming continues at an alarming pace with 3,800 square miles (equivalent to 1.8 million football fields) of tropical rainforest destroyed for beef production in 2018-19 alone.

Meanwhile, well-meaning carbon offset schemes often involve planting trees of a single species, potentially severing ancient relationships between native trees and their fungal partners. This is because the mycorrhizal fungi in these areas will have developed in partnership with the native plants for many years – and may not be compatible with the tree species being planted for these schemes.

Although not all trees have specific fungal partners, many ECM fungi will only form symbioses with certain trees. For example, species within the Suillus genus (which includes the sticky bun mushroom) are specific to certain species of pine.

Introducing non-native plantation species may inadvertently drive endemic fungi, including species not yet known to science, toward extinction. We may be growing forests that look green and vibrant, but are damaging the invisible systems that keep them alive.

The problem isn't limited to ECM fungi. Entire guilds (species groups that exploit resources in a similar way) of mycorrhizal fungi remain virtually unexplored.

These dark guilds are ecologically crucial, yet most of their members have never been named, cultured or studied.

Ericoid Mycorrhizal Fungi (ERM)

These fungi form symbioses with many ericaceous shrubs, including heather, cranberry and rhododendrons. They dominate in some of the world’s harshest landscapes, including the Arctic tundra, the boreal forest (also known as snow forest), bogs and mountains.

Research suggests ERM fungi not only help plants thrive in harsh environments but also drive some of the carbon accumulation in these environments, making them potentially part of an important carbon sink.

Despite their abundant coverage across some of the most carbon-rich soils on Earth, the ecology of ERM fungi remains somewhat mysterious. Only a small number have been formally identified. However, even the few known species suggest remarkable potential.

Their genomes contain vast repertoires of genes for breaking down organic matter. This is important because it suggests ERM fungi are not just symbionts living in close interaction with other species but also active decomposers, influencing both plant nutrition and soil carbon cycling. Their dual lifestyle may play a critical role in nutrient-poor ecosystems.

Mucoromycotina fine root endophytes (MFRE)

MFRE are another group of enigmatic fungi that form beneficial relationships with plants. Long mistaken for arbuscular mycorrhizal (AM) fungi until they were distinguished in 2017, MFRE are found across a range of ecosystems, including farmland and nutrient-poor soils, and often live alongside AM fungi.

MFRE appear to be important in helping plants access nitrogen from within the soil, while AM fungi are more associated with phosphorus uptake. Like ERM fungi, MFRE appear to also alternate between free-living and symbiotic lifestyles.

As researchers begin to uncover their roles, MFRE are emerging as important players in plant resilience and sustainable agriculture.

Dark septate endophytes (DSE)

These fungi frequently appear in plant roots. They are characterised by darkly pigmented, segmented fungal filaments, or hyphae, but their role is highly context-dependent.

Some DSEs appear to enhance host stress tolerance or nutrient uptake. Others may act as latent pathogens, potentially harming the host plant. Most DSEs remain unnamed and poorly understood.

Time is running out

Many of the ecosystems connected to these dark guilds of fungi are among the most vulnerable on the planet. The Arctic and alpine regions, which are strongholds for ERM fungi, DSEs and potentially MFRE, are warming at two to four times the global average.

Peatlands have been drained and converted for agriculture or development while heathlands are increasingly targeted for tree-planting initiatives meant to sequester carbon.

Planting fast-growing, non-native species in monocultures may improve short-term carbon metrics above ground, but it could come at the cost of soil health and belowground biodiversity. Many fungi are host-specific, co-evolving with native plants over millions of years.

Replacing those plants with non-native trees or allowing invasive plants to spread could lead to local extinctions of fungi we’ve never had the chance to study. Soil fungi also mediate processes from nutrient cycling to pathogen suppression to carbon sequestration.

We are changing landscapes faster than we can understand them and in doing so we may be unravelling critical ecological systems that took millennia to form.

The Conversation

Katie Field receives funding from the European Research Council, the Biotechnology and Biological Sciences Research Council and the Natural Environment Research Council.

Tom Parker receives funding from the Scottish Government's Rural and Environment Science and Analytical Services Division and the Natural Environment Research Council.

ref. Most plant-friendly fungi are a mystery to scientists – https://theconversation.com/most-plant-friendly-fungi-are-a-mystery-to-scientists-259705

What makes a good football coach? The reality behind the myths

Source: The Conversation – UK – By Alan McKay, Senior Research Assistant for the Centre for Football Research in Wales, University of South Wales

With Women’s Euro 2025 underway, attention is turning not just to the players hoping for glory, but to the head coaches tasked with leading them.

These include England’s Sarina Wiegman, who guided the Netherlands to Euro victory in 2017 and repeated the feat with England in 2022; Spain’s Montse Tomé, the reigning world champions’ first female head coach; and Rhian Wilkinson, who is preparing Wales for their first ever appearance at a major tournament.

The pressure is immense, but what actually makes a good football coach? My colleagues and I recently conducted a study on behalf of the Uefa Academy to better understand this topic.

There are plenty of myths. That the best coaches eat, sleep and breathe football 24/7. That they’re “natural leaders” who inspire through sheer charisma. That success demands constant self-sacrifice. But when coaches try to live up to these ideas, it can leave them feeling burnt out – physically and emotionally exhausted, disconnected from their personal lives and questioning their ability.

In reality, effective coaching is about much more than tactics or motivation. It’s about performance, not just on the pitch, but in the way coaches manage themselves, their staff and their players. A good coach must balance their responsibilities with time for rest and recovery. They must communicate clearly, stay calm under pressure and create an environment where everyone knows their role.


Sarina Wiegman has described her approach in just these terms: “We try to turn every stone to get as best prepared as we can be before we go into the tournament… to perform under the highest pressure.”

But coaches don’t arrive at this mindset by accident. It’s developed through experience and, importantly, through structured education.

One important finding was that the most effective coaches have a strong sense of who they are – including their values, their communication style, and their strengths and limitations. These are things which affect the players and staff with whom they work.

Even top coaches need support

This type of self-awareness is often shaped through formal coach education programmes, where participants work closely with a mentor. These mentors can offer honest feedback, challenge assumptions and help coaches develop a philosophy they can share with their team.

That process is essential at every level, whether it’s grassroots football or the international stage. Coaches who understand themselves and who can use their education are better able to adapt their approach to the context they’re working in. They can build trust, foster unity and know when to step back.

Gareth Southgate, former England men’s head coach, is a fantastic example of this. He has spoken about the importance of supporting the person first and the player second. He has discussed the value of empathy and empowering players to make decisions on and off the pitch.

Through this process, Southgate helped players focus on the “joy of playing for their country” rather than simply achieving results. This may have helped to relieve some of the inevitable pressure and expectations placed on the England squad by the media, fans and English Football Association to win tournaments.

After qualifying, a good coach will continue to seek out their mentor for advice on both professional and personal issues they may be experiencing in their role. Emma Hayes, head coach of the US women’s team, has credited her own mentor with helping her fine tune her leadership style and build team cohesion. Her ability to create a safe, supportive environment was central to Team USA’s gold medal win at the 2024 Paris Olympics.

Hayes’ methods demonstrate that coaching is not a destination but a lifelong process. It requires constant learning, reflection and adaptation. The best coaches don’t just chase trophies. They aim to build something lasting – a culture of trust, a resilient team and a space where people can thrive.

As Euro 2025 continues, it's worth keeping an eye not just on the scorelines, but on the sidelines. The real mark of a good coach isn't always found on the scoreboard. It's found in how a team plays, how they talk about each other and whether they're still smiling at the end.

The Conversation

Alan McKay received funding from the Union of European Football Associations (UEFA) to conduct the research mentioned in this article. Alan wishes to acknowledge Professor Brendan Cropley, who was instrumental in conducting this research.

ref. What makes a good football coach? The reality behind the myths – https://theconversation.com/what-makes-a-good-football-coach-the-reality-behind-the-myths-259947