Source: The Conversation – France in French (2) – By Guy Royal, Professor of chemistry, on secondment to the EDYTEM laboratory of Université Savoie Mont Blanc, Université Grenoble Alpes (UGA)
PFAS form a very large family of molecules, some of which are now recognised as "forever chemicals". While current regulations require measuring the concentrations of only certain PFAS, the number of these compounds that must be analysed is still quite limited and could be much larger, provided existing measurement methods are improved and new ones developed, notably by designing portable sensors.
Per- and polyfluoroalkyl substances, better known by the acronym PFAS (pronounced "pee-fass"), form a family of more than 12,000 synthetic molecules. These compounds contain very strong chemical bonds between carbon and fluorine atoms, which gives them remarkable properties.
In particular, PFAS can have non-stick, anti-friction and water-repellent properties, combined with exceptional chemical and thermal stability. For these reasons, PFAS have been used since the 1950s in a wide range of industrial products (plastics, resins, paints, firefighting foams…) and consumer goods (cosmetics, textiles, food packaging…).
Unfortunately, some PFAS are now recognised as toxic and harmful to human health and the environment. Their great chemical stability makes them hard to (bio)degrade, and some of them are now described as "forever chemicals".
It is therefore necessary to limit the use of PFAS and to address the environmental and health problems they cause. Efforts are under way worldwide to regulate the use of PFAS, to understand their effects on health and the environment, and also to replace them, to decontaminate polluted sites effectively, and to monitor and track them.
In this context, a key challenge is the ability to detect and quantify PFAS efficiently, whether in the environment, in drinking water and wastewater, or in biological media (fluids, organs…).
Unfortunately, the sheer number of per- and polyfluoroalkyl substances, the great diversity of their properties and the very low detection limits that must be reached make their monitoring and analysis extremely complicated!
Two examples of particularly closely monitored PFAS (PFOA and PFOS), among the ten thousand or so existing PFAS compounds. Guy Royal, Author provided
How are PFAS analysed?
Several methods for analysing PFAS currently exist, but most measurements are performed with a technique that goes by the sweet name of "liquid chromatography coupled with tandem mass spectrometry" (LC-MS/MS). It can separate, identify and quantify the different PFAS present in the initial sample.
This analytical technique combines the separating power of liquid chromatography with the analytical capabilities of mass spectrometry, which is highly sensitive and selective.
How liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) works. Guy Royal, Author provided
This technique, widely used in the pharmaceutical industry and by analytical and research laboratories, is extremely sensitive and powerful: it can simultaneously analyse a large number of molecules contained in complex samples, with very low detection limits.
However, it is expensive and difficult to implement, as it requires state-of-the-art equipment and expert users.
With this technique, you can only detect what you are looking for
In addition, a chromatography column suited to the molecules being analysed must be used. The instrument must also be calibrated: PFAS samples of known molecular composition and concentration must first be run so that the target molecules can be recognised and quantified during the analysis.
So you can only detect what you are looking for, which is why this is called "targeted analysis". Only a limited range of PFAS is detected this way (a few dozen are typically screened for), which can lead to underestimating the total amount of PFAS present in a sample.
Moreover, PFAS can be found in extremely varied matrices: water (drinking, natural, industrial and/or wastewater…), soils and sludge, but also biological media such as blood or organs. It is therefore often necessary to pre-treat the sample to make it compatible with the analysis.
This additional step significantly lengthens the time needed to obtain results and increases the cost of each analysis, which can run to several hundred euros. This gives a sense of just how complex this type of analysis is!
Finally, chromatographic measurement of PFAS is carried out exclusively in the laboratory. Samples must therefore be transported, which lengthens the time between sampling and the analytical result.
Will we soon be able to detect PFAS quickly and on site?
To date, there is no simple test for detecting PFAS rapidly and directly at the site of interest (a river, a wastewater outlet…). Nor is it possible to measure PFAS continuously and track their concentration over time.
To address this problem, research is under way worldwide to develop simple sensors enabling rapid, low-cost detection. The goal is to obtain, quickly and easily, a signal – usually electrical or optical – indicating the presence of PFAS in a sample.
It is in this context that the EDYTEM laboratory of Université Savoie Mont Blanc and the Grenoble-based company GRAPHEAL (a CNRS spin-off start-up arising from research carried out at the Institut Néel in Grenoble) are working together on an electronic sensor based on graphene.
Graphene, whose discoverers won the Nobel Prize in 2010, is a two-dimensional film of crystalline carbon just one atom thick. Stacked layers of it make up graphite. It has exceptional electrical properties: because the film is so extraordinarily thin, electrons are forced to travel along its surface and interact strongly with anything adsorbed onto the graphene.
A photo of the molecular sensors developed by Grapheal, with an illustration of their working principle: the presence of molecules between the source and the drain affects the electric current flowing through the device, which can be measured. Grapheal, Author provided
The planned device works like a transistor: a sheet of graphene is connected to two electrodes and coated with a molecular film capable of interacting selectively with one or more PFAS molecules present in the sample to be assayed. This molecular-scale interaction changes the voltage between the two electrodes. Since the amplitude of this change depends on the concentration of PFAS molecules in the sample, they can then be quantified.
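The transduction idea described above (an interaction shifts an electrical signal, and the size of the shift tracks concentration) can be sketched with a toy calibration curve. All the numbers below are invented for illustration; a real device needs a measured, and often non-linear, calibration.

```python
# Hypothetical illustration of turning a sensor signal shift into a concentration.
# The calibration points are invented and assumed linear for simplicity.
calibration = [(0, 0.0), (50, 1.2), (100, 2.4)]  # (concentration in ng/L, signal shift in mV)

def concentration_from_shift(shift_mv):
    # Slope of the assumed linear calibration: 2.4 mV / 100 ng/L = 0.024 mV per ng/L
    slope = calibration[-1][1] / calibration[-1][0]
    return shift_mv / slope

# A measured shift of 0.96 mV would correspond, under these assumptions, to 40 ng/L
print(round(concentration_from_shift(0.96)))  # → 40
```

The point of the sketch is only the final step of the measurement chain: once a calibration is established, reading off a concentration from a signal change is straightforward; the hard science lies in making the molecular film selective and the signal reproducible.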
Developing such a technique is a real scientific challenge, since it amounts to measuring the equivalent of a single drop of water in a volume equal to three Olympic swimming pools! A broad range of PFAS molecules and experimental conditions must also be explored, because PFAS can occur in very diverse samples, from drinking water to wastewater.
To date, these devices can detect several types of currently monitored PFAS, including PFAS with both long (more than 5 carbon atoms) and short fluorinated chains. Our detection limit currently stands at 40 nanograms per litre for PFOA, one of the PFAS most commonly found at trace levels in the environment.
Sample-preparation techniques that concentrate the PFAS in the sample could improve this threshold further.
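The "drop of water in three Olympic pools" image can be checked with quick arithmetic against the 40 ng/L figure. The drop and pool sizes below are assumed round values, not numbers from the study.

```python
# Order-of-magnitude check: one drop of water diluted in three Olympic pools,
# compared with the sensor's 40 ng/L detection limit for PFOA.
# Assumed round values: a drop is ~0.05 mL (~0.05 g of water);
# an Olympic pool holds ~2,500 m^3.
drop_mass_ng = 0.05 * 1e9           # 0.05 g expressed in nanograms
pool_volume_L = 2_500 * 1_000      # 2,500 m^3 = 2.5 million litres
total_volume_L = 3 * pool_volume_L

concentration_ng_per_L = drop_mass_ng / total_volume_L
print(f"{concentration_ng_per_L:.1f} ng/L")  # → 6.7 ng/L
```

Under these assumptions the dilution comes out at a few nanograms per litre, the same order of magnitude as the 40 ng/L detection limit, so the image is a fair picture of the measurement challenge.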
If successful, these sensors will enable rapid, inexpensive tests that can be used directly on site. Just as Covid self-tests complement PCR analyses, these graphene-based electronic sensors – like other rapid-analysis devices, such as sensors based on a colour change – will complement chromatographic methods. They will provide more preliminary results, enabling broader monitoring better suited to today's PFAS challenges.
Guy Royal, professor at Université Grenoble Alpes, is developing his PFAS research programme within the EDYTEM laboratory of Université Savoie Mont Blanc as part of a secondment.
Micheline Draye has received funding from the ANR, project ANR-23-LCV2-0008-01.
The Sculptor Galaxy imaged in great detail by the MUSE instrument on the VLT of the European Southern Observatory, in Chile. ESO/E. Congiu et al., CC BY
A new study has covered almost the entire Sculptor Galaxy, 11 million light years from Earth, at an unrivalled level of detail and in a very large number of colours. The data, which are freely accessible, advance our understanding of star formation.
An international collaboration of astronomers, of which I am a member, has just released one of the largest multicolour mosaics of an iconic galaxy in the nearby Universe: the Sculptor spiral galaxy, NGC 253.
In just a few nights of observations, we covered almost the entire apparent surface of this galaxy, 65,000 light years across, at an unrivalled level of detail, thanks to a unique instrument, MUSE, attached to the Very Large Telescope of the European Southern Observatory (ESO).
These images reveal both the small and the large scales of this galaxy, located 11 million light years from our own. In other words, you can zoom in and out at will, which opens the door to new discoveries and to a deeper understanding of how stars form.
One of the holy grails of modern astrophysics: understanding star formation
Here is what we already know. To form stars, interstellar gas is compressed and small clumps collapse: stars are born, live, and some end their lives in an explosion that scatters new atoms and molecules into the surrounding interstellar medium – providing part of the raw material for the next generation of stars.
The Sculptor Galaxy seen by the VLT: the existing stars in grey, overlaid with the stellar nurseries in pink. Source: ESO.
What we understand far less well is how the large structures of the galactic disc, such as spirals, filaments and bars, evolve over time and promote (or inhibit) this star-formation process.
Understanding these processes requires studying many scales at once.
First, spatial scales: the large structures themselves (spirals, filaments and bars span several thousand light years), the dense regions of gas known as "stellar nurseries" (which are "only" a few light years across)… and a global, coherent view of the host galaxy (out to a radius of tens of thousands of light years).
Moreover, these processes unfold over utterly different timescales: massive stars explode after a few million years, whereas dynamical structures such as spirals evolve over hundreds of millions of years.
Over the past fifteen years, numerous studies have mapped nearby galaxies using very powerful imagers on the ground and in space.
But the various players in these complex processes (for example interstellar gas, young or old stars, dwarf or massive stars, dust) each emit light in a specific way. Some, for instance, emit only certain colours, while others emit across a broad range of wavelengths.
Thus, only spectroscopy – which separates light into its different colours – can simultaneously extract information such as the composition of the stars, the abundances of the various atoms in the interstellar gas, the motions of the gas and stars, their temperature, and so on.
The breakthrough from ESO and its MUSE instrument
This is why this study, led by Enrico Congiu, an ESO astronomer, is so relevant and exciting for the scientific community.
By assembling more than 100 fields of view and some 9 million spectra obtained with the MUSE spectrograph on the VLT, Enrico Congiu and his team, which I am fortunate to be part of, were able for the first time to probe simultaneously the galaxy as a whole and its individual star-forming regions in their specific environments (in the spiral arms, within the bar or towards the centre), yielding robust measurements of their composition and dynamics.
The Sculptor Galaxy seen in different colours with the MUSE spectrograph: the bright patches correspond to abundant chemical elements, such as hydrogen, which emit at specific wavelengths. Source: ESO.
The team of astronomers took the opportunity to discover several hundred planetary nebulae – twenty times more than previously known. These data also provide clues to the galaxy's star-formation history. For example, they will make it possible to identify and characterise in detail nearly 2,500 star-forming regions, the largest spectroscopic database for a single galaxy (article in preparation).
But it is also a unique opportunity to test how the distance to this galaxy is measured.
Indeed, the relative numbers of bright and faint planetary nebulae within a galaxy are a powerful lever for determining its distance. The international team has shown that the planetary-nebula method is likely to be biased if the extinction caused by dust within the galaxy is ignored.
Freely exploiting the MUSE data to multiply the potential for discovery
These magnificent, calibrated and documented data on NGC 253 are the fruit of a major effort by the collaboration led by Enrico Congiu, who decided to make them public. Beyond this team's present and future studies, the worldwide astronomical community (and any amateur astronomer!) can now freely exploit the MUSE spectroscopic data cube.
This "open science" aspect is an essential component of modern science, allowing other teams to reproduce, test and extend the work, and to apply new approaches, both creative and scientifically rigorous, to probe this magnificent legacy of science and technology.
Eric Emsellem has received funding from the German Research Foundation (DFG), for example to employ students. He works for ESO (the European Southern Observatory) and is on secondment from the Observatoire de Lyon, which led the construction of the MUSE instrument. He is a member of the international PHANGS collaboration and a co-author of the study led by Enrico Congiu.
The global ecosystem of climate finance is complex, constantly changing and sometimes hard to understand. But understanding it is critical to demanding a green transition that’s just and fair. That’s why The Conversation has collaborated with climate finance experts to create this user-friendly guide, in partnership with Vogue Business. With definitions and short videos, we’ll add to this glossary as new terms emerge.
Blue bonds
Blue bonds are debt instruments designed to finance ocean-related conservation, like protecting coral reefs or sustainable fishing. They're modelled after green bonds but focus specifically on the health of marine ecosystems, a key pillar of climate stability.
By investing in blue bonds, governments and private investors can fund marine projects that deliver both environmental benefits and long-term financial returns. Seychelles issued the first blue bond in 2018. Now, more are emerging as ocean conservation becomes a greater priority for global sustainability efforts.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Carbon border adjustment mechanism
Did you know that imported steel could soon face a carbon tax at the EU border? That’s because the carbon border adjustment mechanism is about to shake up the way we trade, produce and price carbon.
The carbon border adjustment mechanism is a proposed EU policy to put a carbon price on imports like iron, cement, fertiliser, aluminium and electricity. If a product is made in a country with weaker climate policies, the importer must pay the difference between that country's carbon price and the EU's. The goal is to avoid "carbon leakage" (when companies relocate to avoid emissions rules) and to ensure fair competition on climate action.
But this mechanism is more than just a tariff tool. It’s a bold attempt to reshape global trade. Countries exporting to the EU may be pushed to adopt greener manufacturing or face higher tariffs.
The carbon border adjustment mechanism is controversial: some call it climate protectionism, others argue it could incentivise low-carbon innovation worldwide and be vital for achieving climate justice. Many developing nations worry it could penalise them unfairly unless there’s climate finance to support greener transitions.
The carbon border adjustment mechanism is still evolving, but it's already forcing companies, investors and governments to rethink emissions accounting, supply chains and competitiveness. It's a carbon price with global consequences.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Carbon budget
The Paris agreement aims to limit global warming to 1.5°C above pre-industrial levels. The carbon budget is the maximum amount of CO₂ emissions allowed if we want a 67% chance of staying within this limit. The Intergovernmental Panel on Climate Change (IPCC) estimates that the remaining carbon budget amounts to 400 billion tonnes of CO₂ from 2020 onwards.
Think of the carbon budget as a climate allowance. Once it has been spent, the risk of extreme weather or sea level rise increases sharply. If emissions continue unchecked, the budget will be exhausted within years, risking severe climate consequences. The IPCC sets the global carbon budget based on climate science, and governments use this framework to set national emission targets, climate policies and pathways to net zero emissions.
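A back-of-the-envelope calculation shows how quickly that allowance runs out. The annual-emissions figure below is an assumed round number (global CO₂ emissions are roughly 40 billion tonnes a year), not a figure taken from the glossary entry above.

```python
# Rough illustration of how quickly the remaining carbon budget is spent.
# Assumed figures: 400 Gt CO2 remaining from 2020 (IPCC estimate for a 67%
# chance of staying under 1.5°C) and a round ~40 Gt CO2 emitted globally per year.
remaining_budget_gt = 400
annual_emissions_gt = 40

years_left = remaining_budget_gt / annual_emissions_gt
print(f"Budget exhausted in about {years_left:.0f} years from 2020")  # → about 10 years
```

Under these assumptions the budget lasts only about a decade of unchecked emissions, which is why the entry above says it could be "exhausted within years".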
By Dongna Zhang, assistant professor in economics and finance, Northumbria University
Carbon credits
Carbon credits are like permits that allow companies to release a certain amount of carbon into the air. One credit usually equals one tonne of CO₂. These credits are issued by a government or another authorised body and can be bought and sold. Think of them as a budget allowance for pollution: they encourage cuts in carbon emissions each year to stay within global climate targets.
The aim is to put a price on carbon to encourage cuts in emissions. If a company reduces its emissions and has leftover credits, it can sell them to another company that is going over its limit. But there are issues. Some argue that carbon credit schemes allow polluters to pay their way out of real change, and not all credits are from trustworthy projects. Although carbon credits can play a role in addressing the climate crisis, they are not a solution on their own.
By Sankar Sivarajah, professor of circular economy, Kingston University London
Carbon credits explained.
Carbon offsetting
Carbon offsetting is a way for people or organisations to make up for the carbon emissions they are responsible for. For example, if you contribute to emissions by flying, driving or making goods, you can help balance that out by supporting projects that reduce emissions elsewhere. This might include planting trees (which absorb carbon dioxide) or building wind farms to produce renewable energy.
The idea is that your support helps cancel out the damage you are doing. For example, if your flight creates one tonne of carbon dioxide, you pay to support a project that removes the same amount.
While this sounds like a win-win, carbon offsetting is not perfect. Some argue that it lets people feel better without really changing their behaviour, a phenomenon sometimes referred to as greenwashing.
Not all projects are effective or well managed. For instance, some tree-planting initiatives might have taken place anyway, even without the offset funding, rendering your contribution inconsequential. Others might plant non-native trees in areas where they are unlikely to reach their potential for absorbing carbon emissions.
So, offsetting can help, but it is no magic fix. It works best alongside real efforts to reduce greenhouse gas emissions and encourage low-carbon lifestyles or supply chains.
By Sankar Sivarajah, professor of circular economy, Kingston University London
Carbon offsetting explained.
Carbon tax
A carbon tax is designed to reduce greenhouse gas emissions by placing a direct price on CO₂ and other greenhouse gases.
A carbon tax is grounded in the concept of the social cost of carbon. This is an estimate of the economic damage caused by emitting one tonne of CO₂, including climate-related health, infrastructure and ecosystem impacts.
A carbon tax is typically levied per tonne of CO₂ emitted. The tax can be applied either upstream (on fossil fuel producers) or downstream (on consumers or power generators). This makes carbon-intensive activities more expensive and gives nations, businesses and people an incentive to reduce their emissions, while untaxed renewable energy becomes more competitively priced and appealing.
A carbon tax was first introduced by Finland in 1990. Since then, more than 39 jurisdictions have implemented similar schemes. According to the World Bank, carbon pricing mechanisms (that's both carbon taxes and emissions trading systems) now cover about 24% of global emissions. The remaining 76% are not priced, mainly due to limited coverage of both sectors and geographical areas, plus persistent fossil fuel subsidies. Expanding coverage would require extending carbon pricing to sectors like agriculture and transport, phasing out fossil fuel subsidies and strengthening international governance.
What is carbon tax?
Sweden has one of the world’s highest carbon tax rates and has cut emissions by 33% since 1990 while maintaining economic growth. The policy worked because Sweden started early, applied the tax across many industries and maintained clear, consistent communication that kept the public on board.
Canada introduced a national carbon tax in 2019. In Canada, most of the revenue from carbon taxes is returned directly to households through annual rebates, making the scheme revenue-neutral for most families. However, despite its economic logic, inflation and rising fuel prices led to public discontent – especially as many citizens were unaware they were receiving rebates.
Carbon taxes face challenges including political resistance, fairness concerns and low public awareness. Their success depends on clear communication and visible reinvestment of revenues into climate or social goals. A 2025 study that surveyed 40,000 people in 20 countries found that support for carbon taxes increases significantly when revenues are used for environmental infrastructure, rather than returned through tax rebates.
By Meilan Yan, associate professor and senior lecturer in financial economics, Loughborough University
Climate resilience
Floods, wildfires, heatwaves and rising seas are pushing our cities, towns and neighbourhoods to their limits. But there’s a powerful idea that’s helping cities fight back: climate resilience.
Resilience refers to the ability of a system – such as a city, a community or even an ecosystem – to anticipate, prepare for, respond to and recover from climate-related shocks and stresses.
Sometimes people say resilience is about bouncing back. But it’s not just about surviving the next storm. It’s about adapting, evolving and thriving in a changing world.
Resilience means building smarter and better. It means designing homes that stay cool during heatwaves. Roads that don’t wash away in floods. Power grids that don’t fail when the weather turns extreme.
It’s also about people. A truly resilient city protects its most vulnerable. It ensures that everyone – regardless of income, age or background – can weather the storm.
And resilience isn’t just reactive. It’s about using science, local knowledge and innovation to reduce a risk before disaster strikes. From restoring wetlands to cool cities and absorb floods, to creating early warning systems for heatwaves, climate resilience is about weaving strength into the very fabric of our cities.
By Paul O’Hare, senior lecturer in geography and development, Manchester Metropolitan University
The meaning of climate resilience.
Climate risk disclosure
Climate risk disclosure refers to how companies report the risks they face from climate change, such as flood damage, supply chain disruptions or regulatory costs. It includes both physical risks (like storms) and transition risks (like changing laws or consumer preferences).
Mandatory disclosures, such as those proposed by the UK and EU, aim to make climate-related risks transparent to investors. Done well, these reports can shape capital flows toward more sustainable business models. Done poorly, they become greenwashing tools.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Emissions trading scheme
An emissions trading scheme is the primary market-based approach for regulating greenhouse gas emissions in many countries, including Australia, Canada, China and Mexico.
Part of a government's job is to decide how much of the economy's carbon emissions it wants to avoid in order to fight climate change. It must put a cap on carbon emissions that economic production is not allowed to surpass. Preferably, the polluters (the manufacturers and fossil fuel companies) should be the ones paying for the cost of climate mitigation.
Regulators could simply tell all firms how much they are allowed to emit over the next ten years or so. But giving every firm the same allowance across the board is not cost-efficient, because avoiding carbon emissions is much harder for some firms (such as steel producers) than for others (such as tax consultants). Since governments cannot know each firm's specific cost profile either, they cannot customise the allowances. Monitoring whether polluters actually abide by their assigned limits is also extremely costly.
An emissions trading scheme cleverly solves this dilemma using the cap-and-trade mechanism. Instead of assigning each polluter a fixed quota and risking inefficiencies, the government issues a large number of tradable permits – each worth, say, a tonne of CO₂-equivalent (CO₂e) – that sum up to the cap. Firms that can cut greenhouse gas emissions relatively cheaply can then trade their surplus permits to those who find it harder – at a price that makes both better off.
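The gains from trade that cap-and-trade creates can be sketched with two hypothetical firms. All the numbers here are invented purely for illustration.

```python
# Illustrative cap-and-trade sketch (made-up numbers, not a real market model).
# Two firms must each cut 10 t of CO2e. Firm A abates at 20 €/t, Firm B at 80 €/t.
cost_a, cost_b = 20, 80      # marginal abatement cost, €/t
required_cut = 10            # tonnes each firm must cut under a rigid quota

# Without trading: each firm abates its own 10 t.
no_trade = required_cut * cost_a + required_cut * cost_b   # 200 + 800 = 1000 €

# With trading: A abates all 20 t and sells its 10 surplus permits to B at 50 €/t.
price = 50
a_cost = 20 * cost_a - required_cut * price   # 400 € abatement - 500 € revenue = -100 €
b_cost = required_cut * price                 # 500 € for permits instead of 800 € abatement
print(no_trade, a_cost + b_cost)              # → 1000 400
```

The total cut is the same 20 t in both cases, but the overall cost falls from 1,000 € to 400 €, and each firm ends up better off than under its rigid 10 t quota: A turns a 200 € cost into a 100 € profit, and B pays 500 € instead of 800 €.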
By Mathias Weidinger, environmental economist, University of Oxford
Emissions trading schemes, explained by climate finance expert Mathias Weidinger.
Environmental, social and governance (ESG) investing
ESG investing stands for environmental, social and governance investing. In simple terms, these are a set of standards that investors use to screen a company’s potential investments.
ESG means choosing to invest in companies that are not only profitable but also responsible. Investors use ESG metrics to assess risks (such as climate liability, labour practices) and align portfolios with sustainability goals by looking at how a company affects our planet and treats its people and communities. While there isn’t one single global body governing ESG, various organisations, ratings agencies and governments all contribute to setting and evolving these metrics.
For example, investing in a company committed to renewable energy and fair labour practices might be considered “ESG aligned”. Supporters believe ESG helps identify risks and create long-term value. Critics argue it can be vague or used for greenwashing, where companies appear sustainable without real action. ESG works best when paired with transparency and clear data. A barrier is that standards vary, and it’s not always clear what counts as ESG.
Why do financial companies and institutions care? Issues like climate change and nature loss pose significant risks, affecting company values and the global economy.
However, gathering reliable ESG information can be difficult. Companies often self-report, and the data isn’t always standardised or up to date. Researchers – including my team at the University of Oxford – are using geospatial data, like satellite imagery and artificial intelligence, to develop global databases for high-impact industries, across all major sectors and geographies, and independently assess environmental and social risks and impacts.
For instance, we can analyse satellite images of a facility over time to monitor its emissions effect on nature and biodiversity, or assess deforestation linked to a company’s supply chain. This allows us to map supply chains, identify high-impact assets, and detect hidden risks and opportunities in key industries, providing an objective, real-time look at their environmental footprint.
The goal is for this to improve ESG ratings and provide clearer, more consistent insights for investors. This approach could help us overcome current data limitations to build a more sustainable financial future.
By Amani Maalouf, senior researcher in spatial finance, University of Oxford
Environmental, social and governance investing explained.
Financed emissions
Financed emissions are the greenhouse gas emissions linked to a bank’s or investor’s lending and investment portfolio, rather than their own operations. For example, a bank that funds a coal mine or invests in fossil fuels is indirectly responsible for the carbon those activities produce.
Measuring financed emissions helps reveal the real climate impact of financial institutions, not just their office energy use. It's a cornerstone of climate accountability in finance and is becoming essential under net zero pledges.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Green bonds
Green bonds are loans issued to fund environmentally beneficial projects, such as energy-efficient buildings or clean transportation. Investors choose them to support climate solutions while earning returns.
Green bonds are a major tool to finance the shift to a low-carbon economy by directing finance toward climate solutions. As climate costs rise, green bonds could help close the funding gap while ensuring transparency and accountability.
Green bond issuers are required to ensure funds are spent as promised. For instance, imagine a city wants to upgrade its public transportation by adding electric buses to reduce pollution. Instead of raising taxes or slashing other budgets, the city can issue green bonds to raise the necessary capital. Investors buy the bonds, the city gets the funding, and the environment benefits from cleaner air and fewer emissions.
The growing participation of government issuers has improved the transparency and reliability of these investments. The green bond market has grown rapidly in recent years. According to the Bank for International Settlements, the green bond market reached US$2.9 trillion (£2.1 trillion) in 2024 – nearly six times larger than in 2018. At the same time, annual issuance (the total value of green bonds issued in a year) hit US$700 billion, highlighting the increasing role of green finance in tackling climate change.
By Dongna Zhang, assistant professor in economics and finance, Northumbria University
Just transition
Just transition is the process of moving to a low-carbon society that is environmentally sustainable and socially inclusive. In a broad sense, a just transition means focusing on creating a more fair and equal society.
Just transition has existed as a concept since the 1970s. It was originally applied to the green energy transition, protecting workers in the fossil fuel industry as we move towards more sustainable alternatives.
These days, the concept is hard to define because so many overlapping issues of justice are hidden within it. Even at the level of UN climate negotiations, global leaders struggle to agree on what a just transition means.
The big battle is between developed countries, who want a very restrictive definition around jobs and skills, and developing countries, who are looking for a much more holistic approach that considers wider system change and includes considerations around human rights, Indigenous people and creating an overall fairer global society.
A just transition is essentially about imagining a future where we have moved beyond fossil fuels and society works better for everyone – but that can look very different in a European city compared to a rural setting in south-east Asia.
For example, in a British city it might mean fewer cars and better public transport. In a rural setting, it might mean new ways of growing crops that are more sustainable, and building homes that are heatwave resistant.
By Alix Dietzel, climate justice and climate policy expert, University of Bristol
The meaning of just transition.
Loss and damage
A global loss and damage fund was agreed by nations at the UN climate summit (Cop27) in 2022. This means that the rich countries of the world put money into a fund that the least developed countries can then call upon when they have a climate emergency.
At the moment, the loss and damage fund is made up of relatively small pots of money. Much more will be needed to provide relief to those who need it most now and in the future.
By Mark Maslin, professor of earth system science, UCL
Mark Maslin explains loss and damage.
Mitigation v adaptation
Mitigation means cutting greenhouse gas emissions to slow climate change. Adaptation means adjusting to its effects, like building sea walls or growing heat-resistant crops. Both are essential: mitigation tackles the cause, while adaptation tackles the symptoms.
Globally, most funding goes to mitigation, but vulnerable communities often need adaptation support most. Balancing the two is a major challenge in climate policy, especially for developing countries facing immediate climate threats.
By Narmin Nahidi, assistant professor in finance at the University of Exeter
Nationally determined contributions
Nationally determined contributions (NDCs) are at the heart of the Paris agreement, the global effort to collectively combat climate change. NDCs are individual climate action plans created by each country. These targets and strategies outline how a country will reduce its greenhouse gas emissions and adapt to climate change.
Each nation sets its own goals based on its own circumstances and capabilities – there’s no standard NDC. These plans should be updated every five years and countries are encouraged to gradually increase their climate ambitions over time.
The aim is for NDCs to drive real action by guiding policies, attracting investment and inspiring innovation in clean technologies. But current NDCs fall short of the Paris agreement goals and many countries struggle to turn their plans into a reality. NDCs also vary widely in scope and detail so it’s hard to compare efforts across the board. Stronger international collaboration and greater accountability will be crucial.
By Doug Specht, reader in cultural geography and communication, University of Westminster
Fashion depends on water, soil and biodiversity – all natural capital. And forward-thinking designers are now asking: how do we create rather than deplete, how do we restore rather than extract?
Natural capital is the value assigned to the stock of forests, soils, oceans and even minerals such as lithium. It sustains every part of our economy. It’s the bees that pollinate our crops. It’s the wetlands that filter our water and it’s the trees that store carbon and cool our cities.
If we fail to value nature properly, we risk losing it. But if we succeed, we unlock a future that is not only sustainable but also truly regenerative.
My team at the University of Oxford is developing tools to integrate nature into national balance sheets, advising governments on biodiversity, and we’re helping industries from fashion to finance embed nature into their decision making.
Natural capital, explained by a climate finance expert.
By Mette Morsing, professor of business sustainability and director of the Smith School of Enterprise and the Environment, University of Oxford
Net zero
Reaching net zero means reducing the amount of additional greenhouse gas emissions that accumulate in the atmosphere to zero. This concept was popularised by the Paris agreement, a landmark deal that was agreed at the UN climate summit (Cop21) in 2015 to limit the impact of greenhouse gas emissions.
There are some emissions – from farming and aviation, for example – that will be very difficult, if not impossible, to cut to absolute zero. Hence, the “net”. This allows people, businesses and countries to find ways to suck greenhouse gas emissions out of the atmosphere, effectively cancelling out emissions while trying to reduce them. This can include reforestation, rewilding, direct air capture and carbon capture and storage. The goal is to reach net zero: the point at which no extra greenhouse gases accumulate in Earth’s atmosphere.
By Mark Maslin, professor of earth system science, UCL
Mark Maslin explains net zero.
For more expert explainer videos, visit The Conversation’s quick climate dictionary playlist here on YouTube.
Mark Maslin is Pro-Vice Provost of the UCL Climate Crisis Grand Challenge and Founding Director of the UCL Centre for Sustainable Aviation. He was co-director of the London NERC Doctoral Training Partnership and is a member of the Climate Crisis Advisory Group. He is an advisor to Sheep Included Ltd, Lansons, NetZeroNow and has advised the UK Parliament. He has received grant funding from the NERC, EPSRC, ESRC, DFG, Royal Society, DIFD, BEIS, DECC, FCO, Innovate UK, Carbon Trust, UK Space Agency, European Space Agency, Research England, Wellcome Trust, Leverhulme Trust, CIFF, Sprint2020, and British Council. He has received funding from the BBC, Lancet, Laithwaites, Seventh Generation, Channel 4, JLT Re, WWF, Hermes, CAFOD, HP and Royal Institute of Chartered Surveyors.
Amani Maalouf receives funding from IKEA Foundation and UK Research and Innovation (NE/V017756/1).
Narmin Nahidi is affiliated with several academic associations, including the Financial Management Association (FMA), British Accounting and Finance Association (BAFA), American Finance Association (AFA), and the Chartered Association of Business Schools (CMBE). These affiliations do not influence the content of this article.
Paul O’Hare receives funding from the UK’s Natural Environment Research Council (NERC). Award reference NE/V010174/1.
Alix Dietzel, Dongna Zhang, Doug Specht, Mathias Weidinger, Meilan Yan, and Sankar Sivarajah do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
There are more refugees in the world today than at any other point in history. The United Nations estimates that there are now more than 120 million people forcibly displaced from their homes. That is one in every 69 people on Earth. Some 73% of this population is hosted in lower or middle-income countries.
From the legacies of European colonialism to global inequality, drone warfare and climate instability, politicians have failed to address the causes driving this mass displacement. Instead, far-right parties exploit the crisis by inflaming cultures of hatred and hostility towards migrants, particularly in high-income western countries.
This is exacerbated by visual media, which makes refugees an easy target by denying them the means of telling their own stories on their own terms. Pictures of migrants on boats or climbing over border walls are everywhere in tabloid newspapers and on social media. But these images are rarely accompanied by any detailed account of the brutal experiences that force people into these situations.
Many different kinds of visual storytelling live under the umbrella of refugee comics. They include short strips and stories, such as A Perilous Journey (2016) with testimonies from people fleeing the civil war in Syria, and Cabramatta (2019), about growing up as a Vietnamese migrant in a Sydney suburb. They also include codex-bound graphic novels, such as The Best We Could Do by Thi Bui (2017), and interactive web-comics such as Exodus by Jasper Rietman (2018).
They include documentaries made by journalists about the specific experiences of individual refugees. They also include fiction by artists who combine elements of several refugee testimonies into representative stories. Additionally, there are both fictional and non-fictional artworks made by migrants and refugees themselves.
Refugee comics address different forced mass displacements over the 20th and 21st centuries. These include the 1948 Nakba in Palestine, the 1970s flight of refugees from Vietnam and the 2010s displacement of people from Syria and other countries across sub-Saharan Africa and the Middle East.
These refugee comics challenge anti-migrant images in at least three ways. First, they often integrate the direct testimonies of refugees. This is enhanced by the combination of words and pictures that comprise the comics page, which allows refugees to frame the way we see and respond to images of displaced people.
For example, in The Unwanted by Joe Sacco (2012), familiar images of migrants crossing the Mediterranean on small boats are narrated by a refugee called Jon. Jon’s testimony turns our attention to the fears and desires that drive people to attempt dangerous sea crossings.
A second way comics challenge anti-migrant images is by allowing refugees to tell their stories without disclosing their identities. Because comics are drawn by hand and use abstract icons rather than photographs, refugees can tell their stories while avoiding unwanted scrutiny and maintaining personal privacy. This reintroduces refugee agency into a visual culture that often seeks to reduce migrants to voiceless victims or security threats.
For example, in Escaping Wars and Waves: Encounters with Syrian Refugees (2018) German comics journalist Olivier Kugler dedicates two pages to a man he calls “The Afghan” because he didn’t want his name or identity revealed. Kugler presents this man’s testimony of failed attempts to get to the UK, but he never draws his face or refers to him by name.
The third way comics challenge anti-migrant images is by shifting our attention from refugees themselves to the hostile environments and border infrastructures that they are forced to travel through and inhabit. Refugee researchers describe this different way of seeing as a “places and spaces, not faces” approach.
For instance, in Undocumented: The Architecture of Migrant Detention (2017), Tings Chak walks her readers through migrant detention centres from the perspective of those who are being processed and detained.
Drawing displacement
This emphasis on place and space is built into the structure of our own book, Graphic Refuge. We begin by focusing on graphic stories about ocean crossings, particularly on the Mediterranean sea. We then turn to comics concerned with the experience of refugee camps, and we also ask how interactive online comics bring viewers into virtual refugee spaces in a variety of ways.
It is the obliteration of homes that forces people to become refugees in the first place. Later in the book, we explore how illustrated stories document the destruction of cityscapes across Syria and also in Gaza. Finally, we turn to graphic autobiographies by second-generation refugees, those who have grown up in places such as the US or Australia, but who must still negotiate the trauma of their parents’ displacement.
Where most previous studies of refugee comics have focused on trauma and empathy, in Graphic Refuge we take a different approach. We set out to show how refugee comics represent migrant agency and desire, and how we are all implicated in the histories and systems that have created the very idea of the modern refugee.
As critical refugee scholar Vinh Nguyen writes in our book’s foreword, while it is difficult to truly know what refugee lives are like, those of us who enjoy the privileges of citizenship can at least read these comics to better understand “what we – we who can sleep under warm covers at night – are capable of”.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Steven Spielberg’s original Jurassic Park film (1993) instilled awe and trepidation in his characters and audience alike. As his protagonists wrestled with the unintended consequences and ethical dilemmas of reanimating extinct apex predators, viewers marvelled at the novel use of CGI. At a keystroke it seemed to consign the hand-crafted stop-motion wonders of dinosaur films past to the archive.
Alongside pulse-pounding action set pieces delivered with trademark Spielberg panache, that first film flamboyantly inaugurated a new era in fantasy effects. And it solicited delight and wonder from its audience. On opening day in New York the dinosaurs’ first appearance prompted a spontaneous ovation: I was there and clapped too.
Thirty-two years, six Jurassic iterations and countless monstrous digital apparitions later, that initial wow factor is a distant memory. By Jurassic World: Rebirth (set nearly 35 years after the original film) dinosaurs are treated by their human prey as barely more than inconvenient obstacles. They’re dangerous, of course, but certainly not wondrous.
Palaeontologist Dr Henry Loomis’s (Jonathan Bailey) delight in coming face-to-face with his objects of study is a pale echo of the giddy euphoria that overtook Sam Neill and Laura Dern’s characters all those years ago.
In fact, early in the film we’re told that the public have since lost all interest in dinosaurs. Wildlife parks and museum displays are closing and the animals themselves have mostly died off outside their quarantined tropical habitat.
As this information has little bearing on the plot, it’s hard not to sense some ironic commentary from screenwriter David Koepp (returning to the franchise for the first time since 1997) on the exhaustion of the Jurassic Park model. Always incipiently reflexive – as a blockbuster set in a theme park – by this stage in the game, the franchise machinery is inescapably visible.
Almost as ironic is a plot line promoting the open-source sharing of intellectual property for the benefit of the whole world rather than exploitative corporations. I doubt NBCUniversal-Comcast would agree.
The Jurassic World Rebirth trailer.
The Jurassic franchise
The Jurassic Park format is among the most unforgivingly rigid of any current film franchise.
Each instalment (bar to some extent the last, the convoluted 2022 Jurassic World: Dominion, whose characters and story the new release completely ignores) places humans in perilous proximity to genetically rejuvenated sauropods. And generally does so in a remote, photogenic tropical location with minimal contact with the outside world. (Will the franchise ever run out of uncharted Caribbean islands where demented bio-engineers have wreaked evolutionary havoc?)
The human characters in this new film are the usual pick-and-mix of daredevil adventurers, amoral corporate types and idealistic palaeontologists. And there are the mandatory school-age children too – important to keep the interest of younger viewers. The real stars, of course, are the primeval leviathans who grow larger and more fearsome – though not more interesting – with each new episode of the franchise.
How this human-dino jeopardy comes about tends not to matter very much. Jurassic World: Rebirth produces one of the least interesting MacGuffins in movie history (a MacGuffin being something that drives the plot, which the characters care about but the audience does not). Blood drawn from each of the three largest dinosaur species on the aforesaid remote tropical island will produce a serum to cure human heart disease (dinosaur hearts are huge, you see, so … never mind).
This feeble contrivance suffices for sneery Big Pharma suit Martin (Rupert Friend) to hire freebooters Zora (Scarlett Johansson) and Duncan (Mahershala Ali) for his expedition. Along the way they encounter a marooned family (dad, two teens, one winsome but plucky grade-schooler) who subsequently have their own largely self-contained adventures before reuniting for the big climax.
Franchise filmmaking is generally an auteur-free zone. Welsh blockbuster specialist Gareth Edwards is no Spielberg (though he pays homage at several points, notably in a waterborne first act studded with Jaws references). But he handles the action with unremarkable competence.
In truth, Jurassic World: Rebirth suggests that the intellectual property so expensively vested in the franchise would benefit from some genetic modification.
Barry Langford does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
South Africans want to shop more sustainably, according to research published in the journal Sustainable Development. But most can’t tell which products are environmentally friendly.
Some food manufacturers have introduced eco labels – a certification symbol placed on product packaging. This indicates the product meets specific environmental standards set by a third party organisation.
These labels are meant to signal to consumers that a product has been produced in a way that limits harm to the environment. But our recent study with 108 South African consumers showed low recognition of eco labels, widespread confusion, and a need for clearer guidance.
The results show that most South African shoppers are unfamiliar with these labels or unable to differentiate between real and fictional ones.
In the European Union eco labels like the EU Energy Label are easily understood and highly visible. They are also usually supported by government awareness campaigns. Other examples of labelling systems that work well include those of Germany and Japan.
These countries show that long term institutional support, mandatory labelling in key sectors, and consistent public messaging can greatly improve eco label recognition.
We concluded from our research that South Africa lacks that national visibility and public education, leaving even motivated consumers unsure of what labels to trust. Based on our findings we recommend steps businesses, government and nonprofits can take to ensure that eco labels are clear, visible and understood.
Eco labelling at its best
The EU Energy Label is used on appliances such as fridges, washing machines and light bulbs to indicate their energy efficiency on a scale from A (most efficient) to G (least efficient).
In countries like Germany and Japan, eco labels are government backed as well as being integrated into school curricula, public service announcements and shopping platforms.
Germany’s Blue Angel label, which states “protects the environment”, has been in use since the 1970s. It appears on over 12,000 products and services, including paper goods, cleaning products, paints and electronics, that meet strict environmental criteria. It is supported by ongoing public education campaigns.
In Japan, the Eco Mark appears on products with minimal environmental impact, such as stationery, detergents, packaging and appliances. Many retailers display explanations next to these products to help consumers understand the label.
South Africans struggle to identify eco labels
We conducted a structured online survey of 108 South African consumers. Participants were asked about their environmental awareness and their ability to recognise both real and fictional eco labels across ten images. According to the global directory of eco labels and environmental certification schemes, there are around 50 eco labels in South Africa.
The EU Energy Label was the most recognised (87%).
The Afrisco Certified Organic label, which is a legitimate South African label, was the least recognised, identified by just 22% of respondents.
Fictional labels were mistakenly identified as real by many participants, revealing widespread confusion.
Only three of the ten labels were recognised by at least half the participants, suggesting a general lack of eco label awareness. These were the Energy Star label, the EU Energy Label and the Forest Stewardship Council label.
Age and employment status were significantly related to environmental awareness. Older and employed individuals showed higher levels of awareness.
These findings suggest that consumers are not opposed to eco labels; they simply lack the knowledge and confidence to use them effectively.
Eco labels have the potential to build brand trust, drive green purchasing behaviour, and support national sustainability goals. But they only work if consumers recognise and trust them.
In South Africa, inconsistent use, small label size, and a lack of consumer education are holding eco labels back from achieving their purpose.
What businesses can do
Based on our findings, we recommend the following:
Use recognised and credible labels: Third-party certified labels are more trustworthy and reliable.
Improve label visibility: The EU Energy Label, the most recognised label in our study, was also the most prominent. Small, cluttered logos go unnoticed.
Educate your market: Explain what eco labels mean through packaging, marketing, and digital platforms.
Partner with government and NGOs: Awareness campaigns at national and community levels can help standardise eco label understanding.
Tailor communication efforts: Awareness efforts should consider age and employment demographics, as these affect levels of environmental engagement.
The way forward
South Africans are willing to support environmentally responsible products, but they need help identifying them.
Businesses, government and nonprofits all have a role to play in making eco labels clearer, more visible, and more trustworthy.
Eco labels must become more than symbols. They should be tools for transparency and trust, and a gateway to more sustainable shopping.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Motorcycle-taxis are one of the fastest and most convenient ways to get around Uganda’s congested capital, Kampala. But they are also the most dangerous. Though they account for one-third of public transport trips taking place within the city, police reports suggest motorcycles were involved in 80% of all road-crash deaths registered in Kampala in 2023.
Promising to solve the safety problem while also improving the livelihoods of moto-taxi workers, digital ride-hail platforms emerged a decade ago on the city’s streets. It is no coincidence that Uganda’s ride-hailing pioneer and long-time market leader goes by the name of SafeBoda.
Conceived in 2014 as a “market-based approach to road safety”, the idea is to give riders a financial incentive to drive safely by making digital moto-taxi work pay better. SafeBoda claimed at the time that motorcyclists who signed up with it would increase their incomes by up to 50% relative to the traditional mode of operation, in which riders park at strategic locations called “stages” and wait for passengers.
In the years since, the efforts of SafeBoda and its ride-hail competitors to bring safety to the sector have largely been deemed a success. One study carried out in 2017 found that digital riders were more likely to wear a helmet and less likely to drive towards oncoming traffic. Early press coverage was particularly glowing, while recent academic studies continue to cite the Kampala case as evidence that ride-hailing platforms may hold the key to making African moto-taxi sectors a safer place to work and travel.
Is it all as clear-cut as this? In a new paper based on PhD research, I suggest not. Because at its core the ride-hail model – in which riders are classified as independent contractors who do poorly paid “gig work” rather than as wage-earning employees – undermines its own safety ambitions.
Speed traps
In my study of Kampala’s vast moto-taxi industry – estimated to employ hundreds of thousands of people – I draw on 112 in-depth interviews and a survey of 370 moto-taxi riders to examine how livelihoods and working conditions have been affected by the arrival of the platforms.
To date, there has been only limited critical engagement with how this change has played out over the past decade. I wanted to get beneath the big corporate claims and alluring platform promises to understand how riders themselves had experienced the digital “transformation” of their industry, several years after it first began.
One of the things I found was that, from a safety perspective, the ride-hail model represents a paradox. We can think of it as a kind of “speed trap”.
On one hand, ride-hail platforms try to moderate moto-taxi speeds and behaviours through managerial techniques. They make helmet use compulsory. They put riders through road safety training before letting them out onto the streets. And they enforce a professional “code of conduct” for riders.
In some cases, companies also deploy “field agents” to major road intersections around the city. Their task is to monitor the behaviour of riders in company uniform and, should they be spotted breaking the rules, discipline them.
On the other hand, however, the underlying economic structure of digital ride-hailing pulls transport workers in the opposite direction by systematically depressing trip fares and rewarding speed.
Under the “gig economy” model used by Uganda’s ride-hail platforms, the livelihood promise hangs not in the offer of a guaranteed wage but in the possibility of higher earnings. Crucially, it is a promise that only materialises if riders are able to reach and maintain a faster, harder work-rate throughout the day – completing enough jobs that pay “little money”, as one rider put it, to make the gig-work deal come good. Or, as summed up by another interviewee:
We are like stakeholders, I can say that. No basic salary, just commission. So it depends on your speed.
And yet, it is precisely these factors that routinely lead to road traffic accidents. Extensive research from across east Africa has shown that motorcycle crashes are strongly associated with financial pressure and the practices that lead directly from this, such as speeding, working long hours and performing high-risk manoeuvres. All are driven by the need to break even each day in a hyper-competitive informal labour market, with riders compelled to go fast by the raw economics of their work.
Deepening the pressure
Ride-hail platforms may not be the reason these circumstances exist in the first place. But the point is that they do not mark a departure from them.
If anything, my research suggests they may be making things worse. According to the survey data, riders working through the apps make on average 12% higher gross earnings each week relative to their analogue counterparts. This is because the online world gets them more jobs.
But to stay connected to that world they must shoulder higher operating costs: mobile data (to remain logged on), fuel (to perform more trips), the use of helmets and uniforms (which remain company property), and commissions extracted by the platform companies (as much as 15%-20% per trip).
As soon as these extras are factored in, the difference completely disappears. The digital rider works faster and harder – but for no extra reward.
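The arithmetic behind this disappearing advantage can be sketched in a few lines. Only the 12% gross uplift and the 15%-20% commission range come from the research above; every other figure here (the baseline gross, the weekly cost of data, fuel and gear) is a made-up assumption purely for illustration:

```python
# Hypothetical back-of-the-envelope illustration: a digital rider's 12% gross
# uplift can vanish once platform-related costs are deducted. All monetary
# figures are arbitrary assumed units, not survey data.

def net_weekly(gross, commission_rate=0.0, extra_costs=0.0):
    """Net weekly earnings after platform commission and operating extras."""
    return gross * (1 - commission_rate) - extra_costs

analogue_gross = 100.0                 # assumed baseline weekly gross
digital_gross = analogue_gross * 1.12  # 12% higher gross, as in the survey

analogue_net = net_weekly(analogue_gross)  # no commission, no platform extras
# Assumed extras: 15% commission plus 2 units/week for data, fuel and gear
digital_net = net_weekly(digital_gross, commission_rate=0.15, extra_costs=2.0)

print(f"analogue net: {analogue_net:.1f}")  # → 100.0
print(f"digital net:  {digital_net:.1f}")   # → 93.2: faster work, no extra reward
```

Under these assumed costs the digital rider actually nets slightly less than the analogue one, which is the "speed trap" in miniature: the only way to restore the promised gain is to work faster and complete more trips.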
But it is important to remember that these are private enterprises with a clear bottom line: to one day turn a profit. As recent reports and my own thesis show, efforts to reach that point often alienate and ultimately repel the workers on whom these platforms depend – and whose livelihoods and safety standards they claim to be transforming.
A recent investment evaluation by one of SafeBoda’s first funders perhaps puts it best: it is time to reframe ride-hailing as a “risky vehicle” for safety reform in African cities, rather than a clear road to success.
Rich received funding for this research from the UK’s Economic and Social Research Council (ESRC).
Hearing improvements were both rapid and significant after patients received the gene therapy we developed. Nina Lishchuk/Shutterstock
Up to three in every 1,000 newborns have hearing loss in one or both ears. While cochlear implants offer remarkable hope for these children, they require invasive surgery. These implants also cannot fully replicate the nuance of natural hearing.
But recent research my colleagues and I conducted has shown that a form of gene therapy can successfully restore hearing in toddlers and young adults born with congenital deafness.
Our research focused specifically on toddlers and young adults born with OTOF-related deafness. This condition is caused by mutations in the OTOF gene that produces the otoferlin protein – a protein critical for hearing.
The protein transmits auditory signals from the inner ear to the brain. When this gene is mutated, that transmission breaks down leading to profound hearing loss from birth.
Unlike other types of genetic deafness, people with OTOF mutations have healthy hearing structures in their inner ear – the problem is simply that one crucial gene isn’t working properly. This makes it an ideal candidate for gene therapy: if you can fix the faulty gene, the existing healthy structures should be able to restore hearing.
In our study, we used a modified virus as a delivery system to carry a working copy of the OTOF gene directly into the inner ear’s hearing cells. The virus acts like a molecular courier, delivering the genetic fix exactly where it’s needed.
The modified viruses do this by first attaching themselves to the hair cell’s surface, then convincing the cell to swallow them whole. Once inside, they hitch a ride on the cell’s natural transport system all the way to its control centre (the nucleus). There, they finally release the genetic instructions for making otoferlin.
Our team had previously conducted studies in primates and young children (five- and eight-year-olds) which confirmed the virus therapy was safe. We were also able to illustrate the therapy’s potential to restore hearing – sometimes to near-normal levels.
But key questions had remained about whether the therapy could work in older patients – and what age is optimal for patients to receive the treatment.
To answer these questions, we expanded our clinical trial across five hospitals, enrolling ten participants aged one to 24 years. All were diagnosed with OTOF-related deafness. The virus therapy was injected into the inner ears of each participant.
We closely monitored safety during the 12 months of the study through ear examinations and blood tests. Hearing improvements were measured using both objective brainstem response tests and behavioural hearing assessments.
In the brainstem response tests, patients heard rapid clicking sounds or short beeps of different pitches while sensors measured the brain’s automatic electrical response. In another test, patients heard constant, steady tones at different pitches while a computer analysed brainwaves to see if they automatically followed the rhythm of these sounds.
The therapy used a synthetic version of a virus to deliver a functional gene to the inner ear. Kateryna Kon/Shutterstock
For the behavioural hearing assessment, patients wore headphones and listened to faint beeps at different pitches. They pressed a button or raised their hand each time they heard a beep – no matter how faint.
Hearing improvements were both rapid and significant – especially in younger participants. Within the first month of treatment, the average total hearing improvement reached 62% on the objective brainstem response tests and 78% on the behavioural hearing assessments. Two participants achieved near-normal speech perception. The parent of one seven-year-old participant said her child could hear sounds just three days after treatment.
Over the 12-month study period, ten patients experienced very mild to moderate side-effects. The most common adverse effect was a decrease in white blood cells. Crucially, no serious adverse events were observed. This confirmed the favourable safety profile of this virus-based gene therapy.
Treating genetic deafness
This is the first time such results have been achieved in both adolescent and adult patients with OTOF-related deafness.
The findings also reveal important insights into the ideal window for treatment, with children between the ages of five and eight showing the most pronounced benefit.
While younger children and older participants also showed improvement, their recovery was less dramatic. The weaker response in the youngest children is surprising. Although preserved inner-ear integrity and function at early ages should theoretically predict a better response to the gene therapy, these findings suggest the brain’s ability to process newly restored sounds may vary at different ages. The reasons for this are not yet understood.
This trial is a milestone. By bridging the gap between animal and human studies and diverse patients of different ages, we’re entering a new era in the treatment of genetic deafness. Although questions still remain about how long the effects of this therapy last, as gene therapy continues to advance, the possibility of curing – not just managing – genetic hearing loss is becoming a reality.
OTOF-related deafness is just the beginning. We, along with other research teams, are working on developing therapies that target other, more common genes that are linked to hearing loss. These are more complex to treat, but animal studies have yielded promising results. We’re optimistic that in the future, gene therapy will be available for many different types of genetic deafness.
Maoli Duan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Lillian Hingley, Postdoctoral Researcher in English Literature, University of Oxford
With her latest album, Virgin, Lorde is stretching the concept of the virgin beyond the common definition. Some may consider the album’s title and its cover art – an X-ray of Lorde’s pelvis showing an IUD – to be contradictory.
But while Lorde could still be using contraception for purposes beyond birth control, its presence shows that the album doesn’t shy away from discussions of sexual activities and the risk of pregnancy (two themes that are clearly discussed in the track Clearblue).
As she also shows with her approach to gender in the album’s opening song, Hammer (“Some days, I’m a woman, some days, I’m a man”), Lorde is testing and muddying common dualisms.
The scientific perspective offered by the album art forces the viewer to look through Lorde’s body, but we are also looking beyond her reproductive organs. Certainly, Lorde sometimes conceptualises virginity as something that can only be given once, as she explains on David.
In Hammer, her quip “don’t know if it’s love or if it’s ovulation” is a comedic musing on whether an experience is profoundly transcendental or just the product of hormones. But what strikes me is the fact that her concepts and themes are not static or singular.
This album is exploring the idea of being made, or even remade, through experience. In If She Could See Me Now, Lorde recounts how painful moments “made me a woman”.
Like French philosopher Simone de Beauvoir’s phrase “one is not born, but rather becomes, a woman”, Lorde is exploring how her body is being changed by what she has been through. As she sings in What Was That?: “I try to let whatever has to pass through me pass through.”
Again, while she on the one hand describes something moving through her body, she’s also describing an attempt to move through something that has happened to her – turning a passive experience into one of acceptance and action. Here we might think of another notion of virginity: a substance before it is processed. Virginity is part of the experience of being changed, or reborn, into something else.
This is not to say that Virgin is uninterested in the body. Lorde’s discussion of her eating disorder in Broken Glass is a case in point.
Lorde as performance artist
The visuals accompanying Virgin emphasise Lorde’s status as a performance artist. The crescendo of the What Was That video is a spontaneous public performance of Virgin’s first single.
TRYING TO MAKE IT SOUND LIKE A FONTANA, LIKE PAINTING BITTEN BY A MAN, LIKE THE NEW YORK EARTH ROOM. THE SOUND OF MY REBIRTH.
The simile here, or the idea of making music sound “like” visual art, emphasises the tactility of Lorde’s work. Each artistic piece referenced here is concerned with physically intervening into the conventional art gallery set-up.
Italian artist Lucio Fontana’s Spatial Concept series (1960) included slashed canvases: a disruption of the body of the artwork with yonic – in other words, vulva-like – imagery (indeed, it challenges how “damaged” artworks are usually hidden from audiences, waiting to be restored).
Similarly, American artist Jasper Johns’ Painting Bitten by a Man (1961) is an encaustic painting (derived from the Greek word for “burned in”), which shows off the markings of someone who has bitten into the canvas.
The video for Man of the Year.
The music video for Man of the Year is filmed in a room that is filled with dirt. This is a clear nod to American sculptor Walter de Maria’s New York Earth Room (1977), which fills a white room in New York with an unexpected material: earth inside a building, where mushrooms can grow.
The video for Man of the Year may also be referencing another artwork. Lorde is shown using duct tape to bind her breasts. While this points to Lorde’s exploration of her body and gender identity, the material also recalls Italian artist Maurizio Cattelan’s duct-taped banana artwork, Comedian.
Offering a phallic counterpart to Fontana’s yonic imagery, Cattelan’s piece mirrors Lorde’s concern with ontology, or definition. What makes something art?
Prometheus (Un)bound?
But just as Lorde is binding herself in new ways, she is unbinding herself in others. In If She Could See Me Now, Lorde declares: “I’m going back to the clay.”
Here the album recalls the Prometheus myth: the ancient Greek story that Prometheus fashioned humans out of mud (or clay) and gave his creations fire.
The closing track, David, offers another ancient allusion, this time about David and Goliath. David – who, as a harpist, is a musician like Lorde – kills the giant man with stones. This reference furthers the song’s discussion of the problem of treating a man, a lover, like a god.
In David, Lorde explores similar themes to Mary Shelley.
This subtle reference to the killing of Goliath adds another layer to the euphemism for male testicles explored in Shapeshifter: “Do you have the stones?”. Perhaps Virgin is doing what Mary Shelley’s Frankenstein (1818) did with the Prometheus tale: both exploring what happens when a man tries to create and determine the fate of another being, whether nature or nurture make a person, and how a new body can be refashioned from old ones.
After listening to the entire album, I was struck by how Lorde is exploring different facets of another question: who, exactly, is Lorde? Especially now that she is embracing who she is beyond the yoke of other people – or the demons – that have shaped her? Virgin shows that Lorde now wants to return “to the clay”, or to remake who she is, now that she is unbound by Prometheus.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Lillian Hingley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
We all need to learn how to place trust in others. It’s easy to be misled. Someone who doesn’t deserve trust can appear a lot like someone who does – and part of growing up in a society is developing the ability to tell the difference.
An important part of this is learning about the signals people give about themselves. These might be a smile, a style of dressing or a way of speaking. In particular, we use accents to make decisions about others – especially in the UK.
But what if people adapt or change their accents to fit into a certain social group or geographical area? Our past research has shown that native speakers are pretty good at spotting such speech. We’ve now published a follow-up study that supports and further strengthens our original results.
We associate accents with places, classes and groups. Research shows that even infants use accents to decide whether someone is trustworthy. This can be a problem – studies have demonstrated that accents can affect someone’s odds of getting a job, and potentially even the likelihood of being found guilty of a crime.
As with most topics in the social sciences, evolutionary theory has a lot to say about this process. Scientists are interested in understanding how people send and receive signals like accents, how those signals affect relationships between people and how, in turn, those relationships affect us.
But because accents can affect how we treat each other, we’d expect some people to try to change them for personal gain. A social chameleon who can pretend to be a member of any social class or group is likely to win trust within each – assuming they are not caught.
If that’s true, though, then we’d expect people to also be good at detecting when someone is “faking” it – what we call mimicry – setting up a kind of arms race between those who want to deceive us into trusting them and those who try to catch deceivers out.
Over the last few years, we’ve looked into how well people detect accent mimicry. Last year we found that, generally speaking, people in the UK and Ireland are good at this, detecting mimicked accents better than we’d expect by chance alone.
What was more interesting, though, was that native listeners from the specific places of the imitated accent – Belfast, Glasgow and Dublin – were a lot better at this task than were non-natives or native listeners from further away in the UK, like Essex.
Beyond the UK
Our new findings went further, though. Of the roughly 2,000 people that participated, more than 1,500 were this time based in English-speaking countries outside the UK, including the US, Canada and Australia. And on average, this group did a lot worse at detecting mimicked accents from seven different regions in the UK and Ireland than did people from the UK.
In fact, people from places other than the UK barely did better than we’d expect by chance, while native listeners were right between about two-thirds and three-quarters of the time.
As we argued in our original article, we believe it’s local cultural tensions — tribalism, classism or even warfare — that explain the differences. For example, as someone commented to me some time ago, people living in Belfast in the 1970s and 80s – a time of huge political tension – needed to be attuned to the accents of those around them. Hearing something off, like an out-group member’s accent, could signal an imminent threat.
This wouldn’t have put the same pressures on people living in more peaceful regions. In fact, we found that people living in large, multicultural and largely peaceful areas, such as London, didn’t need to pay much attention to the accents of those around them and were worse at detecting mimicked accents.
The further you move out from the native accent, too, the less likely a listener is to place emphasis on or notice anything wrong with a local accent. Someone living in the US is likely to pay even less attention to an imitation Belfast accent than is someone living in London, and accordingly will be worse at detecting mimicry. Likewise, someone growing up in Australia would be better at spotting a mimicked Australian accent than a Brit.
So while accents, and our ability to detect differences in accents, probably evolved to help us place trust more effectively at a broad level, it’s the cultural environment that shapes that process at the local level.
Together, this has the unfortunate effect that we sometimes place a lot more emphasis on accents than we should. How someone speaks should be a lot less important than what is said.
Still, accents drive how people treat each other at every level of society, just as do other signals – be they tattoos, smiles or clothes – that tell us something about another person’s background or heritage.
Learning how these processes work and why they evolved is critical for overcoming them – and helping us to override the biases that so often prevent us from placing trust in people who deserve it.
Jonathan R. Goodman receives funding from the Wellcome Trust (grant no. 220540/Z/20/A).