Why digital transformation is not a fast track to carbon neutrality

Source: The Conversation – France (in French) – By Bisrat Misganaw, Associate Professor of Strategy and Entrepreneurship, Neoma Business School

Often presented as an indispensable lever for decarbonizing our economies, digital technology is far from a miracle solution. Behind its technical and economic promises lie growing environmental and human costs. The planet will not necessarily benefit from massive digitalization, which risks instead deepening our dependence on resources and energy.


In recent years, digital transformation has often been presented as necessary for reaching carbon neutrality. The World Economic Forum in Davos, for example, estimated that the digital technology sector is the “most powerful” lever of influence “for accelerating action to limit the rise in global temperatures to less than 2°C.”

At COP29 in late 2024, the Green Digital Action declaration affirmed “the vital role of digital technologies in climate action,” the whole challenge being to harness them to mitigate climate change. Yet the same declaration also “noted with concern the adverse climate effects of […] digital technologies and related tools, devices and infrastructure.” So, on balance, does digital technology hold more promise or more threat for reaching carbon-neutrality targets? The declaration does not say.

In a recent study, we argue that the idea of a digital sector allied with the climate rests on several key assumptions, each questionable in many respects.

To be sure, there are already – and will continue to be – many examples showing that digitalization can support the cause of carbon neutrality: solutions that deliver energy-efficiency gains, that manage decentralized renewable electricity generation, or that accelerate research and development (R&D) processes.

Read more: The environmental impact of digital technology: the worrying boom to come


But the argument that digitalizing the economy will deliver carbon neutrality rests on four implicit assumptions, according to which digitalization would necessarily bring:

  • more dematerialization,
  • energy-efficiency gains,
  • lower labor costs,
  • and, finally, more environmentally friendly decisions by economic actors.

Yet we show that none of these assumptions is realistic.

Don't confuse digitalization with dematerialization

The link between digitalization and dematerialization, often presented as self-evident, needs to be questioned. Digitalization in fact brings with it a dependence on computing infrastructure and on the electronic sensors used to convert and process ever more information in digital form.

That means building new infrastructure and new computing devices. These have a material reality: manufacturing them consumes limited mineral resources, rare metals in particular. The problem is further amplified by the faster depreciation and obsolescence of computing hardware.

One could argue that these costs are offset by the additional benefits that digital services generate. But those benefits themselves carry an environmental cost.

Read more: Why does generative AI consume so much energy?


This is due, first, to their energy consumption. A single ChatGPT query, for example, consumes 50 to 90 times more energy than a classic Google search. Running artificial intelligence (AI) systems also requires large amounts of water to cool computing infrastructure, with some models consuming, at scale, millions of liters during their training and use phases. Finally, the rise of generative AI could increase demand for copper by a million tonnes by 2030.

According to a report by France's Ministry of Ecological Transition, the digital sector accounted for 2.5% of the country's annual carbon footprint and 10% of its electricity consumption in 2020. Without intervention, the sector's greenhouse gas emissions could grow by more than 45% by 2030. And according to a United Nations report, the world's data centers consumed 460 terawatt-hours (TWh) of electricity in 2022, the equivalent of France's annual electricity consumption. That figure is expected to nearly double by 2026, reaching 1,000 TWh.
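
As a rough consistency check (our own arithmetic, not a figure from either report), going from 460 TWh in 2022 to 1,000 TWh in 2026 implies average growth of roughly 21% a year:

    # Back-of-the-envelope check: implied annual growth if data-center
    # demand rises from 460 TWh (2022) to 1,000 TWh (2026).
    start_twh, end_twh = 460.0, 1_000.0
    years = 2026 - 2022
    cagr = (end_twh / start_twh) ** (1 / years) - 1
    print(f"Implied average growth: {cagr:.1%} per year")  # ~21.4% per year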

The risks of rebound effects

The promise of energy-efficiency gains from digital technology must also be questioned, because these technologies produce rebound effects. Efficiency gains lower prices, which increases demand, which in turn drives up energy consumption and the volume of electronic waste. The result: greater pressure on planetary boundaries.

Rebound effects can be direct or indirect. A direct rebound effect stems, for example, from the sheer ease of using digital services: witness the ever-growing number of online messages, video calls, and photos and videos stored on our phones and/or in the cloud.

An indirect rebound effect can be illustrated as follows: the money a company saves by cutting business travel (thanks to virtual meetings or remote work) is paid to an employee as a raise, who then spends it on a plane ticket for a holiday.

Cryptocurrencies have considerable indirect rebound effects in terms of energy consumption, and therefore climate impact.
Jorge Franganillo/Flickr, CC BY-SA

Consider, finally, cryptocurrencies, often defended for the benefits of financial decentralization. That decentralization comes with a high energy cost: their electricity consumption has overtaken Argentina's and should keep rising as decentralized finance expands.

Less labor, but more environmental impact

Decision-makers often see digital technology as a way to cut labor costs, in most sectors. Labor has an economic cost, but it is also the most sustainable of all inputs: an abundant, renewable resource whose use does not directly strain planetary boundaries.

Digitalizing work may generate savings by replacing part of the human (and sustainable) workforce with energy- and resource-hungry machines, but it does so at the environment's expense, making economic activity less sustainable – not more.

Even granting that some displaced labor could be absorbed by new business models, those models will not necessarily be more sustainable than today's. What is more, this would only reinforce current trends in inequality, which have deleterious effects on sustainability. Carbon neutrality achieved at the price of mass impoverishment, and in defiance of the United Nations Sustainable Development Goals, seems unacceptable.

Finally, the argument that digital technology would lead companies to make more sustainable decisions is unfounded. Corporate decisions are driven first by profit maximization, growth opportunities and internal efficiency gains, in line with existing governance structures. Decisions about digital technology are no exception.

As long as maximizing shareholder value remains the guiding principle of corporate governance, there is no reason to expect corporate-led digitalization to prioritize building a carbon-neutral economy over profitability. On the contrary, information technology so far seems mainly to have reinforced existing trends.

Beware of technological solutionism

The arguments above show that digitalization does not, in itself, always support carbon neutrality. Like every major innovation, it widens the range of what is economically possible. That means there are significant opportunities for sustainable, transformative investment.

But we should be wary of purely technological fixes for sustainability problems, comforting as they are precisely because they demand no real change to the status quo. That false sense of security is exactly what has led us, collectively, to exhaust planetary boundaries.

Digital technology can support the green transition, but seizing its opportunities requires a genuine shift in decision-making processes. For now, states and a handful of companies remain the only levels at which these decisions are made. In other words, we need a collective awakening to the links between technology, energy and society; without it, reaching carbon neutrality through digital technology will remain wishful thinking.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their research organisation.

ref. Pourquoi la transformation numérique n’est pas une voie royale vers la neutralité carbone – https://theconversation.com/pourquoi-la-transformation-numerique-nest-pas-une-voie-royale-vers-la-neutralite-carbone-269636

Is generative AI sustainable? The true ecological cost of a prompt

Source: The Conversation – France in French (2) – By Denis Trystram, Professor of Computer Science, Université Grenoble Alpes (UGA)

Move along, nothing to see here? Estimates of generative AI's environmental footprint, such as those Google produced in summer 2025 for its Gemini AI, seem reassuring: just 0.03 g of CO2 and five drops of water per prompt. In reality, such results depend heavily on methodological choices, and the studies behind them are mostly conducted in-house and lack transparency. The problem is that these figures increasingly serve as a marketing argument for using generative AI, while ignoring the very real risk of a rebound effect as usage explodes.


Since ChatGPT's release in late 2022, generative AI has been riding high. In July 2025, OpenAI announced that ChatGPT was receiving 18 billion prompts (written instructions from users) per week from 700 million users – 10% of the world's population.

Today, the rush toward these tools is global: every Big Tech player now develops its own generative AI models, mainly in the United States and China. In Europe, France's Mistral, which produces the assistant Le Chat, recently broke records with a valuation close to €12 billion. Each of these models sits within a given geopolitical environment, with sometimes differing technological choices. But all of them have a considerable ecological footprint that keeps growing exponentially, driven by multiplying uses. Some experts, including those of the specialist think tank The Shift Project, are sounding the alarm: this growth is not sustainable.

Read more: Can AI really be frugal?


Yet everyone in the field – including consumers – is now well aware that digital usage carries an environmental cost, if not necessarily of the numbers involved.

Driven by multiple motives (regulatory obligations, marketing, sometimes environmental conscience), several big tech companies have recently carried out life cycle assessments (LCA, a methodology for evaluating the overall environmental impact of a product or service) of their models.

In late August 2025, Google published its own, quantifying the impacts of its Gemini model. What are these estimates worth, and can they be trusted?

A surprisingly low carbon footprint

To work at all, a generative AI model must first be “trained” on a large body of written examples. To measure the electricity consumed by a prompt, Google therefore focused on the use phase – not the training phase – of its Gemini AI. By its own calculations, Google reports that a prompt consumes just 0.24 watt-hours (Wh) on average. That is very little: about one minute of a standard 15-watt light bulb.
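
The light-bulb comparison is easy to verify (our own arithmetic, using the figures above):

    # Sanity check: how long would a standard 15 W bulb run on 0.24 Wh?
    energy_wh = 0.24       # reported average energy per Gemini prompt
    bulb_power_w = 15.0    # bulb wattage used in the comparison
    minutes = energy_wh / bulb_power_w * 60
    print(f"{minutes:.2f} minutes")  # ~0.96, i.e. about one minute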

How did the authors arrive at this figure, significantly lower than those of other studies on the subject, such as the one Mistral AI published in July 2025?

The first reason lies in what Google actually measures. The report reveals, for example, that of the electricity consumed by a prompt, 58% goes to AI-specialized processors (graphics processing units, or GPUs, and the Tensor Processing Unit, or TPU, a custom chip), 25% to conventional processors, about 10% to idle processors, and the remaining 7% to server cooling and data storage.

In other words, Google counts only the electricity consumed by its own data centers, not that consumed by users' devices and routers.

Moreover, no information is given about the number of users or the number of queries included in the study, which undermines its credibility. Under these conditions, it is impossible to know how user behavior affects the model's environmental impact.

Read more: The environmental impact of digital technology: the worrying boom to come


In 2024, Google bought the equivalent of Ireland's annual electricity production

The second reason lies in how the electricity consumed is converted into CO2 equivalent. The conversion depends on the electricity mix where the power is consumed, on the data-center side as much as on the user-device side. Here, as we have seen, Google considers only its own data centers.

Google has long bet on energy optimization, turning to low-carbon or renewable sources for its data centers around the world. According to its latest environmental report, the effort seems to be paying off, with emissions down 12% in a year even as demand rose 27% over the same period. The needs are colossal: in 2024, Google consumed 32 terawatt-hours (TWh) for its computing infrastructure – the equivalent of Ireland's annual electricity production.
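
Taken together, those two figures imply that emissions per unit of demand fell by roughly 30% in a year (our own derivation, not a number from the report):

    # If emissions fell 12% while demand rose 27%, carbon intensity changed by:
    emissions_ratio = 1 - 0.12   # year-on-year emissions multiplier
    demand_ratio = 1 + 0.27      # year-on-year demand multiplier
    intensity_change = emissions_ratio / demand_ratio - 1
    print(f"{intensity_change:.1%}")  # ~-30.7% CO2 per unit of demand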

Read more: A data center near your home: good or bad news?


Indeed, the company signed 60 exclusive long-term electricity supply contracts in 2024, bringing its total to 170 since 2010. Given the scale of Google's operations, locking up electricity through exclusive long-term contracts undermines decarbonization in other sectors. The low-emission electricity that powers prompts could, for example, be used for heating, a sector still heavily dependent on fossil fuels.

In some cases, these contracts involve building new power generation infrastructure. Yet even renewable, low-carbon generation is not entirely neutral for the environment: the impact of manufacturing photovoltaic panels, for instance, ranges from 14 to 73 gCO2eq per kWh, which Google does not factor into its calculations.

Finally, many Google services rely on “colocation” of servers in data centers that are not necessarily low-carbon, which the study does not take into account either.

In other words, the methodological choices made for the study helped minimize the magnitude of the figures.

Five drops of water per prompt, but 12,000 Olympic pools in total

Freshwater consumption is appearing more and more often in environmental reporting on digital technology, and for good reason: it is a precious resource, and part of a planetary boundary that has recently been crossed.

Google's study puts Gemini's water consumption at 0.26 ml – five drops of water – per prompt. The figure seems trivial at the scale of a single prompt, but little streams make great rivers: it has to be set against the explosion of AI usage.

Overall, Google consumed about 8,100 million gallons of water (roughly 30 million cubic meters, the equivalent of some 12,000 Olympic swimming pools) in 2024, up 28% from 2023.
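
The pool comparison checks out (our own conversion, assuming US gallons and a 2,500 m3 Olympic pool):

    # Unit check: gallons -> cubic meters -> Olympic swimming pools.
    gallons = 8_100e6                        # reported 2024 water consumption
    cubic_meters = gallons * 3.785 / 1_000   # US gallons to m3
    pools = cubic_meters / 2_500             # ~2,500 m3 per Olympic pool
    print(f"{cubic_meters / 1e6:.0f} million m3, ~{pools:,.0f} pools")
    # ~31 million m3, ~12,000 pools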

But here too, the devil is in the details: Google's report counts only the water used to cool its servers (on a principle much like the way we cool down when sweat evaporates from our skin). It thus excludes the water consumed in generating electricity and in manufacturing servers and other computer components – water that is nonetheless counted in its carbon footprint, as we saw above. As a result, the environmental indicators (carbon, water and so on) do not all share the same scope, which complicates their interpretation.

Read more: The metals in our connected devices, the hidden face of digital technology's environmental impact


Studies that are still too opaque

Like most studies on the subject, Google's was conducted in-house. The concern for industrial secrecy is understandable, but such a lack of transparency and independent expertise raises questions about the study's legitimacy and, above all, its credibility. We can nonetheless look for points of comparison with other AIs, for instance through the figures Mistral AI presented in July 2025 on the environmental impacts associated with the life cycle of its Mistral Large 2 model – a first.

That study was carried out with Carbone 4, a recognized French player in life cycle assessment (LCA), and with the support of the French Agency for the Environment and Energy Management (Ademe), which lends it credibility. The results are as follows.

Over the model's total life span of eighteen months, about 20,000 tonnes of CO2 equivalent were emitted, 281,000 m3 of water consumed, and 660 kg of antimony equivalent used (an indicator that captures the depletion of metallic mineral raw materials).

Results presented by Mistral in summer 2025.
Mistral AI

Mistral points out that using the model (inference) has effects it deems “marginal”: for an average prompt of 400 “tokens” (processing units correlated with the length of the output text), a prompt corresponds to 1.14 g of CO2 equivalent, 50 ml of water and 0.5 mg of antimony equivalent. These figures are higher than Google's, which, as we have seen, were obtained through an advantageous methodology. Google, moreover, based its study on a “median” prompt without providing further statistical detail, which would have been welcome.

In reality, one of the main motivations behind this type of study, whether Google's or Mistral's, remains marketing: reassuring people about AI's environmental impact (call it “greenwashing”) in order to encourage consumption. Speaking only of the impact of users' prompts also obscures the global picture of costs (those of training the models, for example).

Read more: How can electronics be made more sustainable?


The principle of conducting impact studies is, let us grant, a positive one. But the opacity of these studies, even when they have the merit of existing, must be questioned. To date, neither Mistral nor Google has disclosed the full details of the methodologies used, the studies having been conducted in-house. What is needed is a common reference framework clarifying what must be included in the complete life cycle assessment (LCA) of an AI model. That would make it possible to genuinely compare results across models and to limit marketing effects.

One limitation probably lies in the complexity of generative AI. What share of the environmental footprint should be attributed to the smartphone or computer used to type the prompt? Do models that allow fine-tuning to adapt to the user consume more?

Most studies of the environmental footprint of generative AI treat these systems as closed, which rules out the crucial question of the rebound effects these new technologies induce. Reducing the problem to the environmental cost of a single prompt blinds us to the dizzying growth in our use of AI.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their research organisation.

ref. L’IA générative est-elle soutenable ? Le vrai coût écologique d’un prompt – https://theconversation.com/lia-generative-est-elle-soutenable-le-vrai-cout-ecologique-dun-prompt-269432

COP30: eight economic reasons that hold countries back from making ambitious climate commitments

Source: The Conversation – (in Spanish) – By José Alba Alonso, Associate Professor of Applied Economics, Universidad de Oviedo

World leaders at the COP30 climate summit in Brazil. UN Climate Change – Zo Guimarães/Flickr, CC BY-NC-SA

Climate change is one of society's greatest challenges. Yet despite a succession of world summits to confront it, such as the one being held these days in Brazil (COP30), the measures needed to mitigate its advance quickly and effectively have still not been taken.

If states know what changes they ought to make, and the gravity and implications of not making them, why are the commitments these meetings manage to reach so meager?

At last year's COP, the great achievement was to earmark less than three-thousandths of world GDP (US$300 billion a year) to help developing countries fight climate change. For 2025, 30 key objectives are on the table.

Much of the contradiction between what negotiators believe is necessary and what they ultimately sign has nothing to do with climate science, and everything to do with political and economic questions. Here are some of them:

1. Asymmetry between costs and benefits

There is a clear asymmetry between the commitments each state would have to assume and the benefits it could obtain. Applying environmental measures that limit production or raise costs in no way guarantees that a country will reap equivalent economic returns in the short term.

The European Union, which has advanced furthest in these commitments, is a paradigmatic case: various actors accuse it of penalizing itself and losing competitiveness against third countries by taking on greater commitments than others.

Read more: Environmental justice in Europe: should southern countries be compensated for their climate vulnerability?


2. Differences in capacity to exert influence

There is a conflict between the interests of different actors with unequal capacities to exert influence. On one side stand interests opposed to policies that would alter the productive landscape; on the other, a great many individuals and organizations in favor of acting against climate change.

The balance tips toward those defending something very much their own, to which they give priority. The “big four” oil companies, for example, exert influence continuously and with vast resources (research, communication, lobbying and more). Meanwhile, millions of people deeply concerned about the climate can hardly influence the COPs. They do so only weakly, represented by environmental groups, which have great determination but also great difficulty getting agreements adopted.

3. Unpopularity of the decisions

Political mandates are limited in time. Those in a position to take decisions that yield deferred social benefits watch public opinion judge the resulting restrictions harshly. Aware of the unpopularity of measures that ultimately touch everyday life (taxes, vehicle replacement, building requirements, obligations on farmers and so on), many decision-makers shy away from far-reaching decisions precisely because they are unpopular.

Read more: Are environmental taxes right-wing or left-wing?


4. Differences between rich and poor countries

There are situations and arguments of every kind for defending different positions on how to share out the necessary efforts. If we start from a maximum “acceptable” quantity of emissions, dividing that potential pollution among states leads to confrontation.

The richest countries pollute more, and they propose reductions relative to the emissions they have been producing. The poorest argue that they did not cause the problem and that it is now their turn to grow, as others did for so long. That would give low-income countries larger pollution quotas than those who already grew without restrictions – something the developed countries do not accept.

Read more: Developed and developing countries: different obligations in climate action?


5. Costly measures

The best available alternatives must be applied to minimize harmful emissions, but that carries a cost. How is such a budget increase to be covered for countless industrial complexes, means of transport, heating and cooling systems, and more?

On many occasions, rich states have proposed channeling financing toward the best technology, but the actual budget commitments have fallen short. Nor is it easy for countries with fewer resources to accept modernization commitments that may cause them serious direct harm.

6. Doubts about the effectiveness of policies

Serious doubts have begun to emerge about the effectiveness of any measures that might be taken. The temperature increase already produced may have exceeded the margin within which there was room to maneuver. Failed experiences – or those perceived as failures – are a further drag.

There is an enormous debate in the European Union over the electric car, for example. We have also seen how penalizing maritime transport for its emissions has led many ships to call first at ports in North Africa and sail on from there, minimizing the stretch of the voyage on which environmental charges are levied.

7. Lack of a global perspective at the summits

Many of the points above come down to the disposition and technique of those taking part in climate conferences. The very dynamic of negotiation leads each party to play its cards in whatever way favors it most, seeking advantage based on its reading of the other negotiators' positions. That attitude erodes any global perspective and makes agreement even harder to reach.

Read more: Why climate conferences are political, not scientific, meetings


8. Discontinuity of governments

Finally, there are many factors not directly tied to the negotiations that nonetheless shape an environment from which state representatives cannot escape.

Discontinuity in government makes it easy for agreements involving anything new to be interrupted. In that regard, Donald Trump's inauguration in the United States at the start of 2017 was decisive, shattering the greatest progress achieved until then.

In addition, public perception of numerous unwanted consequences, the displacement of the least efficient producers (unable to compete in the new landscape), and the need to rethink an entire way of life (energy, land use planning, transport, materials and more), along with production systems and consumption patterns, all help forge a business-as-usual mood.

Will COP30 be different?

The climate is not receiving due care from states as a whole, whose governments act under the constraints outlined above. Nor is there an international organization capable of taking the lead. If some countries declined to apply measures, they could attract the most disruptive activities, weakening the overall result. As an exception to the general apathy, the European Union has tried to lead the process, but it has not persuaded others to follow.

Let us hope COP30 delivers better results than the preceding episodes. Society is now aware of some of the consequences of inaction, and that could soften the edges of the arguments in which I have condensed the origins of the postponements. But we must not ignore socioeconomic and political reality, whose influence is decisive.

The challenge is to gain perspective, take a holistic view and understand interdependence – between the environment and the economy, the countries of the north and those of the south, consumers and producers, and so on – in order to overcome the difficulties.

The Conversation

José Alba Alonso is Vice-President of the Asturian Council of the European Movement and a member of the Asturian Association of Friends of Nature (he was formerly a member of the Scientific Committee of Friends of the Earth Spain).

ref. COP30: ocho razones económicas que frenan a los países a la hora de asumir compromisos climáticos ambiciosos – https://theconversation.com/cop30-ocho-razones-economicas-que-frenan-a-los-paises-a-la-hora-de-asumir-compromisos-climaticos-ambiciosos-269936

Agricultural exports from Africa are not doing well. Four ways to change that

Source: The Conversation – Africa – By Lilac Nachum, Visiting professor, Strathmore University

Africa is the world’s most endowed continent in agricultural potential, yet it remains a marginal player in global agribusiness. This paradox lies at the heart of Africa’s development challenge.

Africa holds nearly half of the world's uncultivated land that can be used for growing crops. This land is largely unprotected and unforested, with low population density. The continent's climate supports the growth of 80% of the foods consumed globally. Economic theory would predict that these conditions would lead to strong export performance. Yet Africa's share of global agricultural exports is the lowest worldwide. It fell from about 8% in 1960 to 4% in the early 2020s, according to World Bank data.

Policymakers have largely neglected agribusiness export performance, with a few exceptions, such as Kenya and Ghana. Agribusiness refers to the entire range of activities in producing, processing, distributing and marketing agricultural products.

Despite being the largest contributor to GDP and employment, agribusiness receives a disproportionately small share of government spending (on average 4%), far below its economic significance. There are variations across the continent, from 8% in Mali and 7% in South Sudan to less than 3% in Kenya and Ghana. Many governments have instead chosen manufacturing as the pathway to global integration.

Based on insights from over three decades of research, consulting and teaching on global markets and development, I argue that agriculture could lead Africa’s integration into the world economy. Four reforms would be necessary: improving access to capital; documenting land; designing targeted cross-border policies; and strategically employing trade policy.

In these ways, Africa could use its natural assets to secure broad-based economic growth and a stronger position in global value chains.

Four reforms to support agribusiness

1. Improve access to capital

Capital scarcity remains the most serious constraint on African agribusiness. Financial institutions are reluctant to lend due to high risk, long investment horizons, poor collateral, and profits that are vulnerable to price shocks. The World Bank estimates that agriculture receives only about 1% of commercial lending (rising to about 6% in Nigeria and Ethiopia) despite contributing 25%-40% of GDP. Lending rates are often double the economy-wide average, as UN Food and Agriculture Organization data for Uganda show.

Governments can help close this financing gap. In 2024, Kenya allocated US$7.7 million for developing its tea production. Domestic investment can generate savings by cutting food import bills. Nigeria’s Tomato Jos project, for instance, reduced annual tomato paste imports by US$360 million.

Governments should expand public lending while also enabling private sector participation through risk-sharing mechanisms. South Africa’s Khula Credit Guarantee Scheme illustrates how government-backed guarantees can unlock finance for collateral-poor farmers. This model has been reproduced in Kenya and Tanzania with EU and development bank support.

Private finance sources such as venture capital have also grown rapidly. In 2024, Nigeria and South Africa each attracted about US$500 million in venture funding. Funded African startups have grown six times faster than the global average. Micro-lending platforms now exceed US$8.5 billion in loans.

2. Document the land

Over 80% of Africa’s arable land is undocumented and governed by customary tenure systems poorly integrated into formal law. Weak land administration deters investment and limits land’s use as collateral. Transfers cost twice as much and take twice as long as in OECD countries (the world’s 38 most developed countries). That constrains access to credit and economies of scale needed for exports.

Several land tenure reforms introduced in the last decade demonstrate the benefits of formalisation. Ethiopia issued certificates to 20 million smallholders, boosting rental activity. Malawi’s redistribution of 15,000 hectares raised household incomes by 40%. In Mozambique, Uganda and Liberia, governments legally recognised customary institutions to facilitate formal land contracts. Rwanda’s comprehensive land mapping further improved transparency and investment incentives.

3. Design focused cross-border policies

Regional and global markets need different strategies for export success. Intra-African trade benefits from proximity and regulatory harmonisation. The East African Community’s trade facilitation measures increased intra-regional dairy exports 65-fold within a decade.

Most African agricultural exports, however, go to non-African markets, requiring infrastructure and logistics investments to ensure speed and quality. Senegal increased exports by 20% annually after investing in high-speed shipping, while Ethiopia’s flower growing boom owes much to its air transport and cold-chain systems.

Policies must also be crop specific. Kenya’s targeted avocado export strategy transformed it into Africa’s largest exporter, with double-digit annual growth. Mali’s mango export policy built a competitive value chain serving European markets.

4. Use trade policy as a tool for upgrading

African exporters primarily sell raw, low-value materials. Nigeria, a top tomato producer, exports nearly all production unprocessed – and imports paste. Less than 5% of Kenyan tea, the nation’s leading export, is branded. Trade policy can reverse this imbalance by encouraging domestic processing.

The East African Community’s differentiated tariff structure successfully encouraged value addition by lowering duties on intermediate goods while protecting local food processing. Governments could similarly tax or restrict unprocessed exports to motivate upgrading. At the same time, it’s necessary to invest in processing capacity. Several countries, including Botswana, Uganda and Côte d’Ivoire, have attempted raw export bans with limited success because the enabling conditions are missing.

A decisive shift

Africa’s agribusiness sector embodies the continent’s untapped potential for structural transformation. With abundant land, favourable climate and rapidly growing domestic demand, Africa possesses clear comparative advantages. Africa is also becoming more capable of addressing the challenges that have arrested the development of the agribusiness sector in the past. This article develops a policy agenda designed to reverse Africa’s declining share of world agricultural trade by amending institutional failures that have constrained competitiveness.

This agenda is based on enhancing access to finance, formalising land rights, implementing targeted cross-border initiatives, and using trade policy for upgrading. A decisive policy shift towards an agriculture-led development agenda is essential. Implementing this agenda will enable African countries to improve their economic position at home and in the world.

The Conversation

Lilac Nachum does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Agricultural exports from Africa are not doing well. Four ways to change that – https://theconversation.com/agricultural-exports-from-africa-are-not-doing-well-four-ways-to-change-that-268780

50 years after Franco’s death, giving a voice to Spanish dictator’s imprisoned mothers

Source: The Conversation – Global Perspectives – By Zaya Rustamova, Associate Professor of Spanish, Kennesaw State University

A protester holds a banner with pictures of people who went missing during the Spanish dictatorship of Francisco Franco. John Milner/SOPA Images/LightRocket via Getty Images

In the run-up to the 50th anniversary of Francisco Franco’s death on Nov. 20, 2025, the left-leaning Spanish government led a vigil honoring the many victims of the dictator’s regime.

While the exact numbers remain impossible to determine, historians estimate that Franco’s men killed up to 100,000 people during the brutal Spanish Civil War, and tens of thousands were executed during his dictatorial rule from 1939 until his death in 1975. Hundreds of thousands more were imprisoned, sent to labor camps or subjected to political persecution. To these figures, we must add the roughly half a million people who fled or were forced into exile.

Among the multitudes of Francoism’s victims were women and children who endured psychological and physical abuse in prisons, orphanages and asylums. Yet for decades their experiences have remained marginal in the public narrative – highlighting the uneven acknowledgment of different groups of victims amid Spain’s broader struggle to confront its past.

Still, their stories remain alive in the testimonies of the women who were imprisoned by the regime. In the summer of 2024, I conducted research at the Documentation Center of Historical Memory in Salamanca, collecting documented written accounts of traumatic experiences suffered by Spain’s female population under Franco. They reveal the extent to which Francoist repression was structured through gender, framing women as inherently subordinate and subjecting those who resisted the regime’s patriarchal order to especially severe punishment.

Franco’s gendered violence

My study explores the testimonies of women imprisoned during the civil war or subsequent decades, all of whom endured suffering related to their motherhood. While some were detained for their ideological allegiance to the republic that preceded Franco’s ascent, others had no formal partisan affiliations or were merely related to men who did.

These women suffered what many survivors and historians have described as a “double punishment” – targeted not only for their beliefs or associations but just for being women and mothers.

The earliest testimony I came across was from a woman detained in 1939, just three years after Franco, a military general, led an uprising against the democratically elected government of the Second Republic that precipitated the civil war and his subsequent reign.

Franco gives a fascist salute as he and his Nationalist forces enter Barcelona in March 1939.
AP Photo

Under Franco’s dictatorial regime, women’s roles were rigidly controlled by the ideology of National Catholicism, which linked femininity, motherhood and loyalty to the state. The church reinforced this vision, “dictating that women served the fatherland through self-sacrifice and dedication to the common good.”

Those who defied the patriarchy were criminalized and subjected to “re-education” focused on religious values.

Women’s so-called “redemption” under this reeducation was no less violent than their confinement. As one witness described, in May of 1939 the auditorium of Las Ventas prison was prepared to celebrate “two girls and a boy (… recently) born in prison.” During the ceremony, a choir “composed of forty inmates, including opera singers, music teachers, violinists, and amateurs,” had to perform the national anthems with the fascist arm-raised salute.

Yet confinement itself was particularly brutal.

According to Josefina García, a woman imprisoned during the 1940s, guards regularly insulted and beat inmates. “If you were at home behaving like decent women, you wouldn’t be here,” she recalled one saying. García continued: “Of course, they used crude, sexist language. The police ‘used words’ in a way that sometimes leaves a mark deeper than a bruise.”

Gender also played a role in the type of punishment prisoners received. Following their arrest, women were subjected to head shaving, forced ingestion of castor oil and the subsequent public humiliation of being made to walk in circles while defecating. In addition, they were often subjected to sexual violence by prison guards or interrogating officers.

Recounting her experience, another witness reported the case of an 18-year-old sister of a guerrilla fighter in Valencia who “was subjected to terrible torture, stripped naked in a room with several Civil Guards who pricked her breasts, genitals, and stomach with … needles.”

A protester holds a banner with an image of an unknown woman – a victim of the Franco regime.
AP Photo/Paul White

Motherhood as battleground

One of the most painful aspects of Franco’s repression was the forced separation of mothers and their children.

Upon incarceration, women frequently lost custody of their sons and daughters, who were placed in orphanages or adopted by families loyal to Franco and his regime. Such violent ruptures of the maternal bond were more than an act of personal cruelty – they were a calculated political strategy rooted in the broader Francoist ideology.

Since Francoism promoted an image of women as obedient wives and self-sacrificing mothers devoted to the Catholic family model, Republican women were demonized as immoral, dangerous and unworthy of motherhood.

By stripping women of their children, the regime both punished them and reinforced its narrative that only “loyal” women could be true mothers.

Meanwhile, child-rearing or birth during incarceration was marked by fear and uncertainty. In certain cases, newborns were allowed to stay with their mothers for a short time. However, a lack of proper nourishment and mental exhaustion made breastfeeding an impossible task.

At times, women who began to lactate were denied the possibility of nursing their infants, leading to physical pain and emotional torment.

More often, babies were permanently taken away altogether, deemed at risk of being “contaminated” by their mothers’ ideological values.

“When I was arrested, my son was five days old,” one victim, Carmen Caamaño, reported. “About a year later, they said I no longer needed to breastfeed him and took the child out of the prison. Some friends had to take him in because I had no family there.”

Women pay tribute to victims of the Franco regime in front of a flag of the Spanish Republic.
AP Photo/Alvaro Barrientos

There were also countless cases in which children were imprisoned alongside their mothers. With no other relatives to care for them, these children suffered from hunger, disease and a lack of basic hygiene in their overcrowded cells. For mothers, the psychological burden was immense as they were forced to watch their children suffer, yet they had no power to protect them.

In the summer of 1941, about six or seven children died daily in these prisons from starvation and diseases, according to accounts of survivors.

Trauma and resistance

Alongside trauma, there were also moments of resistance.

Mothers in prison looked for ways to nurture their children despite scarcity and fear. Testimonies I reviewed relate cases of inmates sharing food, telling stories and protecting children as best they could. These small acts of care were a quiet but powerful form of defiance.

Yet for many women, the trauma of these losses never healed. Survivors often speak of the pain of separation as an open wound that lasted a lifetime. Children raised in prisons or separated from their families carried the scars into adulthood.

Even decades after the regime ended, many descendants still struggle with the weight of this silenced past. Yet because of Spain’s Amnesty Act of 1977, which granted amnesty for past political crimes, those responsible for atrocities committed under Franco have seldom been held accountable.

Histories of the Franco years often leave the grief of the intergenerational trauma in the shadows. And for the victims themselves, the traumatic motherhood experiences under his dictatorship reveal more than just personal suffering – they expose how authoritarian power can reach into the most intimate parts of life.

The Conversation

Zaya Rustamova received funding to conduct this study from the Norman J. Radow College of Humanities and Social Sciences, Kennesaw State University.

ref. 50 years after Franco’s death, giving a voice to Spanish dictator’s imprisoned mothers – https://theconversation.com/50-years-after-francos-death-giving-a-voice-to-spanish-dictators-imprisoned-mothers-249931

Behind every COP is a global data project that predicts Earth’s future. Here’s how it works

Source: The Conversation – Global Perspectives – By Andy Hogg, Professor and Director of ACCESS-NRI, Australian National University

Arash Hedieh/Unsplash

Over the past week we’ve witnessed the many political discussions that go with the territory of a COP – or, more verbosely, the “Conference of the Parties to the United Nations Framework Convention on Climate Change”.

COP30 is the latest in a series of annual meetings aiming to reach global agreement on how to address climate change. But political events such as COP base the need for action on available science – to understand recent changes and to predict the magnitude and impact of future change.

This information is provided through other international activities – such as regular assessment reports that are written by the Intergovernmental Panel on Climate Change (IPCC). These reports are based on the best available scientific knowledge.

But how exactly do they evaluate what will happen in the future?

Climate futures

Predictions of future climate change are based on several key planks of evidence. These include the fundamental physics of radiation in our atmosphere, the trends in observed climate and longer-term records of ancient climates.

But there is only one way to incorporate the complex feedbacks and dynamics required to make quantitative predictions: climate models. These models use supercomputers to solve the complex equations needed to project the future climate.

The most sophisticated climate models are known as Earth system models. They ingest our knowledge of climate physics, radiation, chemistry, biology and fluid dynamics to simulate the evolution of the entire Earth system.

Climate centres from many different nations develop Earth system models, and contribute to a global data project known as CMIP – the Coupled Model Intercomparison Project. This data is then used by scientists worldwide to better understand the possible trajectories of, and to study the reasons for, future change.

Regional climate changes

Data from Earth system models cover the whole globe, but there is a catch. The computational expense of these models means that we run them at low resolution – that is, aggregating information onto grid boxes that are about 100 kilometres across. This puts the entirety of Melbourne, for example, within a single grid box.

But the climate information that we need to guide future adaptation needs more detailed information. For this, scientists use tools known as “downscaling”, or regional climate projections. These take the global projections and produce higher resolution information over a limited region.

This high-resolution information feeds into products such as the recently released National Climate Risk Assessment from the Australian Climate Service. Similar climate information is used by local governments, businesses and industry to understand their exposure to climate risk.

We’re doing it all again

Each iteration of CMIP, which began in 1995, has brought about improvements which have helped us to better understand our global climate.

For example, CMIP5 (from the late 2000s) helped us to understand carbon feedbacks and the predictability of the climate system. The CMIP6 generation of climate models (from the late 2010s) provided more accurate simulation of clouds and aerosols, and a wider set of possible future scenarios.

Now we are doing it all again – to create what will be known as CMIP7. Why would we do this?

The first reason is that more climate information has become available since CMIP6. CMIP simulations use “scenarios” to look at the range of plausible futures of climate change under different socio-economic and policy pathways.

For CMIP6, the “future” scenarios were started from the year 2015, using the information available at the time. We now have an extra decade of information to refine our projections.

The second reason is that CMIP7 shifts more to emissions-driven simulations for carbon dioxide, allowing models to calculate atmospheric concentrations on the fly.

Simulating how atmospheric carbon dioxide and other greenhouse gases interact with the land and ocean (known as the carbon cycle) allows feedbacks and potential tipping points to be calculated. However, this also requires a more complex Earth system model.

Australia’s CMIP7 contribution aims to incorporate new science and knowledge with a refined carbon cycle which includes Australian vegetation, bushfires, land use change and improved ocean biology.

Thirdly, this time around we aim to run models at higher resolution – such as having 16 grid boxes over Melbourne, instead of one. This is possible thanks to advances in computational capability and modelling software.
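
To see why higher resolution is expensive, consider the scaling (our illustration, with assumed numbers; real model costs also depend on vertical levels, ocean-atmosphere coupling and more):

    # Going from ~100 km to ~25 km grid boxes in each horizontal direction.
    coarse_km, fine_km = 100, 25
    refinement = coarse_km / fine_km   # 4x finer in each direction
    boxes = refinement ** 2            # 16 boxes where there was one
    # Stability (CFL) constraints typically force the timestep to shrink
    # with the grid spacing, so cost grows faster than the cell count:
    cost_factor = boxes * refinement   # ~64x more work per simulated year
    print(boxes, cost_factor)          # 16.0 64.0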

We’ve started the process

This week, Australia’s newest Earth system model version – known as ACCESS-ESM1.6 – is initiating the first phase in the CMIP7 contribution process, which is supported through the National Environmental Science Program Climate Systems Hub.

This includes a long “preindustrial spinup”, where we run the model for about 1,000 virtual years using greenhouse gas levels from before the industrial revolution, until stable conditions are reached and available observations are matched. The spinup is required to ensure that all subsequent simulations start from a physically consistent state.
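
As a toy illustration of how such a spinup might be judged stable (this is not the ACCESS workflow; the data and tolerance below are invented), one can fit a trend to the final century of the run and require the drift to be negligible:

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1_000)
    # Fake annual global-mean temperature anomalies decaying toward equilibrium
    temps = 0.5 * np.exp(-years / 200) + rng.normal(0, 0.02, size=years.size)

    window = 100  # assess drift over the final century of the run
    drift = np.polyfit(years[-window:], temps[-window:], 1)[0] * 100
    print(f"Drift: {drift:+.3f} K per century")
    if abs(drift) < 0.05:  # invented tolerance
        print("Stable enough to branch the historical simulation")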

In the next phase we’ll run a “historical” simulation that emulates the last 200 years of civilisation. Only then can we implement a range of future scenarios and complete our climate projections.

This work is a partnership between CSIRO and Australia’s climate simulator (ACCESS-NRI), with support from university-based scientists and the Bureau of Meteorology. It’s an exercise that will take multiple years, consume hundreds of millions of compute hours on high performance supercomputers of the National Computational Infrastructure, and will produce about 8 petabytes of data – or 8 million gigabytes – to be processed and submitted to CMIP7.

As the only Southern Hemisphere nation submitting to past CMIPs, Australia has a unique and crucial perspective.

This data will also be used for higher resolution regional climate projections, which will then be used for future climate risk assessments and adaptation plans. It will also inform IPCC’s next assessment report.

Ultimately, a future COP will translate this evidence into global action to further refine our climate targets.


The authors acknowledge the work of Christine Chung and Sugata Narsey from the Bureau of Meteorology in preparing this article

The Conversation

Andy Hogg works for Australia’s Climate Simulator (ACCESS-NRI), based at the Australian National University. He receives funding for ACCESS-NRI from the Department of Education through the National Collaborative Research Infrastructure Strategy, and receives research funding from the Australian Research Council. He is a member of the Australian Meteorological and Oceanographic Society.

Tilo Ziehn receives funding from the National Environmental Science Program.

ref. Behind every COP is a global data project that predicts Earth’s future. Here’s how it works – https://theconversation.com/behind-every-cop-is-a-global-data-project-that-predicts-earths-future-heres-how-it-works-269893

The Dayton Peace Accords at 30: An ugly peace that has prevented a return to war over Bosnia

Source: The Conversation – Global Perspectives – By Gerard Toal, Professor of Government and International Affairs, Virginia Tech

World leaders clap as, from left, Serbian President Slobodan Milosevic, Croat President Franjo Tudjman and Bosnian President Alija Izetbegovic sign the Dayton Peace Agreement. Peter Turnley/Corbis/VCG via Getty Images

On Nov. 21, 1995, in the conference room of the Hope Hotel on the Wright-Patterson Air Force Base in Dayton, Ohio, the leaders of Bosnia-Herzegovina, Serbia and Croatia initialed an agreement that brought the three-and-a-half-year war in Bosnia to an end. Three weeks later, the General Framework Agreement, known as the Dayton Peace Accords, was signed.

The war over Bosnia was the most brutal and devastating of the wars spawned by the dissolution of Yugoslavia. Attacked from the moment it moved toward independence in early 1992 by militias supported by the neighboring nations of Croatia and Serbia, Bosnia was born under fire and nearly perished. Half of its population of 4.4 million were forcefully displaced, and over 100,000 people died during the conflict.

Ethnic cleansing and war crimes marked the war, including the Srebrenica genocide of July 1995, in which more than 8,000 Bosniak victims were murdered by the army of Republika Srpska.

The peace agreed to at Dayton left Bosnia, or Bosnia and Herzegovina as it is known in full, intact as a country but divided into two entities, Republika Srpska – a secessionist entity proclaimed by ethnonationalist Serbs in January 1992 – and the Bosnian Federation. Meanwhile, an international military force was deployed to secure the peace.

But it was an ugly peace: The patient was saved, but left deformed and weak. As scholars who have written extensively about the Bosnian war and its aftermath, we believe the legacy of the Dayton Peace Accords, 30 years on, is decidedly mixed.

The sorting of ethno-territories

Bosnia’s life after Dayton can be divided into three roughly decade-long eras: reconstruction, stalemate and permanent crisis.

The first decade was the toughest but most hopeful. With peace enforced by an international force including U.S. and Russian troops, Bosnians returned to their war-shattered country.

But restoring the country’s social fabric proved hard. While the international community aspired to reverse ethnic cleansing, the obstacles were immense.

A once proudly multicultural country was left divided into separate ethno-territories.

Under the Dayton Accords, Bosnians were promised the right to return home. But this was complicated by the fact that many houses had been destroyed, while others were occupied by the very people who had forcibly displaced them.

Bosnians returning home after the war were confronted by damaged and destroyed homes.
Mike Abrahams/In Pictures Ltd./Corbis via Getty Images

By the summer of 2004, the UNHCR, the United Nations agency coordinating returns after the peace agreement, announced that it had achieved 1 million returns. What became evident, however, is that “minority returns” – that is, people returning to places where they would be a minority community – were limited. Many returnees reacquired their old property after a struggle but promptly sold it to build a life elsewhere among people of their own ethnicity.

Cross-ethnic trust was largely shattered by wartime experiences.

Incompatible horizons

The first decade was peak liberal international statebuilding. An international high representative charged with “civilian implementation” of the Dayton Accords centralized control over military and intelligence functions at the state level. A central state border service and investigations agency was created. So too were a central state court, state-level criminal codes and an indirect taxation authority to unify indirect tax collection and finance state institutions.

Bosnia’s trajectory, though, stalled in 2006 when the high representative stepped back from state building. In April 2006, a package of constitutional amendments designed to streamline Dayton by strengthening central state institutions fell two votes short in the state Parliament.

Surprisingly, the package was not blocked by parties from Republika Srpska, traditional obstructionists, but by former Prime Minister Haris Silajdžić’s Bosniak-dominated party. This failure set the stage for a decade of polarization and stalemate.

Silajdžić campaigned for abolition of the entities – Republika Srpska and the Bosnian Federation – and the creation of a single united Bosnia. Republika Srpska’s leading politician, Milorad Dodik, answered by floating the prohibited idea of an entity independence referendum.

With the high representative largely passive, Bosnia was stalemated between incompatible horizons, each side strong enough to block but too weak to prevail.

Dodik turned referendum talk in Republika Srpska into a steady repertoire of threat, while casting central state institutions in Sarajevo as rotten, artificial and destined to fail. In the process, Dodik and his family got rich, creating a classic patronal power network across Republika Srpska.

With the media thoroughly divided by wartime allegiances, the public sphere was filled with incendiary rhetoric.

The word “inat” is a shared idiom across Bosniaks, Croats and Serbs. It denotes stubborn defiance, a combination of narcissism and spite. Politics increasingly rewarded those who could perform “inat” more vividly than their rivals.

Central state institutions in Bosnia did not collapse but became sclerotic. Procedures multiplied, confidence thinned and decision-making settled into a theater of anticipatory vetoes, where the point was less to implement a program than to keep imagined endpoints alive – the creation of a unified nation on one side, an independent Republika Srpska on the other – and to make the other side feel the pain of their impossibility.

A country on the brink

A decade of stalemate slowly evolved into a condition of permanent crisis.

In November 2015, Bosnia’s Constitutional Court ruled that the marking of Jan. 9 as “Republika Srpska Day” – a celebration of virtual independence – was discriminatory and illegal under human rights law.

Dodik, the Republika Srpska’s de facto leader, responded by organizing an extralegal referendum whose result asserted that the Republika Srpska population wanted the date retained.

Defiance evolved into active subversion of the constitutional order and provisions of Dayton. The Republika Srpska parliament passed laws that directly challenged central state institutions built in the first postwar decade. With weak enforcement capacity, the Bosnian state was unable to command compliance.

When in 2021 a new high representative was appointed over Russian objections, Dodik rejected his authority outright. By then, Bosnia was routinely described as “on the brink” of war.

Russia’s full-scale invasion of Ukraine in 2022 saw Dodik side firmly with Moscow. He visited Russian President Vladimir Putin in Moscow frequently. The Republika Srpska media relayed Russian propaganda, featuring correspondents reporting live from Russia’s front lines.

Russian President Vladimir Putin meets with the president of Republika Srpska, Milorad Dodik, on Feb. 21, 2024.
Sergei Bobylyov/AFP via Getty Images

Meanwhile, people and institutions in the Bosnian Federation aligned with Ukraine and the West. A giant geopolitical rift ran through the country: two entities, two different realities.

In February 2025, the drama peaked when Bosnia’s Constitutional Court barred Dodik from political life. Predictably, he rejected the top court’s authority, and a standoff ensued. Dodik hired figures close to the Trump administration, such as Rudy Giuliani, to lobby on his behalf. By the end of October 2025, they had succeeded in getting U.S. sanctions on Dodik removed in exchange for his agreeing to leave the Republika Srpska presidency.

The ugly peace endures

To distant observers, Bosnia may register as a success story because it has not returned to war. But the peace forged at Dayton bound Bosnia in a straitjacket that has kept it divided since.

Ethnonationalism and crony capitalism have thrived while many Bosnians have left or aspire to do so.

Yet, unloved as they may be today, the Dayton Accords preserved Bosnia. They stopped a war, enabled freedom of movement, permitted economic revival, regularized elections, revived cultural life and allowed more than 1 million people to exercise their right of return.

As peace agreements go, the Dayton Peace Accords weren’t the worst – but they are far from the best.


Gerard Toal received funding from the US National Science Foundation in the past for research on Bosnia-Herzegovina.

Adis Maksić does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Dayton Peace Accords at 30: An ugly peace that has prevented a return to war over Bosnia – https://theconversation.com/the-dayton-peace-accords-at-30-an-ugly-peace-that-has-prevented-a-return-to-war-over-bosnia-268424

Violent extremists wield words as weapons. New study reveals 6 tactics they use

Source: The Conversation – Global Perspectives – By Awni Etaywe, Lecturer in Linguistics | Forensic Linguist Analysing Cyber Terrorism, Threatening Communications and Incitement | Media Researcher Investigating How Language Shapes Peace, Compassion and Empathy, Charles Darwin University

Words are powerful tools. Violent extremists know this well, often choosing their phrasing extremely carefully to build loyalty among their followers. Wielded just so, words can do enormous harm.

Because their words are chosen so deliberately, researchers can look for patterns, trends and red flags. What exactly do extremists say that builds followings, incites hatred and violence, and can ultimately lead to deadly attacks?

Our research looking at the rhetoric of the extremists behind some of recent history’s worst terror attacks sheds light on this question. We’ve identified six key tactics terrorists use to mobilise people behind their cause.

If we can spot these tactics, we can dismantle the language and protect people and communities from radicalisation.

Divide and conquer

In previous work, we examined the language of far-right incitement in the Christchurch shooter’s 87-page manifesto.

Our latest work analysed jihadist texts. These included speeches by al-Qaeda’s late leader Osama bin Laden after September 11, and statements by Islamic State’s former leader Abu Bakr al-Baghdadi in the organisation’s magazine.

We used linguistic analysis to focus on how language was used strategically to both reduce and accentuate cultural differences. We examined how inciters use words to create bonds and obligations to mobilise violence.

We found two main types of incitement messages: those that strengthen connections in the group to build a shared purpose, and those that separate the group from outsiders and paint others as enemies.

This kind of messaging can divide society and make people strongly identify with the group. As a result, following the group’s rules – even extreme actions – can feel like proof someone truly belongs and is loyal.

But in violent extremism, commands alone are often insufficient to inspire violence or mobilise support. So how do extremists use these underlying strategies to get people to act?

6 rhetorical tactics

Once violence has been established as a moral duty by isolating the group, there are six key techniques extremists deploy.

1. Weaponise difference

Extremists don’t just label outsiders as different. They frame them as immoral and dangerous. “Us” versus “them” becomes the backdrop for later calls to action.

Inciters link loyalty and honour to threats from outsiders. Osama bin Laden urged violence against pro-US Arab governments, calling them “traitor and collaborator governments […] created to annihilate Jihad”.

The Christchurch shooter, Brenton Tarrant, attacked nongovernmental organisations supporting immigrants, calling them “traitors”. He called immigrants “anti-white scum” and compared them to a “nest of vipers” that must be destroyed.

Dehumanising outsiders strengthens group bonds and can have deadly consequences.

2. Evoking heroes and icons

Extremists use famous people, places or events to make their audience feel part of a bigger story. Names like “Saladin” or places like “Hagia Sophia” and “Londinium” link followers to icons or past struggles, making them feel like defenders or avengers.

Tarrant said:

this Pakistani Muslim invader now sits as representative for the people of London. Londinium, the very heart of the British Isles. What better sign of the white rebirth than the removal of this invader?

3. Repurposing religious texts

Extremists use not religion itself but twisted, decontextualised versions of religious texts to justify violence.

Quoting God or religious figures makes the message seem legitimate and frames violence as a moral or spiritual duty. This strengthens followers’ loyalty and belief that violent acts serve “our” shared values.

Tarrant quoted Pope Urban II, who launched the First Crusade, while al-Baghdadi misquoted Allah.

4. Tailored grievances and inflammatory language

Inciters tailor grievances before audiences voice them. Words like “humiliation”, “injustice” or “cultural loss” help bind followers to a common cause.

Osama bin Laden spoke of Muslims living in “oppression” and “contempt”, while the Christchurch shooter warned of “paedophile politicians” and that immigration would “destroy our communities”.

Naming and labelling unites followers and divides outsiders.

5. Metaphors and messages of kinship

Osama bin Laden hailed his audience through metaphor as “soldiers of Allah”, while describing enemies “under the banner of the cross”. Such contrasts intensify loyalty and hostility at once.

On the other hand, kinship terms pull people in. Words like “brothers”, “sisters”, “we” and “our” make strangers feel like family. Calling followers “our Muslim brothers” turns political duty into a personal, moral duty – like protecting family.

Osama bin Laden used familial terms to build loyalty among followers.
Maher Attar/Getty

Tarrant did this too. His line “why should you have peace when your other brothers in Europe face certain war?” links violence to family safety and future generations.

By contrast, “they” and “them” mark outsiders as non-kin. That sharp us-versus-them grammar strips empathy and makes exclusion or harm easier to justify.

6. Coercion into violent actions

In addition to commands, recommendations, or warnings that explicitly instruct someone to do something, there’s also coercion. It makes violence feel like care for the group.

Extremists do this by framing violence as duty. Phrases like “it is permissible” in jihadist texts shift violence from taboo to obligation, as in “it is permissible to take away their property and spill their blood”.

They also frame the outgroup as an existential threat. This justifies preemptive violence as self-defence or necessity, as in Tarrant’s “mass immigration will disenfranchise us, subvert our nations, destroy our communities, destroy our ethnic binds […]”.

What can be done with this research?

Extremist rhetoric does not just exist online. It echoes in protests, forums and political debates.

The “Great Replacement” theory, once confined to extremist manifestos, now surfaces in mainstream anti-immigration protests.

ASIO has warned the “promotion of communal violence” is rising, with politically motivated violence “flashing red” to authorities.




Read more:
How Australia’s anti-immigration rallies were amplified online by the global far right


Countering extremism means understanding its tactics. Policymakers, educators and community leaders can help by identifying and deconstructing these tactics if they encounter them.

Teaching critical literacy is also key so communities can spot and resist coercion.

We can also create counter-messages that affirm belonging without fuelling polarisation.

Extremist language hijacks shared values, turning them into obligations to hate and harm. Stopping violence before it starts means dismantling this language through education, transparency and proactive communication.


Awni Etaywe is a Lecturer in Linguistics at Charles Darwin University, Australia, and a researcher specialising in forensic linguistics, focusing on countering violent extremism, threatening communication, and incitement to hatred and violence.

ref. Violent extremists wield words as weapons. New study reveals 6 tactics they use – https://theconversation.com/violent-extremists-wield-words-as-weapons-new-study-reveals-6-tactics-they-use-266053

Toilets can make Africa’s roads safer, according to this new study

Source: The Conversation – Global Perspectives – By Festival Godwin Boateng, Senior Research Associate, University of Oxford

Travelling on Africa’s roads comes with many challenges. The biggest is arriving at your destination safely. The continent is one of the hotspots of global road trauma: it accounts for about one quarter of the world’s road traffic deaths, despite having less than 4% of the world’s vehicle fleet.

The situation in sub-Saharan Africa is particularly dire. Road crashes affect this region more than any other in the world. Its road fatality rate of 27 per 100,000 people is three times Europe’s average of 9 and well above the global average of 18.

Then there’s Africa’s road infrastructure. Despite rising investment in road development in recent years, the quality of roads in many African countries remains generally low. This has been captured in research reports, the World Economic Forum’s surveys and the International Monetary Fund’s cross-country road quality ranking.

Crashes and poor roads are not the only things that can make travelling a less-than-pleasant experience. Another is a lack of toilets. You are in deep trouble if nature calls while you are travelling on Africa’s roads. When planning roads and mobility, the authorities rarely include access to adequate, safe and clean toilets.

In 2020 a public interest lawyer, Adrian Kamotho Njenga, successfully sued some authorities in Kenya, compelling them to provide toilets for travellers.

It is not a uniquely African problem. Similar challenges exist in the US and the UK.

The difference is that in those places, researchers are building knowledge about the problem to influence and demand support for change.

I am a senior researcher in mobility governance at the Transport Studies Unit of the University of Oxford. My research interests include toilet access within mobility systems. In a recent paper, I drew attention to the road safety benefits of toilets.

I argue that enhancing drivers’ reasonable and reliable access to toilets can yield road safety benefits in ways that are comparable to enforcing laws against drunk or fatigued driving.

I searched academic databases such as Scopus and reviewed several papers. I found that improving toilet access for drivers was rarely researched as a road safety strategy in Africa. But it can enhance safer driving by reducing driver distraction and other unsafe driving practices that lead to road traffic crashes.

Road traffic crash losses in Africa are immense. The African Union has lamented that they drain an estimated 2% of member states’ GDP annually. Bringing the problem under control will require investing in a wide range of interventions, including unconventional ones – such as making it easy for drivers to “go” while on the road.

Road safety benefits of toilets

Driving while pressed for the bathroom can be a torturous experience and a significant distraction. It could make drivers a danger to themselves and other road users by diverting their attention away from the road and traffic conditions. The physical urgency can affect their judgment and reaction to dangerous situations.

The distraction and the urgency can make the driver impatient, and inclined to start speeding, tailgating, or trying reckless manoeuvres to get to the nearest place where they can ease themselves.

Research has shown that people who cannot urinate when their bladder is full experience cognitive or attention impairment that is equivalent to staying awake for 24 hours.

The cognitive deterioration associated with the extreme urge to void is also equivalent to having a blood alcohol concentration of 0.05%. This matches or exceeds the blood alcohol limits that Tunisia (0.05%), Sudan and Mauritania (0%), Morocco (0.02%), Mali (0.03%), Madagascar (0.04%) and other African countries impose on drivers.

All this suggests that driving while pressed for the bathroom is as dangerous as drunk or fatigued driving. It also implies that enhancing access to toilets can yield road safety benefits comparable to enforcing laws against drunk or fatigued driving.

Toilets should be integrated within road developments and mobility systems.

Time to invest in toilet access within mobility systems

For starters, governments on the continent can build more public toilets. Africa is one of the global hotspots of toilet poverty: the World Health Organization says some 779 million people on the continent do not have reasonable and reliable access to adequate, safe and clean toilets. Building more public toilets can help address general toilet poverty on the continent, as well as toilet access within mobility systems.

Refreshingly, in Ghana, for example, private developers are investing in rest stops along highways. These social road transport infrastructures give commuters places to relax, access goods and services, and socialise during a journey break. They often have toilets that travellers pay to access. Governments can explore ways to support these private provisions to expand and become more affordable.

Rest stops are often located on the outskirts, however, while most drivers and other road users operate in cities. When in need of a toilet while out and about, some drivers and other urban commuters are likely to use the facilities available in fuel stations, hotels, restaurants, banks, coffee shops, hair salons and other establishments.

Not much is known about their cost, safety, cleanliness and location, or the embarrassment associated with using them. Researchers will have to investigate these issues and share the findings with the public.

When more people are aware of the issues, there could be a shift in thinking to demand and support better access to toilets as part of mobility policy.


Festival Godwin Boateng is affiliated with the American Restroom Association (ARA).

ref. Toilets can make Africa’s roads safer, according to this new study – https://theconversation.com/toilets-can-make-africas-roads-safer-according-to-this-new-study-269297

As AI leader Nvidia posts record results, Warren Buffett’s made a surprise bet on Google

Source: The Conversation – Global Perspectives – By Cameron Shackell, Adjunct Fellow, Centre for Policy Futures, The University of Queensland; Queensland University of Technology

Fortune Live Media, CC BY-NC-ND

The world’s most valuable publicly listed company, US microchip maker Nvidia, has reported a record $US57 billion in revenue for the third quarter of 2025, beating Wall Street estimates. The chipmaker said revenue will rise again, to $US65 billion, in the final quarter of the year.

The better-than-expected results calmed global investors’ jitters following a tumultuous week for Nvidia and broader worries about the artificial intelligence (AI) bubble bursting.

Just weeks ago, Nvidia became the first company valued at more than $US5 trillion – surpassing the other “magnificent seven” tech companies: Alphabet (owner of Google), Amazon, Apple, Tesla, Meta (owner of Facebook, Instagram and WhatsApp) and Microsoft.

Nvidia’s stock was up more than 5%, to $US196, in after-hours trading immediately following the results.

Over the past week, news broke that tech billionaire Peter Thiel’s hedge fund had sold its entire stake in Nvidia in the third quarter of 2025 – more than half a million shares, worth around $US100 million.

But in that same quarter, an even more famous billionaire’s firm made a surprise bet on Alphabet, signalling confidence in Google’s ability to profit from the AI era.

Buffett’s new stake in Google

Based in Omaha, Nebraska, in the United States, Berkshire Hathaway is a global investing giant, led for decades by 95-year-old veteran investor Warren Buffett.

Berkshire Hathaway’s latest quarterly filing reveals the company accumulated a $US4.3 billion stake in Alphabet over the September quarter.

The size of the investment suggests a strategic decision – especially as the same filing showed Berkshire had significantly sold down its massive Apple position. (Apple remains Berkshire’s single largest stock holding, currently worth about $US64 billion.)

Buffett is about to step down as Berkshire’s chief executive. Analysts are speculating this investment may offer a pre-retirement clue about where durable profits in the digital economy could come from.

Buffett’s record of picking winners with ‘moats’

Buffett has picked many winners over the decades, from American Express to Coca-Cola.

Yet he has long expressed scepticism toward technology businesses. He also has form in getting big tech bets wrong, most notably his underwhelming investment in IBM a decade ago.

With Peter Thiel and Japan’s richest man, Masayoshi Son, both recently exiting Nvidia, it may be tempting to think the “Oracle of Omaha” is turning up just as the party is ending.

But that framing misunderstands Buffett’s investment philosophy and the nature of Google’s business.

Buffett is not late to AI. He is doing what he’s always done: betting on a company he believes has an “economic moat”: a built-in advantage that keeps competitors out.

His firm’s latest move signals it sees Google’s moat as widening in the generative-AI era.

Two alligators in Google’s moat

Google won the search engine wars of the late 1990s because it excelled in two key areas: reducing search cost and navigating the law.

Over the years, those advantages have acted like alligators in Google’s moat, keeping competitors at bay.

Google understood earlier and better than anyone that reducing search cost – the time and effort to find reliable information – was the internet’s core economic opportunity.

Google founders Sergey Brin and Larry Page in 2008, ten years after launching the company.
Joi Ito/Wikimedia Commons, CC BY

Company founders Sergey Brin and Larry Page started with a revolutionary search algorithm. But the real innovation was the business model that followed: giving away search for free, then auctioning off highly targeted advertising beside the results.

Google Ads now brings in tens of billions of dollars a year for Alphabet.

But establishing that business model wasn’t easy. Google had to weave its way through pre-internet intellectual property law and global anxiety about change.

The search giant has fended off actions over copyright and trademarks and managed international regulatory attention, while protecting its brand from scandals.

These business superpowers will matter as generative AI reshapes how we search and brings a new wave of scrutiny over intellectual property.

Berkshire Hathaway likely sees Google’s track record in these areas as an advantage rivals cannot easily copy.

What if the AI bubble bursts?

Perhaps the genius of Berkshire’s investment is recognising that if the AI bubble bursts, it could bring down some of the “magnificent seven” tech leaders – but perhaps not its most durable members.

Consumer-facing giants like Google and Apple would probably weather an AI crash well. Google’s core advertising business sailed through the global financial crisis of 2008, the COVID crash, and the inflationary bear market of 2022.

By contrast, newer “megacaps” like Nvidia may struggle in a downturn.




Read more:
Could a ‘grey swan’ event bring down the AI revolution? Here are 3 risks we should be preparing for


Plenty could still go wrong

There’s no guarantee Google will be able to capitalise on the new economics of AI, especially with so many ongoing intellectual property and regulatory risks.

Google’s brand, like Buffett, could just get old. Younger people are using search engines less, with more using AI or social media to get their answers.

New tech, such as “agentic shopping” or “recommender systems”, can increasingly bypass search altogether.

But with its rivers of online advertising gold, experience back to the dawn of the commercial internet, and capacity to use its platforms to nurture new habits among its vast user base, Alphabet is far from a bad bet.


Disclaimer: This article provides general information only and does not take into account your personal objectives, financial situation, or needs. It is not intended as financial advice. All investments carry risk.


Cameron Shackell works primarily as an Adjunct Fellow at The University of Queensland and Sessional Academic at QUT. He also works one day a week as CEO of a firm using AI to analyse brands and trademarks.

ref. As AI leader Nvidia posts record results, Warren Buffett’s made a surprise bet on Google – https://theconversation.com/as-ai-leader-nvidia-posts-record-results-warren-buffetts-made-a-surprise-bet-on-google-270140