OpenAI gets set to go public: can we entrust the financial markets with ChatGPT and AI?

Source: The Conversation – France – By Frédéric Fréry, Professeur de stratégie, CentraleSupélec, ESCP Business School

The OpenAI offices in San Francisco (California) when it was established in 2015. HaeB/Wikimedia, CC BY-SA

OpenAI, the creator of ChatGPT, is gearing up to launch its initial public offering (IPO) this year. This financial manoeuvre would mark a pivotal shift for a project originally designed for the “common good” towards a market-driven logic. Established in 2015 amid growing anxiety about artificial intelligence (AI), and founded by Sam Altman and Elon Musk, the company adopted a non-profit structure and made no secret of its goal: to develop AI that is “beneficial to humanity” and to prevent it from remaining in the hands of a few dominant players.

This ambition distinguished it from tech giants like Google, Microsoft, Meta, and Amazon, which were built on proprietary models and rent-seeking effects.

In contrast, OpenAI intended to champion general public interest by emphasising open research and sharing knowledge. However, this orientation – symbolised by its name – quickly collided with a structural constraint: the astronomical cost of generative AI.

Massive costs

Unlike traditional software, where marginal costs tend towards zero (for example, the millionth copy of Windows costs Microsoft nothing), generative AI requires massive infrastructure.

Every interaction mobilises computing resources, energy, and specialised equipment. A standard ChatGPT query, consisting of one question and one answer, costs between $0.01 and $0.10. Similarly, generating a high-definition image can cost between $0.10 and $0.20. While these amounts seem negligible in isolation, they become staggering when scaled to the billions of daily queries seen in 2026.
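The order of magnitude is easy to check from these figures. A minimal back-of-the-envelope sketch, using the per-query estimates above and an assumed illustrative volume of two billion daily queries (the exact volume is not disclosed):

```python
# Back-of-the-envelope serving cost, using the article's per-query estimates.
COST_PER_QUERY_LOW = 0.01      # USD, low estimate for a text query
COST_PER_QUERY_HIGH = 0.10     # USD, high estimate
DAILY_QUERIES = 2_000_000_000  # assumed illustrative volume

low = COST_PER_QUERY_LOW * DAILY_QUERIES
high = COST_PER_QUERY_HIGH * DAILY_QUERIES
print(f"Daily serving cost: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
print(f"Annualised:         ${low * 365 / 1e9:.1f}B to ${high * 365 / 1e9:.1f}B")
```

Even at the low end of the estimate, inference alone runs to tens of millions of dollars a day.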

This is explained by the underlying infrastructure, particularly the Graphics Processing Units (GPUs) supplied by players like Nvidia. These chips can cost tens of thousands of dollars to purchase and several dollars per hour via cloud access.

OpenAI, like its competitors, depends on tens of thousands of these GPUs running continuously in massive data centres. According to some estimates, the necessary investments will reach hundreds of billions of dollars by the end of this decade.

As early as the late 2010s, it became clear that a purely non-profit model could not meet such capital intensity. This is why OpenAI adopted a hybrid status in 2019, allowing it to raise funds while maintaining control through a foundation. It was a first foray into the market economy, albeit one tempered by the ambition to resist investor demands.

Brutal acceleration with ChatGPT

However, at the end of 2022, the chatbot ChatGPT radically changed the game, attracting 100 million users in just two months, before surpassing 900 million weekly users by early 2026.

OpenAI’s revenue surged from approximately $200 million (€173.15 million) in 2022 to over $10 billion (€8.65 billion) in 2025 – a more than fiftyfold increase in three years.

This exponential growth was accompanied by the implementation of a business model with multiple revenue streams. For individuals, OpenAI offers paid subscriptions (ranging from $20 to $200 per month). However, the bulk of the revenue comes from enterprises, via subscriptions priced between $25 and $60 per user per month. A company with 10,000 employees thus represents several million dollars in annual revenue.
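The arithmetic behind that last claim is straightforward; a quick sketch using the per-seat price range quoted above:

```python
# Annual contract value for one large enterprise customer,
# using the article's per-seat subscription range.
SEATS = 10_000
PRICE_LOW = 25   # USD per user per month
PRICE_HIGH = 60  # USD per user per month

annual_low = SEATS * PRICE_LOW * 12    # low end of the range
annual_high = SEATS * PRICE_HIGH * 12  # high end of the range
print(f"Annual contract value: ${annual_low:,} to ${annual_high:,}")
```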

Corporate money

OpenAI additionally bills for the use of its models by companies that integrate them directly into their own solutions. Every use is metered, often on a massive scale. An application processing a million queries a day can generate tens of thousands of dollars in monthly billing.
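As an illustration of how metered usage adds up, here is a sketch under assumed token prices (the per-token rate and tokens-per-query figure below are hypothetical assumptions, not OpenAI’s published price list):

```python
# Metered API billing sketch under assumed, hypothetical pricing.
QUERIES_PER_DAY = 1_000_000
TOKENS_PER_QUERY = 1_000       # assumed average, prompt + completion
USD_PER_MILLION_TOKENS = 1.00  # assumed blended rate

daily_bill = QUERIES_PER_DAY * TOKENS_PER_QUERY / 1_000_000 * USD_PER_MILLION_TOKENS
print(f"Daily bill:   ${daily_bill:,.0f}")
print(f"Monthly bill: ${daily_bill * 30:,.0f}")
```

At these assumed rates, a million queries a day lands in the tens of thousands of dollars per month.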

Finally, a growing portion of revenue comes from strategic agreements, notably with Microsoft, which integrates OpenAI technologies into its products under the Copilot brand.

It is the sum of these flows – subscriptions, licences, third-party usage, and partnerships – that allowed OpenAI to reach approximately $1 billion in monthly revenue in 2025. Yet, this commercial rise masks an intrinsic economic fragility.

A gigantic cash-burning machine

Despite sharply rising revenues, OpenAI remains structurally loss-making. In the first half of 2025, the company reportedly generated approximately $4.3 billion in revenue while recording losses of between $7 billion and $13 billion – roughly $1.2 billion to $2.2 billion in losses every month. In total, cumulative losses could exceed $140 billion (€121.19 billion) between 2024 and 2029.
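Dividing the reported half-year loss range by six months gives the implied burn rate (a sketch over the article’s reported figures):

```python
# Implied monthly burn from the reported first-half 2025 loss range.
H1_LOSS_LOW = 7e9    # USD, low end of the reported range
H1_LOSS_HIGH = 13e9  # USD, high end
MONTHS = 6

monthly_low = H1_LOSS_LOW / MONTHS
monthly_high = H1_LOSS_HIGH / MONTHS
print(f"Implied monthly losses: ${monthly_low / 1e9:.1f}B to ${monthly_high / 1e9:.1f}B")
```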

This drift is explained by the very nature of OpenAI’s business model, where every interaction incurs a cost, on top of gargantuan investment needs. Beyond infrastructure, research and development (R&D) is a major expense. To stay in the technological race in an increasingly competitive environment, OpenAI reportedly invested nearly $16 billion in R&D in 2025 alone.

To this is added the sometimes extraordinary cost of human resources. While base salaries for the most in-demand AI experts range from $250,000 to $700,000 per year, their total compensation – including stock and bonuses – frequently exceeds $1 million, and in some cases $10 million a year. Here again, bidding wars with competitors like Meta force OpenAI to match these offers for fear of seeing its key talent vanish.

Nearing bankruptcy?

In short, OpenAI’s business is not enough to cover its costs, to the point that some analysts suggest that at this rate, it could be forced to file for bankruptcy as early as 2027. Recourse to external financing is therefore indispensable to cover these losses.

To sustain its growth, OpenAI has already raised approximately $58 billion since its inception, including more than $13 billion from Microsoft. In 2025, an exceptional funding round reportedly raised up to $40 billion more, pushing its valuation to several hundred billion dollars.

At the end of March 2026, a new $122 billion funding round – notably involving Amazon ($50 billion), Nvidia, and SoftBank ($30 billion each) – brought the valuation to $852 billion (€737.6 billion). Yet, these amounts remain insufficient given the requirements.

Industrial dependency

This dependency on industrial partners appears particularly problematic. Microsoft provides OpenAI with cloud infrastructure via Azure, while Nvidia plays a key role upstream by supplying GPUs. Much as in the Gold Rush era, when shovel sellers grew rich at the prospectors’ expense, in the AI sector it is the infrastructure providers who are making a fortune, not the model designers.

In practice, every AI query generates revenue for infrastructure providers, amounting to a form of “invisible tax” captured upstream.

In 2025, Nvidia generated nearly $73 billion in net profit on approximately $130 billion in revenue, and its stock market valuation is 1.5 times that of the entire Paris stock exchange!

Governance missteps

OpenAI’s economic tensions have spilled over into its corporate governance. The hybridisation of a public interest mission with private financing mechanisms resulted in a complex structure. A non-profit foundation controls a for-profit “public benefit corporation”, which is funded by investors and tasked with raising capital and developing activities – all while theoretically remaining subordinate to the foundation’s public interest mission. This construction, designed to avoid purely financial logic, quickly fuelled tensions between different stakeholders.

Elon Musk’s departure in 2018 was the first signal of a strategic disagreement. In 2020, several researchers left OpenAI to found Anthropic, citing differences over safety and governance. However, it was primarily the crisis of November 2023 that fully revealed the system’s fragilities, when the board of directors suddenly announced the firing of Sam Altman, citing a lack of transparency in his communications.

Within hours, the situation spiralled into an open crisis. Nearly all employees threatened to leave the company if Altman was not reinstated. Microsoft, the main partner and investor, publicly supported Altman and even discussed the possibility of hiring him and his teams. Faced with this pressure, the board was forced to reverse its decision within days. Sam Altman was reinstated, and the board’s composition was profoundly overhauled. This episode highlighted internal tensions, specifically the difficulty of making divergent logics coexist within the same company: an ethical stance, industrial imperatives, and investor demands.

Intensifying competition

In addition to these internal constraints, competitive intensity is particularly fierce.

Google, whose researchers invented the transformer architecture underlying generative AI, is making rapid progress with Gemini. Anthropic, with Claude, has established itself in certain segments, particularly programming, while emphasising safety.

China’s DeepSeek has claimed to use less expensive processors. France’s Mistral AI advocates a frugal approach and European digital sovereignty. In a sign of this shifting landscape, Apple – which initially partnered with OpenAI to power certain Siri features with ChatGPT – has chosen to replace it with Gemini.

In this context of ecosystem reorganisation, OpenAI’s position, while still central, is being challenged. Intensifying competition reinforces the need for ever-greater financial resources.

The stock market: lifeline or mirage?

OpenAI’s Initial Public Offering (IPO) is presented as a response to these constraints: a way to fund massive investments and consolidate a weakened competitive position. An IPO could raise between $50 billion and $100 billion by selling 10% to 20% of the capital. Such an operation would constitute one of the largest in the history of financial markets.

However, this transformation involves delicate trade-offs. A listed company is subject to profitability and transparency requirements that may clash with the experimental nature of artificial intelligence. Added to this is the persistent dependence on Microsoft and Nvidia, which limits the company’s strategic autonomy.

Most importantly, there is no indication that an IPO would suffice to resolve OpenAI’s structural problems. At best, without a significant shift in the business model, it would only delay its bankruptcy by a few years. The economic model of generative AI remains fundamentally unstable today.

A question beyond OpenAI

Beyond the case of OpenAI, one can legitimately question the current functioning of an economy dominated by tech giants. Artificial intelligence is establishing itself as an essential infrastructure whose effects far exceed the economic sphere. For some analysts, control over AI now carries the same geostrategic importance as the possession of nuclear weapons.

Consequently, a civilisational question arises: can we entrust the development and direction of such a technology solely to financial markets? Can we imagine Elon Musk or Mark Zuckerberg personally owning the equivalent of one or more atomic bombs? OpenAI’s IPO will not provide the answer alone. However, it will constitute one of the first large-scale tests.




The Conversation

Frédéric Fréry does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research organisation.

ref. OpenAI gets set to go public: can we entrust the financial markets with ChatGPT and AI? – https://theconversation.com/openai-gets-set-to-go-public-can-we-entrust-the-financial-markets-with-chatgpt-and-ai-280943

Generative AI will not destroy your job, but it will profoundly change your work

Source: The Conversation – France (in French) – By Hugo Spring-Ragain, PhD candidate in economics and mathematical economics, Centre d’études diplomatiques et stratégiques (CEDS)

Artificial intelligence is not so much destroying jobs as profoundly changing the skills required to do them. Confusing jobs with skills risks producing errors in the policies designed to support the transitions under way.


Every great technological wave has produced its share of contradictory predictions about employment. Artificial intelligence (AI) is no exception. But before asking how many jobs AI will create or destroy, we need to agree on what it actually automates. Answering that question requires distinguishing three notions that public debate regularly conflates: the job, the skill and the task.

Over two centuries, the great waves of automation have followed a remarkably stable logic: steam, electricity and industrial robotics displaced repetitive physical tasks while sparing non-routine cognitive work. This empirical regularity was formalised by Autor, Levy and Murnane in 2003 as the “task polarisation hypothesis”.

A persistent illusion

Automation eats away at middle-tier jobs – those of skilled blue-collar workers and office employees performing routine tasks – but spares both extremes: on one side, non-routine manual tasks such as plumbing or care work; on the other, non-routine cognitive tasks such as analysis, advice or expert writing. The latter formed the core of skilled service-sector professions, and the conviction had taken firm hold that they would remain out of reach.






That conviction rested on a conceptual confusion that must be cleared up first. It was not the job of lawyer or financial analyst that was protected; it was a set of specific tasks making up that job which had so far resisted automation. The distinction between these three levels is fundamental.

A job is a position held within an organisation, with a contract, a salary and a job description. A skill is a cognitive or technical capacity that can be deployed in several professional contexts. A task is a precise, delimitable action whose automatability at a given cost can be assessed. It is at this third level that the current transformation is really playing out, and it is precisely this level that public debate ignores.

A rupture in the long history of industrial capitalism

Generative AI constitutes a rupture in this long history. For the first time since industrialisation, skilled cognitive tasks – writing, document analysis, synthesis, producing first drafts – are directly exposed. Eloundou, Manning, Mishkin and Rock estimate that around 80% of the US workforce could see at least 10% of their tasks affected by large language models, and that this exposure rises with wage level. This is the exact opposite of the pattern observed in all previous waves.

The analytical framework developed by Acemoglu and Restrepo allows us to go further. Their model distinguishes two opposing effects produced by any wave of automation:

  • the displacement effect, first: workers lose tasks to the machine, which mechanically reduces the demand for labour and weighs on the wages of the affected groups;

  • the reinstatement effect, second: automation creates new tasks in which human value is decisive, generating compensating demand.

The long history of industrial capitalism can be read as a succession of these two effects, with the second generally ending up compensating for the first.

The case of translation shows very concretely how displacement and reinstatement combine. Generative AI can produce a first draft in another language in a few seconds, displacing part of the work previously done by human translators to the machine. But this automation simultaneously reinstates other tasks, or reinforces their importance: checking for mistranslations, adapting to cultural context, harmonising terminology, quality control and final validation.

A potential imbalance

What is worrying about generative AI is the potential imbalance between these two dynamics. Displacement is happening at a speed that labour markets and training institutions are struggling to absorb, while reinstatement remains largely to be built.

However, the most important phenomenon is not sectoral but internal to occupations themselves. In its Employment Outlook, the OECD shows that the professions most exposed to generative AI are precisely those with high cognitive density: finance, law, consulting, higher education. Unlike previous waves, which hit rural areas and industrial basins, exposure is now strongest in large metropolitan areas and among highly qualified workers – an unprecedented geographical and social reversal.

Redistributing tasks

This reversal plays out concretely at the level of the task.

Within the same financial analyst or lawyer position, some tasks migrate to AI (producing an executive summary, generating a first contract analysis, synthesising a literature review), while others are mechanically revalued: defining the relevant analytical framework, assessing the quality of automated reasoning, detecting a factual error in an output, assuming legal or ethical responsibility for a decision. It is not jobs that are disappearing. It is bundles of tasks that are being redistributed between humans and machines, transforming from within what an employer expects of a skilled employee.

This redistribution of tasks has a direct implication for the skills that will actually be valued in the years ahead, and it overturns part of the conventional wisdom on vocational training.

Training workers to use AI in the instrumental sense – mastering a tool, writing effective prompts, getting to grips with an interface – is useful in the short term, but insufficient if the skill really in demand tomorrow is not producing with AI, but supervising and critiquing what it produces.

A training challenge

Yet effectively supervising an AI output requires exactly what short, technical training courses struggle to develop: a solid general culture to detect a substantive error, argumentative capacity to assess the coherence of a line of reasoning, and a knowledge of cognitive biases to identify the blind spots of an automated analysis. These are the competencies that the education sciences group under the term metacompetencies: learning to learn, to exercise critical judgement, and to mobilise knowledge in novel situations.


The paradox then becomes the following: as AI automates routine knowledge tasks, it revalues precisely what general education and humanities curricula have long cultivated, and what debates on employability have tended to disparage in favour of more immediately measurable technical skills.

Not out of nostalgia for the humanities, but out of pure economic logic. If the machine produces the text, the analysis and the synthesis, the marginal value of the human lies in the capacity to judge whether the text is true, whether the analysis is relevant to the real context, and whether the synthesis serves the intended goal.

The Conversation

Hugo Spring-Ragain does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research organisation.

ref. L’IA générative ne détruira pas votre emploi mais elle va le changer profondément votre métier – https://theconversation.com/lia-generative-ne-detruira-pas-votre-emploi-mais-elle-va-le-changer-profondement-votre-metier-279911

Dogs, cats and mental health: how attached are the French to their pets?

Source: The Conversation – France in French (3) – By Tiphaine Blanchard, lecturer in geriatrics and veterinary nutrition, École Nationale Vétérinaire de Toulouse; Inrae

A growing body of studies shows that living with a dog or a cat can have positive effects on physical health and mental well-being. These effects stem in particular from owners’ strong attachment to their pets. An original study examined this bond and its main characteristics.


In France, pets are not mere companions: they actively contribute to our well-being. But what does this bond reveal about our mental health and our ways of life?

A recent study conducted at the École nationale vétérinaire de Toulouse has, for the first time, measured the attachment of the French to their dogs and cats.

Animals, allies of our physical and mental health

The benefits of an animal’s presence for human health are well established. Numerous studies show that it is associated with a reduced cardiovascular risk and that it can help lower stress, particularly among people with a strong emotional bond to their pet.

Dog owners, for example, walk more, have a more active social life and present a lower risk of depression. Among older people, studies suggest that an animal’s presence helps preserve cognitive capacities, such as memory, as well as morale, while among children it fosters the learning of empathy and responsibility.

This bond is not only behavioural: it also touches our emotional needs. In a society marked by loneliness, anxiety and an ageing population, a dog or cat sometimes becomes genuine psychological support, capable of creating a sense of stability and usefulness in daily life.

However, this relationship, beneficial in many cases, can also become a source of emotional distress. Some people develop an anxious attachment to their animal, characterised by excessive worry at the prospect of separation or when the animal falls ill.

Among older people, even without hyper-attachment, forced separation from their animal on hospitalisation or entry into a care home often amounts to real trauma, so deeply is the animal woven into their emotional balance and daily life.






The human-animal relationship as a therapeutic tool

The positive effects of the human-animal bond are now being harnessed in several hospital and medico-social programmes.

The presence of animals in medico-social institutions (such as care homes) can encourage interaction, evoke memories and help temporarily break residents’ feelings of loneliness. Offering animal-assisted mediation in psychotherapy for adolescents has also proved beneficial. Finally, in some paediatric units, notably in oncology, specially trained animals accompany patients during treatment to reduce anxiety and improve well-being during hospitalisation.

More recently, several French police stations have introduced kittens to soothe victims of violence, an approach inspired by schemes already in place abroad. In the United States, for instance, specially trained dogs are deployed in some police stations and courts to accompany victims during hearings. To date, there are no scientific data assessing their impact in this specific context, but the testimonials are positive. Benefits have also been reported among professionals: a study of Canadian police officers showed that the presence of dogs in their work environment was perceived as reducing stress and improving well-being.

This theme deserves dedicated research to examine the extent to which contact with an animal helps restore a sense of safety after trauma.

These increasingly widespread initiatives all rest on the same idea: strengthening human health by drawing on the relationship with the animal. Understanding the complex links between well-being, dependence and vulnerability requires a reliable tool, which until recently did not exist in a French version.

A first scale to measure attachment to pets in France

To better understand these relationships, little studied in France, an international reference tool was translated into French: the Lexington Attachment to Pets Scale (LAPS). This tool quantifies the emotional attachment between an owner and their animal through 23 items (for example: “My pet understands when I am sad”).

Nearly 1,900 French dog and cat owners responded to the survey.

How is attachment to a pet measured?

The LAPS yields an attachment score from 0 to 69, with a higher score reflecting a stronger attachment of the owner to their animal.

In France, dog owners obtained a median score of 58.5, compared with 52 for cat owners. That is higher than in England, Denmark or Austria!

Marked differences according to owner profile

The study highlights several factors influencing the attachment score:

  • Women score higher than men, a result also observed in other countries.

  • People living without children also score higher, their animals sometimes playing the role of substitute family figures.

  • Dog owners score higher than cat owners, perhaps because of more active interaction.

  • People with higher levels of education show lower scores, perhaps because they tend to express their emotional attachment less.

These trends reflect deep social realities. In a society where loneliness is rising, families are recomposing and remote work is spreading, the animal plays a growing emotional role. It soothes, structures daily life and meets a need for connection that human relationships do not always satisfy.

When our dogs and cats become our attachment figures

In psychology, attachment theory describes our fundamental need for security and reassurance from an “attachment figure” – often a parent, a partner, or… an animal.

Dogs, more demonstrative, offer an emotional interaction close to that of a child: they solicit, react and express joy. Cats, more independent, sometimes call for a more “projective” form of attachment, in which the owner interprets their signs of affection.

These differences explain why dogs obtain higher attachment scores: they actively answer the human need for connection and reciprocity. But among all owners, the attachment is very real.

What next? When the animal’s health influences the owner’s

The validated French version of the LAPS is already being used in other research.

One such study looks at the impact of canine osteoarthritis on the daily life of dog owners. When an animal suffers, the whole household often bears the consequences. You can take part in this new study by completing the online questionnaire.

The questionnaire is open to all dog owners, whether or not their dog has osteoarthritis, to better understand how dogs’ health affects that of their owners and to improve joint care for the dog and its family.

The Conversation

Tiphaine Blanchard does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than her research organisation.

ref. Chiens, chats et santé mentale : à quel point les Français sont-ils attachés à leurs animaux de compagnie ? – https://theconversation.com/chiens-chats-et-sante-mentale-a-quel-point-les-francais-sont-ils-attaches-a-leurs-animaux-de-compagnie-280326

3 reasons the war between the US, Israel and Iran is headed for a frozen conflict

Source: The Conversation – Global Perspectives – By Jessica Genauer, Academic Director, Public Policy Institute, UNSW Sydney

With a shaky ceasefire in place between the US, Israel and Iran – and little progress on talks to resolve the complex issues at the heart of the war – where is this conflict going?

The most likely scenario is a frozen conflict.

A frozen conflict is not static; it is an unresolved war that continues at a low level, below the threshold of full-scale combat.

This typically occurs when a comprehensive political agreement cannot be reached, such as the fighting in eastern Ukraine from 2014 until Russia’s full-scale invasion in 2022. This conflict was considered frozen despite the deaths of some 14,000 military personnel and civilians and persistent cyber and information warfare.

Even if negotiations resume this week in Pakistan and an eventual agreement is reached, there are still three reasons we believe this is headed towards a frozen conflict, not a comprehensive peace agreement.

1) Trump equates ceasefires with an end to war

US President Donald Trump’s approach to foreign policy has shown that he does not treat ceasefires as pauses in which to negotiate substantive political issues. Rather, he declares a ceasefire a US success, then moves on to the next global issue.

Trump claims to have ended ten wars, including the current conflict with Iran and Israel’s war in Lebanon. A closer look reveals that in most of these conflicts, a shaky ceasefire has held while substantive issues remain unresolved.

This has left frozen conflicts in place with ongoing tensions. In India and Pakistan, which engaged in a brief armed conflict last year, for example, there is a continued risk of renewed hostilities. And a lasting peace between Thailand and Cambodia after last year’s border spats remains elusive.

Yet, Trump has walked away from these conflicts and claimed an end to war as soon as a cessation of major hostilities was in place.

2) Asymmetric wars are difficult to resolve

The current war is asymmetric because of the huge difference in military strength between the US and Israel on one side, and Iran on the other.

Iran has intentionally used asymmetric tactics to counter the US’ overwhelming military power, including targeting infrastructure in Persian Gulf countries not involved in the war and closing the Strait of Hormuz to commercial shipping traffic to disrupt the global economy.

Research shows asymmetric wars are inherently protracted and often open-ended. As a result, they are more likely to end in a frozen conflict than an enduring political settlement.

The reason for this is simple. The weaker actor cannot win a conventional military battle against the stronger actor. So, it tries to exhaust the more powerful nation with political, economic and psychological pressure, forcing a withdrawal and cessation of hostilities.

This is what we are seeing now between the US and Iran. Trump is feeling these rising pressures and is pursuing a ceasefire, while trying to claim a US victory.

Iran, meanwhile, has agreed to a ceasefire in a bid for survival as the weaker actor, rather than a commitment to an enduring end to the conflict.

This is reminiscent of the Taliban in Afghanistan, who survived 20 years in a frozen conflict with the US before taking back control of the country when the US withdrew.

3) There’s been no focus on the more complex issues

Neither the US nor Iran appears committed to any long-term resolution of the underlying tensions at the root of the conflict. Key among these is the question of Iran’s nuclear program.

For Washington, the first round of peace talks in Pakistan on April 11–12 was aborted because Iran refused to compromise on its nuclear program. And Iran has long argued it has an inalienable right to enrich uranium for civilian purposes.

The negotiations that led to the multilateral 2015 deal on Iran’s nuclear program – the Joint Comprehensive Plan of Action – took 20 months to conclude. Trump withdrew from the agreement three years later, calling it a “horrible one-sided deal”.

Given this history, a quick and clear resolution to this complex dispute is unlikely.

Some analysts believe the US and Iran could announce a partial agreement that would leave many of the technical aspects to be ironed out later.

But Trump is now facing an opponent that is unlikely to become more accommodating with respect to its long-term “nuclear rights”. In fact, Iran has already shown its resolve by asserting a new geostrategic normal, closing the Strait of Hormuz and disrupting the global economy.

What a frozen conflict means for the region

The Iran-US war may conclude with a series of ceasefires, but will likely remain a frozen conflict due to these underlying tensions. This means more threats from both sides over Iran’s nuclear program and periodic flare-ups of violence between Israel and Iran, the US and Iran, or both.

This is much like the frozen situation in Gaza. Last October, Israel and Hamas agreed to a ceasefire under Trump’s 20-point peace plan. The first phase of the plan was then largely implemented, leading to a hostage-prisoner exchange, a decrease in Israel’s heavy bombardments of Gaza and a resumption of aid into the strip.

However, there has since been no progress on the more complex questions of the post-war governance of Gaza, redevelopment of the strip and – crucially – the disarmament of Hamas fighters. As a result, Israel has refused to completely withdraw its troops and violence has continued.






From a historical perspective, the frozen conflict in the Koreas is also instructive. The war ended with an armistice in 1953 and no peace treaty, effectively leaving North and South Korea at war to this day. This led to the North developing an underground nuclear weapons program that continues to pose a threat to the world.

Similarly, the decades-long frozen India-Pakistan conflict has led to an arms race (including the development of nuclear weapons on both sides), instability in South Asia and periodic flare-ups of violence.

A frozen conflict between the US, Israel and Iran will no doubt create similar long-term instability in the Middle East, including a possible regional arms race and more flare-ups of violence, particularly around control of the Strait of Hormuz.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. 3 reasons the war between the US, Israel and Iran is headed for a frozen conflict – https://theconversation.com/3-reasons-the-war-between-the-us-israel-and-iran-is-headed-for-a-frozen-conflict-280996

From Fleabag to Vladimir: why has breaking the fourth wall become so common?

Source: The Conversation – Global Perspectives – By Alex Munt, Associate Professor, Media Arts & Production, University of Technology Sydney

Netflix

In the opening moments of Vladimir, Netflix’s new erotic drama series, the protagonist M (Rachel Weisz) is sprawled on a couch in her negligee, writing in her notepad. She leans towards the camera, then stares into the lens to address you, the viewer, on your couch.

In film and television, this is called “breaking the fourth wall”. It is a ploy of metafiction: a kind of self-aware mode of storytelling.

The fourth wall is the invisible plane through which the camera observes the action. To break the fourth wall is to play with – or sever – audiences’ suspension of disbelief, and abandon the norms of screen narration.

The history of breaking the fourth wall is almost as long as the history of cinema itself. Edwin S. Porter’s film The Great Train Robbery ends with an outlaw firing his gun directly towards the camera. Back in 1903, audiences ducked for cover.

Nearly a century later, director Martin Scorsese paid homage to Porter in Goodfellas (1990), in a scene where mobster Tommy DeVito (Joe Pesci) fires his gun directly at the screen. Here, the fourth wall break is used in an existential moment for Henry Hill (Ray Liotta) – rather than for pure shock.

In fact, the shock value of the technique has diminished over time, as audiences have become more media literate.

Making the invisible visible

The fourth wall breaks of early cinema quickly disappeared with the industrialisation of the medium. The rise of the American studio system privileged some film techniques over others.

The “Classical Hollywood” style – think Casablanca (1942) – was built on a premise of invisibility, from the carefully directed eye-lines of actors, to “continuity” editing that stitched together different camera angles.

The French New Wave broke with these conventions. In Breathless (À bout de souffle, 1960), Jean-Luc Godard opted for jump-cuts and “direct address”. This is when a character speaks to, or looks directly at, the viewer.

Today, direct address is used widely across genres, from Barbie (2023), to Marvel’s Deadpool films (2016, 2018, 2024), and Jane Austen adaptations such as Persuasion (2022).

On television, we’ve seen women creators and characters explore the power of direct address in a re-calibration of the “male gaze”.

One example is Phoebe Waller-Bridge’s confessions to the camera in Fleabag (2016–19). Cinematographer Tony Miller notes how creative camera choices work in conjunction with direct address to make viewers “complicit in her [character’s] journey”.

The direct gaze

A fourth wall break is not always dialogue-driven. In Persona (1966), film auteur Ingmar Bergman directed his actors to stare deep into the abyss of the camera lens, delivering existential malaise.

This direct gaze has been remediated for streaming programs, including in the intense close-up shots of Carmy (Jeremy Allen White) in the final season of The Bear (2025), and knowing glances from the troubled Rue (Zendaya) in Euphoria (2019–26).

Fourth wall breaks can also be graphic. In Pulp Fiction (1994), Mia Wallace (Uma Thurman) traces a square of light on the screen with her finger instead of calling Vincent Vega (John Travolta) a “square”.

And in Michael Haneke’s films Funny Games (1997, 2007), a home invader literally “rewinds” the story when a victim kills his accomplice. These kinds of fourth wall breaks call attention to the invisible membrane of the screen.

As filmmaker Mark Cousins attests in The Story of Film: An Odyssey, the medium has advanced over time through innovation and the recycling of techniques such as fourth wall breaks.

Is breaking the fourth wall back in vogue?

With the dominance of literary adaptations for the screen (and IP-driven screen stories in general) we’re likely to see more cases of direct address, as screenwriters seek to creatively refashion texts for the screen. Vladimir, for instance, is an adaptation of Julia May Jonas’ 2022 novel of the same name.

While breaking the fourth wall may have lost its shock value, it remains a bold storytelling device which, if done well, can set apart one screen production from another.

Actor Matt Damon recently pointed out how streamers such as Netflix are discussing the potential to reiterate “the plot three or four times in the dialogue” of a film, to account for people who scroll on their phone while listening to “background TV”.

Having a character speak directly to a distracted audience may be one way to return their gaze to the bigger screen.

Hyper-reality in unscripted TV

Breaking the fourth wall sits within a wider envelope of “metafictional” storytelling.

As screen culture becomes increasingly aware of its own machinery, unscripted genres such as reality TV are not merely breaking the fourth wall, but abandoning the conceit of separation entirely. The boundaries between cast, camera, story producers and audience have become increasingly porous.

Alex Baskin, executive producer of the long-running series Vanderpump Rules (2013–25), describes this as “hyperreality”. In the wake of Scandoval, the cheating scandal of Tom Sandoval, the reality TV cast started to intervene in the producers’ narrative arcs by speaking on camera about audience feedback, and providing meta commentary on their own “edits”.

When Ariana Madix (Sandoval’s ex) refused to film with him, it disrupted plans for a neat season finale based on his apology. Madix left the set, effectively ending the entire show. Fellow cast member Tom Schwartz called it a “plot twist”. Unsurprisingly, Scorsese is a fan of the show.

Meta and hyperreal storytelling will continue to rise as screen creators seek a point of difference in a congested market – serving an ever-distracted audience.

The Conversation

Alex Munt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. From Fleabag to Vladimir: why has breaking the fourth wall become so common? – https://theconversation.com/from-fleabag-to-vladimir-why-has-breaking-the-fourth-wall-has-become-so-common-280716

New plastic film covered in thousands of tiny pillars can tear apart viruses on contact

Source: The Conversation – Global Perspectives – By Elena Ivanova, Distinguished Professor, Physics, RMIT University

Transparent acrylic samples coated in the new material. RMIT

Think of how many surfaces you touch every day, from your kitchen bench to the hand rail on the bus or train, your work desk and your phone screen.

A range of nasty viruses and other germs can easily spread via these surfaces. The typical route of infection involves touching a contaminated surface – and then touching your eyes, nose or mouth.

Of course, it’s possible to clean surfaces with chemical products. But these can wear off, harm the environment or contribute to antimicrobial resistance, where germs no longer respond to medicines because of repeated exposure.

In our new study, published in Advanced Science, my colleagues and I created a thin plastic surface with tiny nanoscale features, billionths of a metre in size, that mimic the nanotextured surface of insect wings and can physically rupture viruses – specifically human parainfluenza virus type 3 (hPIV-3).

This new material offers a cheap, scalable way to make surfaces such as phones and hospital equipment far less likely to spread disease.

The downsides of disinfectants

Current methods for combating the spread of viruses via surfaces usually involve cleaning to remove dirt and disinfection to remove hidden contaminants.

Disinfectant must remain wet for some time to kill germs. This can be challenging in some real-world settings.

Surfaces can also be recontaminated quickly when other people touch them. And disinfection often involves the use of harsh chemicals which can damage equipment and the environment.

Scientists have previously developed antiviral surface modifications. These strategies often involve incorporating materials such as graphene or tannic acid and other natural agents into personal protective equipment such as masks, gloves, goggles, hard hats, and respirators.

These coatings are efficient. But they can pose a risk to human health. They can also be environmental hazards due to chemical leaching and have declining effectiveness over time as the potency of the active ingredients weakens.

A decade-long journey

Our journey toward a virus-bursting surface started more than a decade ago.

We initially aimed to engineer a surface so smooth that germs would simply slide off. Surprisingly, we discovered the opposite. Bacteria adhere quite readily to nanoscopically smooth surfaces.

Nature offers examples of bacteria-free surfaces. Take the water-repelling wings of cicadas and dragonflies. While these wings are self-cleaning, they act less by repelling bacteria and more as natural bactericides. That is, they kill bacteria. Natural bactericides are nature-derived “agents” that can kill germs, rather than inhibit their growth.

Experiments my colleagues and I did with gold-coated wings confirmed this bacteria-killing effect is not driven by surface chemistry, but rather by topography.

The physical nanostructures on the surface essentially force bacterial cell membranes to stretch and rupture.

Our earlier work showed that nanospike-covered silicon effectively destroys viruses on contact. But its rigid nature restricts its use on complex objects.

Microscope image of a virus particle being ruptured by the nanotextured surface. RMIT

A lightweight, flexible and virus-bursting material

In this new study, we addressed this problem by creating a virus-bursting material that was lightweight, cost-effective and flexible.

This material is a thin acrylic film covered in thousands upon thousands of ultra-fine pillars. The nanotextured material is smooth to the touch. However, the nanopillars grab and stretch a virus’s outer shell until it ruptures, killing the virus through mechanical force.

Lab tests with hPIV-3, which causes bronchiolitis and pneumonia, found up to 94% of virus particles were ripped apart or fatally damaged within an hour of contact with this material.

We discovered the distance between nanopillars matters far more than their height, with tightly packed pillars about 60 nanometres apart working best.

The mould we used to create this material can be easily scaled to provide wide-ranging industrial opportunities, from food packaging to public transport systems to hospital equipment and office desks.

Nanostructured surfaces are built for durability. But they are susceptible to the same physical, chemical, and environmental stressors as any other material, and will degrade over time.

Much remains to be discovered in the search for germ-free surfaces. But these nanotextured surfaces have enormous potential in the fight against viruses and provide an alternative to traditional, chemical-based methods.

The Conversation

Elena Ivanova does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. New plastic film covered in thousands of tiny pillars can tear apart viruses on contact – https://theconversation.com/new-plastic-film-covered-in-thousands-of-tiny-pillars-can-tear-apart-viruses-on-contact-280919

Girls in bands: two 90s rock icons on romance, ruthlessness and boring men

Source: The Conversation – Global Perspectives – By Liz Evans, Adjunct Researcher, English and Writing, University of Tasmania

In the 1990s, Melissa Auf der Maur played bass in two of the decade’s most notable rock bands: Hole and Smashing Pumpkins.

Her new book, Even the Good Girls Will Cry: My 90s Rock Memoir, documents this wild chapter in her life, as she navigates the heightened emotions and destructive excesses of Courtney Love and learns to wrangle the controlling influence of Billy Corgan (of Smashing Pumpkins).

Ten years earlier, Kim Gordon’s career began during New York’s post-punk era. Her book, Girl In A Band (2015), recently re-released as a tenth anniversary edition, chronicles her time with Sonic Youth, and charts her role within an alternative scene that shaped and influenced independent music culture across the United States.

By the early 1990s, she was something of a godmother figure for Auf der Maur’s generation of women.


Review: Even the Good Girls Will Cry: My 90s Rock Memoir – Melissa Auf der Maur (Atlantic); Girl in a Band – Kim Gordon (Faber)


Introverted individuals with distinct perspectives on the peculiar challenges of the rock industry, Gordon and Auf der Maur appear to have benefited from a stability missing in many of their peers.

As bass players, they avoided the spotlight until embarking on their solo projects. And with backgrounds in the visual arts, they each had access to independent creative identities away from the stage, which no doubt minimised the pitfalls of rock stardom.

As a music journalist throughout the 1990s, I interviewed many of the people in their stories, including Courtney Love, Billy Corgan, Dave Grohl, Thurston Moore and Kurt Cobain. I witnessed their complex politics and fierce power plays, some still ongoing.

Once or twice, I was personally impacted.

For example, a very high profile singer tried to persuade other women not to speak to me for my first book because my magazine profile of her was badly altered by a male editor. Another musician blamed me for publishing personal details in an interview after I’d given her full copy approval.

It was, as Auf der Maur says, a time of “messy humanity”, low-level trust, and delicate egos.

It was also, as she points out, the last analogue decade: a time before the music scene was transformed by the internet, when rock culture appeared to be finally embracing powerful women and female agency. But in my experience, and as each of these books reveals, it was never that straightforward.

Musical callings and romantic dreams

An artistic free spirit raised in Montreal by unorthodox, creative parents, Melissa Auf der Maur first saw Hole and Smashing Pumpkins within a fortnight of each other in July 1991. Both bands played at the legendary punk club, Les Foufounes Électriques, where she worked part-time while studying photography.

More impressed by Hole’s calm, centred bassist, Jill Emery, than the band’s infamous, volatile frontwoman, Auf der Maur was truly starstruck by Corgan. She introduced herself to him after he was bottled on stage by her roommate. Watching him play, she experienced a “new musical calling”. Four months later, she travelled to a Pumpkins show in Vermont and spent the night “soul fucking” him in his motel room.

“I am you and you are me,” she remembers Corgan saying to her, in what sounds like a rock-starry show of narcissism towards an impressionable fan. But for Auf der Maur, who occasionally veers into grandiose claims, the encounter was a “romantic dream come true” and “a turning point […] musically, personally and cosmically”.

More tellingly perhaps, though she describes Corgan as eventually exerting “more influence on my life than anyone other than my parents”, Auf der Maur didn’t question his patriarchal power dynamic for many years – despite being in one of rock’s most notorious female-fronted bands.

But Corgan’s hold extended to his former girlfriend, Courtney Love, long after she left him for Kurt Cobain. When Hole’s second bassist, Kristen Pfaff, died from an overdose, it was Corgan who decided Auf der Maur should be the replacement.

The Hole drama

Life in Hole was nothing if not dramatic – and Auf der Maur’s account harbours no illusions about the difficulty of working with a grieving, traumatised widow.

But her empathy and compassion keep her story from collapsing into the critical terrain so often provoked by the outspoken, uncontained Love, who attracted considerable vitriol, particularly after becoming involved with Kurt Cobain.

Auf der Maur is also more forgiving than drummer Patty Schemel, who paints a harsher picture of the ambitious, tempestuous singer in her brilliant memoir, Hit So Hard. But she was very aware of her marginalised position as Love’s “good girl” in the autocratic Hole. She had no artistic freedom in the band and eventually grew frustrated with her unfulfilling situation.

After five years in Love’s orbit, Auf der Maur wanted out. By 1998, the singer’s Hollywood film career had catapulted her into a different stratosphere of celebrity culture, further widening the existing chasm between her and her band members.

And the glamour and excitement of big festival billings and hit records were not enough to prevent the bass player from feeling ultimately “disillusioned and disconnected”.

Her decision to quit was compounded when she fell in love with ex-Nirvana drummer Dave Grohl, now with the Foo Fighters. His long-running rift with Love had previously made him “off-limits”.

But before she was released from her restrictive contract with Hole, Corgan was back in touch, asking her to replace D’arcy Wretzky in Smashing Pumpkins for a year of intensive touring. Wretzky’s sudden departure is glossed over in the book as a “touchy subject”, though she played with the Pumpkins for 11 years, and was reputedly a friend of Auf der Maur.

I remember Wretzky as a quietly intelligent individual with a striking stage presence, but Corgan’s domineering personality and punishing work ethic apparently proved too much for her.

And Auf der Maur makes no secret of Corgan’s ruthlessness. At her first rehearsal, he issued her with three rules: “One, you can’t make a mistake. Two, you can’t get sick. And three, there are no days off.”

Away from Grohl, who was also on the road with his band, she was bound to a gruelling schedule at the hands of a man she now saw as a moody overachiever. In response, she began to change her perspective.

Corgan’s partner at the time was the gifted photographer Yelena Yemchuk, who, Auf der Maur notes, had become “a bit of a kept woman”. Knowing Grohl wanted marriage and children, she watched with growing alarm as Yemchuk lived with “her beautiful talent trapped in the bell jar of Billy’s world”.

As the two women became close, together they realised they needed to “step out of the shadows of these bigger, more successful men” and forge their own paths.

With the culmination of the Pumpkins world tour in 2001, Auf der Maur was 29 and finally ready for a new direction. She left her relationship with Grohl and turned down Corgan’s invitation to collaborate on a new project. She finishes her book with a glimpse into her next chapter: motherhood, and a grounded life of artistic ventures in upstate New York.

It’s more of a beginning than an end.

Feminism and challenges with men

The first time I interviewed Kim Gordon was over the phone in 1990. At the time, she was the bass player with Sonic Youth, the seminal no wave band she co-founded with her husband, singer/guitarist Thurston Moore, in 1981. Hinting at what I suspected was sometimes a lonely situation, she told me that while the band’s relationship was essentially a beautiful one, her male colleagues could be “so non-communicative”.

Three years later, I had a second, longer conversation with Gordon in her New York apartment for my aforementioned book, during which she elaborated on her original theme. Being in a band with men could be challenging, she said, because “there are some really boring aspects to it” and “no matter how much of a new man someone thinks they are, they’re just not!”

Gordon’s experience is summed up by both the content and title of her acclaimed memoir. With a new foreword by her friend, the celebrated American writer Rachel Kushner, and an additional closing chapter in which Gordon reflects on the intervening decade, the latest version of the book is testament to its ongoing relevance for feminism, popular culture and music history.

Infused with the visceral, embodied sensuality of her artistic perspective, Gordon’s memoir details her upbringing in Los Angeles with her schizophrenic brother, Keller, whose moods clouded her early life, and whose death in 2023, aged 74, she recounts in the new edition.

It charts her pivotal move to New York as a 27-year-old in 1980, her involvement with the city’s post-punk arts and music scene, her relationship with Moore and their resulting career with Sonic Youth.

Crucially, it details her influence in the Riot Grrrl movement, and her side projects, Free Kitten, with best friend Julie Cafritz, and fashion label, X-Girl, with Daisy von Furth, all of which afforded her the female companionship she lacked in Sonic Youth.

‘Painfully protracted’ marriage breakdown

It also tells the more universal story of a painfully protracted marriage breakdown and a couple’s failed attempts to save their relationship, following Gordon’s discovery of Moore’s affair. The book refrains from specifying dates, but by the time she found out through texts and emails, her husband had been unfaithful for several years.

The woman in question, who is not named in the book, was Eva Prinz, who became Moore’s second wife in 2020. At the time of the affair, Prinz was married to her second husband. She had previously been involved with one of Sonic Youth’s collaborators.

An editor for an independent publisher, she had initially approached Gordon about a potential book project in the early 2000s, but Gordon had passed it on to Moore, with fateful consequences.

Sickened by Moore’s long-concealed infidelity with someone well known to their inner circle, Gordon was left to navigate the devastating impact on her family, her career and her sense of self. Given the pivotal nature of this episode, it seems fitting that she starts her story here, at the end of a significant personal and professional era, with Sonic Youth’s final performance in 2011.

According to Gordon, this last appearance in São Paulo, Brazil, “was all about the boys”. Struggling to hide her misery, anxiety and anger while her ex regressed into an adolescent display of “rock star showboating”, she was tempted to verbalise her fury on stage. But she didn’t want to follow the unboundaried example of Courtney Love, who was then ranting and raving her way around South America on tour with Hole.

“I would never want to be seen as the car crash she is,” writes Gordon. “I didn’t want our last concert to be distasteful when Sonic Youth meant so much to so many people; I didn’t want to use the stage for any kind of personal statement, and what good would it have done anyway?”

Distance as power

Gordon is highly adept at balancing strong emotion with careful restraint. Throughout her book, she considers herself honestly, but thoughtfully. She conveys a quiet self-possession and enigmatic presence, writing as she speaks: with intelligence and a guarded openness. It’s how I remember her: warm enough to gift me a pair of John Fluevog sandals straight from her own closet, yet somehow always slightly removed. As Kushner says in her introduction to the memoir, “distance is the power of her performance”.

Now 72, Kim Gordon has been a touring musician for almost 40 years. Having made multiple forays into the worlds of fashion, art and film, since Sonic Youth she has launched two experimental bands with male collaborators, Body/Head and Glitterbust, been nominated for two Grammy awards, and released three highly acclaimed solo albums as a formidable frontwoman with an all-girl band.

These days, Gordon performs as if her life depends on it. With her second chapter well underway, she’s on fire – and cooler than ever. Let’s hope a second memoir is in the works.

The Conversation

Liz Evans does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Girls in bands: two 90s rock icons on romance, ruthlessness and boring men – https://theconversation.com/girls-in-bands-two-90s-rock-icons-on-romance-ruthlessness-and-boring-men-275942

Both the US and Iran are firing on commercial ships in the Strait of Hormuz. Are both sides acting lawfully?

Source: The Conversation – Global Perspectives – By Justin Bergman, International Affairs Editor, The Conversation

Over the past several days, there have been conflicting reports about the Strait of Hormuz. It’s difficult to know what’s happening from one moment to the next.

Iran said the waterway was open to commercial shipping again, then turned around and said it was closed.

Iran then fired at two Indian-flagged ships going through the strait, forcing them to turn around.

The next day, the US fired on an Iranian cargo vessel, which Tehran called a violation of the two countries’ temporary ceasefire and threatened retaliation.

What’s actually happening in the strait? Are both sides acting lawfully? We asked naval expert Jennifer Parker to explain.

What happened over the weekend?

There have been several key developments over the last 48 hours.

The first was the statement from US President Donald Trump and the Iranian foreign minister on social media that the Strait of Hormuz remained open. It was an interesting announcement because it was consistent with what the foreign minister had said at the beginning of the ceasefire a week and a half ago.

On Saturday, we saw a large number of tankers and cargo vessels move towards the top of the strait to follow what Iran has designated as a new passageway. Ships desperate to get out of the strait were evidently more confident they were safe to transit at that point.

The Joint Maritime Information Centre in Bahrain said 18 ships were able to transit, at least ten of them via the new Iranian-designated route, which lies north of the normal transit route.

However, the Iranian Revolutionary Guard Corps Navy (IRGC) then reportedly attacked a number of civilian merchant vessels. One was an Indian tanker that was on an approved list with the IRGC to travel through the strait.

This suggests the Iranian military may be at odds with the foreign minister’s statement, maintaining that the strait remains closed.

Is the US blockade legal?

Then, on Sunday, the US fired on an Iranian-flagged cargo ship in the Arabian Sea.

The US is blockading Iranian ports through what’s called a distant blockade. This means US Navy ships are not sitting right off Iran’s ports to stop vessels. Rather, they are positioned further back in the Gulf of Oman and the northern Arabian Sea, with a blockade line effectively drawn from the Iran–Pakistan border to around the Oman–UAE border.

The US Central Command has reported turning away a number of ships – at least 23 as of April 18.

When a ship approaches the blockade line en route to or from an Iranian port, the US Navy will radio the vessel and say it is not free to go through. Most ships will then turn around.

This is allowed in a lawful blockade under the law of naval warfare. Once a conflict has started, a blockade is lawful if it complies with certain provisions:

  • the blockade must be declared

  • it must be impartial, meaning it needs to apply to all ships

  • humanitarian goods must be permitted to go through

  • it must be effective, meaning you can’t declare a blockade, start doing it, and then not actually enforce it

  • it can’t close off neutral ports.

Many news reports have said the US is blockading the Strait of Hormuz. But it is actually blockading Iranian ports, not the strait. A blockade of the strait would be illegal because this would affect neutral ports in the Persian Gulf. Ships in an international strait enjoy unimpeded transit passage, which cannot be hampered or suspended by the coastal state.

Is the US permitted to fire on a cargo vessel?

The US says it warned the Touska, the Iranian-flagged vessel, to stop over a six-hour period.

If a vessel doesn’t comply with warnings like this, warning shots can then be fired, depending on your country’s rules of engagement. The country maintaining the blockade may also use “disabling fire” against the ship.

This is what the US claims happened – the US Navy destroyer fired on the Touska’s engine room to make it stop. My assessment is this is consistent with the law of naval warfare because the US Navy is enforcing an effective blockade. It also appears to have adhered to the principles of proportionality and necessity under international law.

The US also seized the ship, which is consistent with the law. In terms of the crew, the US has not announced what it intends to do with them. If the crew is non-Iranian, they would likely be released and repatriated. If the crew is Iranian, or if some of the crew are linked to the IRGC, they could be detained.

By contrast, based on current reporting, the ships fired on by Iran appear to have been neutral merchant vessels transiting an international strait. On the information publicly available, there is no indication they had become lawful military objectives.

If so, Iran’s attacks were not a lawful use of force, because these vessels were not lawful military objectives.

Neutral merchant vessels are generally considered civilian objects under the law unless, by their nature, location, purpose or use, they make an effective contribution to military action. Therefore, it’s not lawful to attack them.

There are some exceptions to that, including a merchant vessel seeking to breach a lawful blockade.

Where do things go from here?

The US is not saying it is in control of the strait; it is saying it is in control of the vessels going in and out of Iran, which is different.

Iran has claimed it’s in control of the strait since the war began. It has been attacking and threatening civilian, predominantly neutral vessels since then.

What I think we are seeing is a tussle for leverage to supercharge the negotiations between the US and Iran, should they continue this week in Pakistan.

The Conversation

ref. Both the US and Iran are firing on commercial ships in the Strait of Hormuz. Are both sides acting lawfully? – https://theconversation.com/both-the-us-and-iran-are-firing-on-commercial-ships-in-the-strait-of-hormuz-are-both-sides-acting-lawfully-281008

Slanguage: Why AI’s stylistic negation — ‘it’s not X, it’s Y’ — is both annoying and doesn’t work

Source: The Conversation – Canada – By Joshua Gonzales, PhD, Management, Lang School of Business and Economics, University of Guelph

If you spend any amount of time on LinkedIn, you’ll certainly have come across this type of phrasing: “This isn’t a job, it’s a calling” or “This isn’t marketing, it’s a movement” or “This isn’t a tool, it’s a paradigm shift.”

This sentence structure is saturating posts on the platform. It’s become one of the most recognizable patterns of AI-generated text: “It’s not X, it’s Y.”

If you’re like me, you find it annoying and scroll past as soon as you read it. Your exasperation is warranted. Negation can be a powerful literary device when used thoughtfully, but when unearned, it feels hollow.

That’s what AI slop — low-quality digital content generated by artificial intelligence, often with little or no human oversight — does: it turns previously useful markers into gobbledygook.

For most AI tropes currently in circulation, it’s enough to just ignore them. The negation form of AI slop, however, isn’t just annoying; it distorts how people process and remember information. Before you get the chance to absorb something meaningful, your attention is already anchored to what something is not.


Learning a language is hard, but even native speakers get confused by pronunciation, connotations, definitions and etymology. The lexicon is constantly evolving, especially in the social media era, where new memes, catchphrases, slang, jargon and idioms are introduced at a rapid clip.
The Conversation Canada’s series Slanguage dives into how language shapes the way we see the world and what it reveals about culture, power and belonging. Welcome to the wild and wonderful world of linguistics.


How the brain processes negation

There’s a reason this structure feels off. Cognitive psychologists have known for decades that negation doesn’t work the way speakers intend it to. When someone tells you what something isn’t, your brain doesn’t skip to the alternative. It processes the negated concept first.

This was demonstrated in a 2003 study. After reading negated information, readers’ mental models still retained the negated concept at short processing intervals. Negation didn’t function as an eraser. The concept entered the reader’s mind, and only with additional processing time and contextual support could the reader move past it.

Every time you read “This isn’t marketing,” for example, you process marketing before you can get to whatever the writer claims it actually is.

That would be manageable if it happened once, but that cognitive load compounds with repetition.

‘Don’t think about the white bear’

In a classic 1987 experiment, psychologist Daniel Wegner asked participants not to think about a white bear. They couldn’t.

Those told to suppress the idea mentioned it more than once per minute. Worse, participants who had first tried to suppress the thought later showed a rebound effect, thinking about white bears significantly more than participants who had been free to think about them from the start.

The effort of pushing a concept away made it stick even harder.

When your LinkedIn feed delivers dozens of posts built on the same negation-reframe structure, each one is a new instruction not to think about the thing the writer wanted you to forget.

The consequences go beyond annoyance. In a 2004 social psychology study examining how people encode negated information, researchers explained why some negations fail more than others.

When a negated phrase has an obvious, commonly inferred alternative, readers mentally replace it. For example, they can substitute “innocent” for “not guilty” or “warm” for “not cold.” Without a clear alternative, the original concept remains active with a negation tag attached, like a mental sticky note reading “not this.”

That sticky note can fall off quite easily. In the study, participants lost it more than a third of the time for concepts without clear alternatives, remembering the affirmed version instead.

Consider what that means for “This isn’t marketing, it’s a movement.” Marketing has no ready-made substitute for our mind to consider. What readers store is “marketing” with a tag that may or may not survive their scroll to the next post.

Scaling a cognitive problem

The problem is scale. A 2024 study on generative AI by economics and strategy researchers found that when people write with AI assistance, their outputs converge. Individual pieces may be more polished, but the collective pool of writing becomes more similar. AI-assisted texts were found to be roughly 10 per cent more alike than those written by humans alone.

Their study examined creative fiction, but the results have obvious implications for other forms of writing. When a rhetorical formula saturates an entire platform, it stops being one person’s stylistic habit and becomes a default frame through which ideas enter public conversation.

Right now, that frame often starts from a deficit. It emphasizes what something fails to be rather than what it offers.

The alternative is straightforward. Say what it is. Say what you built, what you believe, what you offer. It’s a better cognitive strategy.

Readers who encounter “I am a movement builder” store “movement builder.” Readers who encounter “This isn’t marketing” store “marketing” with a sticky note that’s already peeling off.

One formulation gives people something to remember. The other gives them something to forget, and psychology suggests it won’t work.


Joshua Gonzales does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Slanguage: Why AI’s stylistic negation — ‘it’s not X, it’s Y’ — is both annoying and doesn’t work – https://theconversation.com/slanguage-why-ais-stylistic-negation-its-not-x-its-y-is-both-annoying-and-doesnt-work-278967

Don’t just plant trees, plant forests to restore biodiversity for the future

Source: The Conversation – USA (2) – By John Parker, Senior Scientist in Community Ecology, Smithsonian Institution

A long-running experiment is testing tree mixes to develop the healthiest forests. Mickey Pullen/Smithsonian Environmental Research Center

Around the world, people plan to plant more than 1 trillion trees this decade in an ambitious effort to slow climate change and reduce biodiversity loss. But if the past is prologue, many of those planted trees won’t survive. And if they do, they could end up as biological deserts that lack the richness and resilience of healthy forests.

It doesn’t have to be this way.

The United Nations declared 2021-2030 the Decade on Ecosystem Restoration to encourage efforts to repair degraded ecosystems. Tree planting has become a centerpiece of that effort, championed by initiatives such as the Bonn Challenge and the Trillion Trees Campaign.

However, many tree-planting commitments have a critical flaw: They rely too heavily on monoculture plantations – vast areas planted with just a single tree species.

A grove of commercially grown poplar trees, planted in lines with little growing beneath them.
Mint Images via Getty Images

Monoculture plantations are generally optimized for a single output: wood. But these high-yield plantations are high risk and can be surprisingly fragile. When drought, pests or forest fires strike, entire monoculture plantations can fail at once. In one example, nearly 90% of 11 million saplings planted in Turkey died within three months due to drought and lack of maintenance.

Forests are more than just timber factories. They regulate water, store carbon, provide habitat for wildlife, cool the landscapes around them and even provide human health benefits.

Rather than gambling on a single species and hoping for the best, science now points to a smarter path that captures both ecological and economic benefits while minimizing risk: mixed-species plantings that mirror the biodiversity of a natural forest, ultimately creating forests that grow faster and are more resilient in the face of constant threats.

The long-running BiodiversiTREE study compares forest plots containing several tree species with single-species monocultures. The results, illustrated here, show that mixed-species plots (right) produce 80% larger trees compared with monocultures (left), resulting in denser canopy growth that creates cooler understory microclimates, leading to more abundant and species-rich communities of insects, spiders and birds.
Sergio Ibarra/Smithsonian Environmental Research Center

We are community and landscape ecologists at the Smithsonian Environmental Research Center. Since 2013, we and our colleagues have been rigorously testing this idea in a large, ecosystem-scale experiment called BiodiversiTREE. The verdict is striking: Trees in mixed forests don’t just survive – they outgrow their monoculture counterparts and support dramatically more biodiversity.

Trees with diverse neighbors grow larger

Thirteen years ago, we teamed up with volunteers to plant nearly 18,000 tree seedlings on 60 acres of fallow fields on the Smithsonian Environmental Research Center campus near the Chesapeake Bay.

We didn’t plant just a single species. We planted 16 different native species from all walks of tree life. Some were fast-growing timber species, some were mid-story species, and some were slow-growing species that might not reach full size for a century or more.

Some plots we planted with just a single species – homogenous rows of the same species over and over again. But others were planted with random mixes of either four or 12 species, reflecting the middle and upper ends of tree diversity in similar-sized areas of our local forests.

We asked a simple question: What would happen if we tried to mirror nature and plant a mixture of species instead of a monoculture?

A drone image shows some of the BiodiversiTREE plots, including monocultures, outlined in white, and mixture plantings, outlined in green.
Mickey Pullen/Smithsonian Environmental Research Center

The differences over a decade later are striking.

The monoculture plots – those that survived – resemble traditional plantation forestry that historically has dominated rural lands in the Southeast and Pacific Northwest in the U.S. They contain rows of tall, narrow trees with sparse canopies and little life below.

The mixed-species plots, by contrast, are layered, complex and dynamic, with foliage filling the canopy and a diversity of plants and animals thriving underneath.

These visual contrasts reflect real ecological gains. Trees grown in mixtures, including important timber species like poplar and red oak, are up to 80% larger than the same species when grown alone. Mixed plots supported fewer leaf pathogens, more abundant caterpillar communities that provide food for birds, and increased phytochemical diversity in their leaves. We hypothesize that these leaf chemicals, some of which deter animals from eating them, reduced browsing damage from hungry deer, ultimately leading to higher tree growth in the mixed plots.

Plots with several tree species also had much fuller, denser leaf canopies, leading to cooler, shadier conditions that help understory plants flourish and support up to 50% more insects, spiders and birds.

The fuller canopy of 12-species forest plots like the one above supports more insects and birds than the monoculture plots.
John Parker/Smithsonian Environmental Research Center
A sycamore monoculture plot at the BiodiversiTREE project provides little canopy cover.
John Parker/Smithsonian Environmental Research Center

This pattern isn’t unique to our site. The BiodiversiTREE project is part of TreeDivNet, a global network of large-scale experiments spanning more than 1.2 million trees and hundreds of species. Across continents and climates, the results are consistent: Forests with a mix of species tend to grow larger, store more carbon and better withstand stress from drought, pests and disease.

So why are monocultures still common?

Despite decades of evidence, mixed-species plantings remain relatively rare in practice. Most commercial forestry operations still rely on monocultures, and these plantations are counted toward international planting campaigns aimed at slowing climate change and reversing biodiversity loss.

The reasons are generally practical: Mixed plantings can be more complex to design, more expensive to establish and harder to manage. Crucially, until recently, there has been limited evidence that they can match or exceed the economic returns of conventional plantations.

Technician Shelley Bennett uses high-resolution GPS to lay out plots for an experiment at the Smithsonian Environmental Research Center in Maryland.
Regan Todd/Smithsonian Environmental Research Center

A new experiment at the Smithsonian Environmental Research Center called “Functional Forests” aims to bridge some of the gaps between science and practice. We’re developing intentionally designed combinations of trees to test whether specific mixtures of species can contribute ecological benefits while also providing timber and other services that humans need to support a thriving, sustainable economy.

Each of the 20 tree species in the Functional Forests project was chosen to provide one or more benefits, including timber, wildlife habitat, food for people, resistance to deer and climate resilience. But no single species provides all of these benefits.

Some of the nearly 200 plots will contain a single species, while others include carefully selected combinations of five species assembled based on the functions they provide. Some plots are protected from deer browsing, while others are left exposed.

The Functional Forests project includes trees with edible fruits like the pawpaw (Asimina triloba), one of 20 different tree species being planted there.
Jamie Pullen/Smithsonian Environmental Research Center

By comparing these approaches, we can test how different planting strategies perform across a range of goals, from timber production to food production and from biodiversity to climate resilience.

Landowners and communities have different priorities, whether that’s producing wood, supporting wildlife or creating forests that can withstand a changing climate. The idea behind Functional Forests is to design plantings that can deliver these multiple benefits all at once, rather than optimizing for just one, essentially leveraging the positive effects of biodiversity to achieve real-world goals.

Planting 1 trillion trees wisely

The stakes are high. Restoration has become a major global investment, with hundreds of billions of dollars already being spent annually. Getting it wrong means wasted resources and missed opportunities to address some of the most pressing environmental challenges of our time.

If the world is going to plant a trillion trees, we believe it needs to do more than just put seedlings in the ground. It needs to rethink what a forest should be.

The goal isn’t just to grow trees. It’s to grow forests that last.


John Parker receives funding from the National Science Foundation and the United States Department of Agriculture. He is affiliated with TreeDivNet, a global network of tree diversity experiments.

Justin Nowakowski receives funding from the National Aeronautics and Space Administration, Department of Defense, and through Maryland Environmental Service.

ref. Don’t just plant trees, plant forests to restore biodiversity for the future – https://theconversation.com/dont-just-plant-trees-plant-forests-to-restore-biodiversity-for-the-future-275803