Source: The Conversation – in French – By Frédéric Pugniere-Saavedra, Senior Lecturer in Language Sciences, Université Bretagne Sud (UBS)
Calling on linguistics to explore what it means to be a caregiver – one's place alongside the relative one supports, the way one names the disease affecting them, the bonds that tie them together – is the focus of an original participatory research project conducted by linguists with caregivers of people living with Alzheimer's disease.
A research project sometimes emerges from informal conversations, coincidences, practical needs, or unresolved questions and problems close to our hearts.
Such was the case for a study of caregivers of people diagnosed with Alzheimer's, carried out by a team of linguists who, through this project, stepped away from their usual focus (studying language in and for itself) to engage with sensitive demographic and social issues surrounding care and the end of life.
Nearly 9.3 million caregivers in France
Given the ageing of the population, family caregivers will necessarily be asked to play an ever larger role in supporting the activities of daily life.
In France, 9.3 million people report providing regular help to a relative living with a disability or a loss of autonomy. One French adult in six, and one minor in twenty, is affected (2023 figures from Drees, covering 2021).
And, according to projections, by 2030 one French person in five will be supporting a dependent relative, whether because of age, illness or disability. The most frequently reported form of help is moral support, followed by help with everyday activities and, lastly, financial assistance.
Some key figures on who caregivers are (source: Drees, February 2023):
8.8 million adults and 0.5 million minors aged 5 or over can be described as family caregivers;
between the ages of 55 and 65, nearly one person in four reports being a caregiver;
women are over-represented (up to age 75): among adult caregivers, 56% are women.
From the sociology of health to linguistics
This field of research is largely dominated by the sociology of health, which studies, for example, the gendered division of care tasks or the relationships between patient, doctor and caregiver – caregivers thereby becoming actors in "negotiated care" – and by social psychologists, who think about caregiving through notions such as exhaustion, burden and burnout.
Three facts convinced us that linguists had every legitimacy to work on this issue:
caregivers are exposed to a number of risks (excess mortality in the years following the onset of their relative's illness, dying before the person they care for, and so on);
family caregivers carry out a considerable amount of work, sometimes even more than a professional caregiver would;
caregivers may combine paid employment with several hours of care per day. Beyond two to three hours, they may have to reorganise their personal and professional lives to spend more time with their dependent relative.
As linguists, we wanted to understand how caregivers recognise themselves (or not) in this label; how they name (or avoid naming) the disease; the strategies they use to sidestep, for instance, the word "Alzheimer's", which carries negative connotations, when accompanying relatives affected by it; and the injunctions directed at them in the media, institutional and insurance spheres.
From a longitudinal study to participatory research
Methodologically, this research, launched in 2017, consisted of ninety-minute interviews conducted every six months to address our research questions. This led us to discuss intimate family situations, things left unsaid between parents and children or between spouses, the disease itself and its consequences for the family system. Each interview returned to these points but also covered the significant events that had occurred since the previous one.
Little by little, close ties formed between researcher and caregiver. Caregivers put the appointments in their diaries and reminded us when we were overdue. A relationship of trust developed, and the caregivers then gave us access to the tools they had devised to organise their daily lives and those of their relatives:
a diary in which the caregiver records medical appointments, home-care visits and meal deliveries, but also their state of mind, their mental load, the highlights of their day and their emotions;
family photos and films taken at Christmas, in which one can see, year after year, the effects of the disease's acceleration;
various devices, such as Post-it notes in different colours to distinguish, in the refrigerator, what should be eaten at breakfast, lunch or dinner, or to sort summer, winter and mid-season clothing;
access to the WhatsApp group conversations of a caregiver who keeps her circle informed of the signs that her husband's disease is accelerating.
Faced with this growing demand to take part in the research (implicit in some caregivers' words, explicit in others'), we changed the protocol and asked a professional photographer to work with us, capturing certain moments of caregiving in photographs or video and thereby helping to renew the imagery of old age and care.
All of them accepted and allowed the photographer to document a part of their private lives that had mattered in their journey as caregivers (an object dear to the patient, a room in the house, a device created by the caregiver, a spot in the garden, and so on).
Beyond these photographs, other caregivers offered to model their experience in a mind map charting the family's milestones over nearly ten years (from 2014 to 2023).
Infographic – The emotional journey of long-term caregivers
What the infographic shows
– From 2014 to 2021: the sisters (caregivers) of their mother (the person cared for) are confronted with the medical profession and its (sometimes) contradictory messages and emotions. They become aware that they are becoming caregivers;
– December 2021: their mother enters a nursing home (Ehpad), leaving them with a feeling of being stripped of responsibility even though they hold a family guardianship order;
– December 2023: their mother's health deteriorates, and certain aspects of her care are called into question.
A singular approach?
Recall that participatory research, according to the Houllier report, is defined as "those forms of scientific knowledge production in which non-professional-scientist actors, whether individuals or groups, take part actively and deliberately".
The caregivers explained the path that led them to this level of modelling.
Decoding the infographic
"We had noted work sessions in our diary, two dates I think, at least a month before our meeting, and we mostly worked the evening before and the afternoon before you arrived. And when we put these words down, we thought we needed to use different fonts and colours. For example, everything medical is boxed." […]
"When we represented relationships with arrows, it was because we were thinking of someone in particular, so those positive or negative emotions appeared; they come from the heart […] so all the professions, we met them, experienced them, and behind them there are people, faces." […]
"In the first graph, we were discovering a great many things we had to set up, learn, understand. Today, we are more in a routine, in something settled, and we are finding, as with so many things, that you constantly have to put your energy back into the task, to be vigilant and patient all the time."
This participatory research made caregivers visible and gave them the opportunity, through introspection, to reflect on what they do every day.
Indeed, the caregiving sisters behind the infographic told us:
"It also opened our eyes to quite a few things, and doing this assessment, in a way, allowed us to close a first chapter, to measure everything we had been through. Even if we already knew it, it was good to think back on it all; we probably wouldn't have done it if you hadn't asked us to."
And for linguistics, this project shows how rewarding it is to work on the relationship between language and the issues of ageing and dependency.
This research was funded by the CNSA under the call for projects "Handicap et perte d'autonomie – session 10" launched by IReSP/Inserm.
Opera, an art form and architectural type of European origin, spread to the Middle East from the 19th century onwards as a sign of imported modernity, before becoming a reinvented cultural symbol and an instrument of international influence. This shift from a Western model to local acclimatisation has played out differently depending on the country and the initiatives of its leaders, as opera, initially perceived as a foreign art, has gradually become an object of identity and diplomacy.
From the 2010s, initiatives such as Balcony Opera have taken opera "outside the walls", mixing Arab and Western repertoires. The contemporary prominence of singers such as Farrah El Dibany, who alternates European works with adaptations in Arabic, illustrates a back-and-forth between local traditions and standardised international programming.
Iran, by contrast, presents the image of an interrupted operatic modernity. From the late 19th century, a modern theatrical scene – nourished by translations of Molière as well as Russian and Caucasian influences – gradually became institutionalised, culminating in the golden age of the 1960s and 1970s with the Tehran Opera Company and the inauguration of Vahdat Hall in 1967. The Islamic Revolution of 1979 brought operatic activity to a halt. Since 2013, a partial renaissance seems to be taking shape in the form of Persian-language adaptations of imported works, testifying to artistic resilience.
In Israel, the history of opera is bound up with the construction of a national culture, as shown by the 1923 staging of La Traviata, sung in Hebrew in Tel Aviv at the instigation of the conductor and musicologist Mordecai Golinkin. The Israel Opera company later operated from 1945 to 1984, before the New Israeli Opera moved into the Tel Aviv Performing Arts Center in 1994.
Programming mixes the international repertoire with works on Jewish and biblical themes commissioned from Israeli composers. Productions such as The Passenger in 2012, whose story evokes the Holocaust, place memory and identity at the heart of local operatic life, combined with outreach programmes such as Children Opera Hours and Magical Sounds.
Opera houses as symbols of power
The wave of interest in opera in the Gulf states follows a different logic, in which the opera house becomes an instrument of soft power, tourist attractiveness and urban development. In Muscat, the capital of Oman, the Royal Opera House inaugurated in 2011 crowned a strategy begun by Sultan Qaboos in the 1980s with the creation of a national symphony orchestra. Set within an 80,000-square-metre complex including gardens, an exhibition hall, a gallery of musical instruments and retail space, the site offers an eclectic programme combining Western repertoires, Arab creations and world music, in co-production with major European houses.
In Dubai, the opening in 2016 of a modular 2,000-seat hall designed by Janus Rostock, shaped like a dhow (an Arab sailing vessel) and located in the Downtown district, combines a strong architectural identity with flexibility of use. It also acts as a driver for hotels, retail and event tourism. The same logic is at work in Qatar and Kuwait, where the ambition to establish themselves as cultural capitals is expressed through hosting international productions and orchestras.
Saudi Arabia, for its part, embodies a singularly rapid structuring of its operatic landscape. Long closed to opera because of religious restrictions (the presence of women and Western music were not permitted), the kingdom gradually opened up from the 2010s before creating, in 2020, the Theater and Performing Arts Commission as part of Vision 2030. Its aim is to open the kingdom to modernity, in particular by strengthening the offering for tourists and business visitors.
In April 2024, the premiere of Zarqa Al Yamama, the first major national opera in Arabic, marked a symbolic milestone, with a libretto by the Saudi poet and playwright Saleh Zamanan.
The plot, set in pre-Islamic Arabia, tells the story of a clairvoyant woman who foresees an enemy attack but whose warnings her tribe ignores. Presented at the King Fahd Cultural Centre, the production demonstrated the need to build venues specifically designed for operatic acoustics: amplification of the work, rich in Arab musical influences, proved indispensable in this 2,700-seat hall. The Saudi operatic landscape has gradually gained an opera festival and international collaboration programmes, with Western works staged in Riyadh and AlUla, while new buildings are being considered in Riyadh, Jeddah and Diriyah.
In other Gulf countries, cultural diplomacy and heritage-making play a structuring role even in the absence of an opera house. Jordan, for instance, has established the Arab world's first opera festival in Amman, pairing Jordanian artists with Italian partners and staging productions at Petra that mix Arabic, English and Nabataean.
In Lebanon, the Baalbek International Festival, founded in 1956, produced a complete opera (Carmen) in 2025 for the first time, while a national opera house project is taking shape in Dbayeh, initially envisaged with Omani support and now backed by China. Beyond infrastructure, two emblematic figures, Fairuz (now 89) and Sabah (who died in 2014), divas of Arab music, have shaped the Lebanese vocal imagination far beyond the country's borders.
Different territorial logics
Several processes of territorialisation can be identified. The first, which might be called "opera-as-modernity", corresponds to the use of the opera house as a signal of entry into modernity, accompanied by heritage-making and adaptation, as in 19th-century Egypt. The second, "opera-as-showcase", is found above all in the Gulf states, generally associated with goals of soft power, territorial branding, tourist attractiveness and urban regeneration. Since these countries still depend on foreign operatic expertise, the challenge ahead is to reconcile international exposure with a local, autonomous ecosystem. The third process rests more on strategies of outreach and heritage anchoring.
Cross-cutting issues attach to these processes of territorialisation. The integration of the Arabic language and Arab music into opera is central. Promoting opera in Arabic means enriching its dramaturgy with traditional narratives and musical references – the rhythm of poetry, the maqam – in dialogue with European musical styles and aesthetics. Initiatives such as the master classes run by Opera for Peace help to build this human capital, which relies on policies of democratisation, hybridisation and collaboration, but also on sustainable funding. Moreover, staging opera at heritage sites, highly effective in uniting performance, tourism and identity narratives, entails particular technical and administrative demands.
In conclusion, opera in the Middle East has indeed moved from an imported model to a reinvented cultural symbol. Opera and its ecosystem of emblematic architecture, premium programming and festivals is up and running, even if it does not yet attract a substantial local audience. While some countries can finance this symbol of prestige on their own, it is notable that, after Japan, China is now investing in helping to build opera houses in the region. It remains to be seen whether the art form can shed its image as a mere showcase and contribute to the making of narratives in which sung Arabic, traditional music and the memory of places create a regional modernity through a heritage-making process specific to these territories.
Frédéric Lamantia does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his research institution.
Proposing credible social progress means taking account of today's constraints: weak productivity, the need to stabilise public debt, and international tax competition. That requires ranking priorities, favouring growth through employment and redirecting budget trade-offs towards public services.
Political leaders must offer prospects of social progress to meet citizens' expectations. To be credible, those prospects must take into account the economic constraints France currently faces: weak productivity gains, the need to stabilise public debt, and international tax competition. This means giving up certain objectives, at least temporarily, and accepting a ranking of priorities. Over a five-year horizon, the orientations to favour are economic growth and the prioritisation of public services in budget trade-offs.
Waiting for degrowth
France needs economic growth. Degrowth, understood as a reduction in the production of goods and services and in the incomes distributed, can be a form of social progress, and certainly the most effective for protecting the environment. Accompanied by a cultural shift in the place of consumption in our lives, it could occur without degrading quality of life. However, society does not seem ready to take that path today.
At the individual level, purchasing power is one of the priorities expressed by the French. At the collective level, investment needs are massive: for adaptation to climate change, as documented in the Pisani-Ferry and Mahfouz report, but also for preserving military sovereignty in the new geopolitical context. Meeting both requires the production of goods and services that economic growth provides.
This growth cannot be achieved by working less. Reducing working time is unquestionably a major goal of social progress. But the key obstacle in that direction is the current weakness of productivity gains. If productivity does not rise, working less means producing less, which would make it impossible to reach the individual and collective objectives.
France is falling behind
Europe is currently experiencing a productivity slowdown relative to the United States, and it is particularly marked in France. Since the Covid crisis, labour productivity in the United States grew by 5% between 2019 and 2024; the euro zone (excluding France) returned to its 2019 productivity level in 2024; France, meanwhile, lost 4.5%.
Productivity gains can be generated by investment in education and innovation, but even if those investments were made today, they would take years to bear fruit. Over a five-year horizon, reducing work, whether the statutory working week or the retirement age, cannot be the priority. This conclusion matters for focusing economic policy on giving as many people as possible access to quality jobs.
Three crises in 15 years
The external constraint is often invoked to justify reducing the public deficit. But that reduction should above all be defended by those who want to preserve room for manoeuvre for public action.
France enjoyed a period of relative stability between 1980 and 2008. After a decade of crises linked to the oil shocks, we benefited from twenty-five years without major crises. That era seems over. In just fifteen years, the French economy has been hit by three major international crises: financial (2008-2009), health (2020) and energy (2021-2022). Each time, the state played its role as shock absorber, taking on additional public debt to absorb part of the consequences and limit the cost for households and firms. During the energy crisis, for example, the government introduced a tariff shield that was certainly costly for public finances but protected firms and households from rising energy prices, thereby supporting growth and containing inflation, as we show in a forthcoming article (1).
The real risk is not a Greek-style sovereign debt crisis but the inability to act when the next economic crisis comes. The uncertain international context should encourage rebuilding budgetary margins by reducing the public deficit, so that the state preserves its capacity for future action.
The urgency of choosing
To reduce the deficit, the difficult question of which spending to prioritise must be faced. Several proposed solutions aim to sidestep it. The first, the easy way out, invokes "useless" spending to be cut: state agencies, administrative reorganisations. These savings, rarely quantified, are of limited scale compared with the sums needed to reduce the deficit durably. The second, more serious, consists of raising taxes, for example by introducing a tax on very large fortunes.
Reducing inequality is a goal of social progress, and it is tempting to combine the two objectives: reduce the deficit by taxing the wealth of the richest. Three points deserve emphasis. First, France already has a powerful redistribution mechanism, from the 43% of most favoured households to the 57% least favoured, once monetary transfers and in-kind public services are taken into account. It thus displays lower inequality than elsewhere, and inequality that is growing more slowly.
A risky bet
Second, financial globalisation allows the richest to escape taxation through tax optimisation. This injustice makes the tax system regressive for the wealthiest 1%, while income from labour offers fewer avoidance opportunities than income from capital. The international level, or at least the European level, is the right one for tackling this problem. France can play a driving role, but should not act alone.
Third, building a deficit-reduction strategy on taxing large fortunes alone is insufficient and risky. The most favourable estimates yield twenty billion euros in additional revenue, half of the effort required. For new taxes, such estimates carry a wide margin of uncertainty: the predictable reaction of consumers to a VAT increase is better known than that of large fortunes to a new wealth tax. Relying solely on this source of tax revenue would be a risky bet.
Changing priorities
We must therefore accept having to choose among expenditures that all have their usefulness and legitimacy. In view of recent trade-offs, the priorities should be reversed. Budget decisions have protected retirees' purchasing power, for example through the indexation of pensions to inflation, at the expense of other public spending. Yet retirees are not a homogeneous economic category; like the rest of the population, they are marked by strong inequalities of income and wealth. The most vulnerable, whether working or retired, must of course be protected, but beyond that, given the economic constraints, reallocating spending from retirement pensions to public services offers greater macroeconomic efficiency.
Investing in education, health and the energy transition can also pay off in the medium term through its effects on productivity. These public services are also valuable for redistribution. Insee's work on distributed national accounts shows the importance of in-kind transfers in the French system. Failing to invest in education and health would degrade the quality of these public services, with a risk of amplifying inequality as well-off households turn to private alternatives.
Accepting a ranking of the priorities of social progress, because of these short-term constraints, does not mean refusing to engage in projects aimed at escaping them in the long run. Here again, several options are possible. Is the priority to restore productivity gains, to change the consumption model, or to promote, at the European level, projects for tax harmonisation and European financing of public spending of common interest?
The debate is open; the essential point is that political leaders offer prospects of social progress integrating these different horizons, short-term constraints and long-term perspectives, in order to respond credibly and coherently to citizens' expectations.
(1) Langot, F., Malmberg, S., Tripier, F. and Hairault, J.-O., 2025. "The Macroeconomic and Redistributive Effects of the French Tariff Shield", Journal of Political Economy Macroeconomics, forthcoming.
Fabien Tripier does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his research institution.
This is not the first time political arguments over health care policy have instigated a government shutdown. In 2013, for example, the government shut down due to disputes over the Affordable Care Act.
This time around, the ACA continues to play a central role, with Democrats demanding, among other things, an extension of subsidies for ACA plan insurance premiums that are set to expire at the end of 2025. Democrats are also holding out to roll back cuts to the Medicaid program that President Donald Trump signed into law on July 4, as part of what he called his “One Big Beautiful Bill.”
Even as Democrats stage their battle over access to health care, the shutdown itself could also make it harder for Americans to get the care they need. Meanwhile, Trump has threatened to use the crisis to permanently cut federal jobs on a mass scale, including ones in the health care sector, which could substantially reshape federal health agencies and their ability to protect Americans’ health.
ACA subsidies are a major bone of contention in the standoff between Democrats and Republicans.
Most contentiously, rolling back the Medicaid cuts would reverse restrictions that made immigrants who are generally in the country legally, such as refugees and asylum-seekers, ineligible for Medicaid and ACA coverage. These restrictions, which were included in the budget bill, could lead to the loss of insurance for about 1.4 million lawfully present immigrants, the Congressional Budget Office has estimated.
Most obviously, large-scale staff reductions would interfere with a wide range of health-related services not considered essential during the shutdown. This includes everything from surveying and certifying nursing homes to assisting Medicaid and Medicare beneficiaries and overseeing contracts or extra payments to rural ambulance providers.
The Centers for Medicare and Medicaid Services has indicated that there is enough funding for Medicaid, the government program that primarily provides health services to low-income Americans, to support the program through the end of the calendar year. If the shutdown lasts beyond that, states may have to decide whether to temporarily fund the program on their own or whether to reduce or delay provider payments. However, no previous shutdown has ever lasted more than 34 days.
Whether this threat is simply a bargaining tactic remains to be seen, and it’s unclear whether health-related workers and agencies are in the crosshairs. But given that previous layoffs specifically targeted health programs, more permanent reductions in programs that affect health care may be on the way.
Simon F. Haeder does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Research by charity End Sexism in Schools has found that over half of history lessons delivered to children aged 11 to 14 in England feature no women at all. With the government set to allocate funding to boost the provision of school libraries, here are some books – for a range of ages – to open young eyes to women’s lives, experiences and marginalisation in our past.
Books that strike a balance between being age appropriate, featuring rich, well-researched context and capturing the attention are top of my list. If they focus on lesser-known women in history, all the better.
For primary school children, biographical collections dominate the field. Take Kate Pankhurst’s Fantastically Great Women Who Changed the World. This book introduces young historians to a host of inspiring women from different ethnicities and backgrounds, while carefully setting out the circumstances and barriers that each woman faced in her time and place. The cartoon illustrations and accessible format of the text are a sure-fire classroom pleaser.
Kay Woodward’s What Would She Do? Advice from Iconic Women in History does a similar job for children aged around nine upwards, but with an added participatory element. It presents readers with the real-life dilemmas that the iconic women faced and encourages empathetic problem solving – what would she do? It also underscores the importance of resilience.
Vashti Harrison's Little Leaders: Bold Women in Black History is an excellent choice. Meanwhile, Rachel Ignotofsky's Women in Science introduces young readers to women of diverse backgrounds, from antiquity to the 20th century, who have made their mark in maths, science and technology.
Biography anthologies introduce children to a wide range of historical figures. Twinsterphoto/Shutterstock
Along with the books compiling sketches of notable women’s lives, there are growing numbers of detailed biographies for primary school children that illuminate women’s place in the past. In the mainstream, there’s the Little People, Big Dreams series. My favourites feature architect Zaha Hadid, singer Aretha Franklin and artist Louise Bourgeois.
Particularly engaging historical biographies for children include Counting on Katherine, the story of Katherine Johnson, an African-American mathematician whose orbital calculations were instrumental in early US space missions. Kathleen Krull’s inspirational story of American lawyer and Supreme Court justice Ruth Bader Ginsburg also deserves a mention, along with Haydn Kaye’s book on the British suffragist pioneer Emmeline Pankhurst.
Women’s rights
My own research includes a focus on early 20th-century feminist activism. I’ve read Kay Barnham’s Women’s Rights and Suffrage with my six-year-old. It examines women’s historical legal status and political resistance from a global perspective. Then there’s David Roberts’ beautifully illustrated Suffragette: The Battle for Equality and Susan Campbell Bartoletti’s How Women Won the Vote. These books document the key ideologies and objectives of Edwardian British suffragism.
But what about the ordinary women of history, those of us who did not crusade or trailblaze – or at least not in public? Sadly, few books aimed at primary school children address this question head on. However, there is hope on the horizon for teens.
My 13-year-old daughter’s current favourite book is the teen edition of Philippa Gregory’s Normal Women: Making History for 900 Years. Gregory gives a detailed account of the lives of a diverse array of women over this broad time period in English history, highlighting the role of patriarchy and women’s subjugation in everyday life. With accessible language, relatable stories and illustrations, Normal Women is a sure-fire hit with older children trying to make sense of their place in the world.
Kate Mosse’s Feminist History for Every Day of the Year is also a captivating read, supplementing the multi-biography of notable women format with relative unknowns, for an older audience.
This kind of work, which goes beyond celebrating the famous few and sets out to write women back into the past, represents real progress in historical works for the next generation.
Rachael Attwood does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Christopher Watson, Professor, Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast
On October 6 1995, at a scientific meeting in Florence, Italy, two Swiss astronomers made an announcement that would transform our understanding of the universe beyond our solar system. Michel Mayor and his PhD student Didier Queloz, working at the University of Geneva, announced they had detected a planet orbiting a star other than the Sun.
The star in question, 51 Pegasi, lies about 50 light years away in the constellation Pegasus. Its companion – christened 51 Pegasi b – was unlike anything written in textbooks about how we thought planets might look. This was a gas giant with a mass of at least half that of Jupiter, circling its star in just over four days. It was so close to the star (1/20th of Earth’s distance from the Sun, well inside Mercury’s orbit) that the planet’s atmosphere would be like a furnace, with temperatures topping 1,000°C.
The instrument behind the discovery was Elodie, a spectrograph that had been installed two years earlier at the Haute-Provence observatory in southern France. Designed by a Franco-Swiss team, Elodie split starlight into a spectrum of different colours, revealing a rainbow etched with fine dark lines. These lines can be thought of as a “stellar barcode”, providing details on the chemistry of other stars.
What Mayor and Queloz spotted was 51 Pegasi’s barcode sliding rhythmically back and forth in this spectrum every 4.23 days – a telltale sign that the star was being tugged to and fro by the gravity of an otherwise unseen companion lost in its glare.
After painstakingly ruling out other explanations, the astronomers finally concluded that the variations were due to a gas giant in a close-in orbit around this Sun-like star. The cover of Nature, the journal in which their paper was published, carried the headline: “A planet in Pegasus?”
The discovery baffled scientists, and the question mark on Nature’s cover reflected initial scepticism. Here was a purported giant planet right next to its star, with no known mechanism for forming such a world in so fiery an environment.
While the signal was confirmed by other teams within weeks, reservations about its cause lingered for almost three years before alternative explanations were finally ruled out. Not only did 51 Pegasi b become the first planet discovered orbiting a Sun-like star outside our Solar System, but it also represented an entirely new type of planet. The term “hot Jupiter” was later coined to describe such planets.
This discovery opened the floodgates. In the 30 years since, more than 6,000 exoplanets (the term for planets outside our Solar System) and exoplanet candidates have been catalogued.
Their variety is staggering. Not only hot but ultra-hot Jupiters with a dayside temperature exceeding 2,000°C and orbits of less than a day. Worlds that orbit not one but two stars, like Tatooine from Star Wars. Strange “super-puff” gas giants larger than Jupiter but with a fraction of the mass. Chains of small rocky planets all piled up in tight orbits.
The discovery of 51 Pegasi b triggered a revolution and, in 2019, landed Mayor and Queloz a Nobel prize. We can now infer that most stars have planetary systems. And yet, of the thousands of exoplanets found, we have yet to find a planetary system that resembles our own.
The quest to find an Earth twin – a planet that truly resembles Earth in size, mass and temperature – continues to drive modern-day explorers like us to search for more undiscovered exoplanets. Our expeditions may not take us on death-defying voyages and treks like the past legendary explorers of Earth, but we do get to visit beautiful, mountain-top observatories often located in remote areas around the world.
We are members of an international consortium of planet hunters that built, operates and maintains the Harps-N spectrograph, mounted on the Telescopio Nazionale Galileo on the beautiful Canary island of La Palma. This sophisticated instrument allows us to rudely interrupt the journey of starlight that may have been travelling unimpeded at 670 million miles per hour for decades or even millennia.
Each new signal has the potential to bring us closer to understanding how common planetary systems like our own may (or may not) be. In the background lies the possibility that one day, we may finally detect another planet like Earth.
The origins of exoplanet study
Up until the mid-1990s, our Solar System was the only set of planets humanity had ever known. Every theory about how planets formed and evolved stemmed from these nine, incredibly closely spaced data-points (which went down to eight when Pluto was demoted in 2006, after the International Astronomical Union agreed a new definition of a planet).
All of these planets revolve around just one star out of the estimated 10¹¹ (roughly 100 billion) in our galaxy, the Milky Way – which is in turn one of some 10¹¹ galaxies throughout the universe. So, trying to draw conclusions from the planets in our Solar System alone was a bit like aliens trying to understand human nature by studying students living together in one house. But that didn’t stop some of the greatest minds in history speculating on what lay beyond.
The ancient Greek philosopher Epicurus (341-270BC) wrote: “There is an infinite number of worlds – some like this world, others unlike it.” This view was based not on astronomical observation but on his atomist philosophy. If the universe was made up of an infinite number of atoms then, he concluded, it was impossible not to have other planets.
Epicurus clearly understood what this meant in terms of the potential for life developing elsewhere: “We must not suppose that the worlds have necessarily one and the same shape. Nobody can prove that in one sort of world there might not be contained – whereas in another sort of world there could not possibly be – the seeds out of which animals and plants arise and all the rest of the things we see.”
In contrast, at roughly the same time, fellow Greek philosopher Aristotle (384-322 BC) was proposing his geocentric model of the universe, which had the Earth immobile at its centre with the Moon, Sun and known planets orbiting around us. In essence, the Solar System as Aristotle conceived it was the entire universe. In On the Heavens (350BC), he argued: “It follows that there cannot be more worlds than one.”
Such thinking – that planets were rare in the universe – persisted for 2,000 years. Sir James Jeans, one of the world’s top mathematicians and an influential physicist and astronomer at the time, advanced his tidal hypothesis of planet formation in 1916. According to this theory, planets formed when two stars passed so close that the encounter pulled streams of gas off the stars into space, which later condensed into planets. The rareness of such close cosmic encounters in the vast emptiness of space led Jeans to believe that planets must be rare, or – as was reported in his obituary – “that the solar system might even be unique in the universe”.
But by then, understanding of the scale of the universe was slowly changing. In the “Great Debate” of 1920, held at the Smithsonian’s National Museum of Natural History in Washington DC, American astronomers Harlow Shapley and Heber Curtis clashed over whether the Milky Way was the entire universe, or just one of many galaxies. The evidence began to point to the latter, as Curtis had argued. This realisation – that the universe contained not just billions of stars, but billions of galaxies each containing billions of stars – began to affect even the most pessimistic predictors of planetary prevalence.
In the 1940s, two things caused the scientific consensus to pivot dramatically. First, Jeans’ tidal hypothesis did not stand up to scientific scrutiny. The leading theories now had planet formation as a natural byproduct of star formation itself, opening up the potential for all stars to host planets.
Then in 1943, claims emerged of planets orbiting the stars 70 Ophiuchus and 61 Cygni c – two relatively nearby star systems visible to the naked eye. Both were later shown to be false positives, most likely due to uncertainties in the telescopic observations that were possible at the time – but nonetheless, these claims greatly influenced planetary thinking. Suddenly, billions of planets in the Milky Way was considered a genuine scientific possibility.
For us, nothing highlights this change in mindset more than an article written for Scientific American in July 1943 by the influential American astronomer Henry Norris Russell. Whereas two decades earlier, Russell had predicted that planets “should be infrequent among the stars”, now the title of his article was: “Anthropocentrism’s Demise. New Discoveries Lead to the Probability that There Are Thousands of Inhabited Planets in our Galaxy”.
Strikingly, Russell was not merely making a prediction about any old planets, but inhabited ones. The burning question was: where were they? It would take another half-century to begin finding out.
The Harps-N spectrograph is mounted on the Telescopio Nazionale Galileo (left) in La Palma, Canary Islands. lunamarina/Shutterstock
How to detect an exoplanet
When we observe myriad stars through La Palma’s Italian-built Galileo telescope using our Harps-N spectrograph, it is amazing to consider how far we have come since Mayor and Queloz announced their discovery of 51 Pegasi b in 1995. These days, we can effectively measure the masses of not just Jupiter-like planets, but even small planets thousands of light years away. As part of the Harps-N collaboration, we have had a front-row seat since 2012 in the science of small exoplanets.
Another milestone in this story came four years after the 51 Pegasi b discovery, when a Canadian PhD student at Harvard University, David Charbonneau, detected the transit of a known exoplanet. This was another hot Jupiter, known as HD209458b, also located in the Pegasus constellation, about 150 light years from Earth.
Transit refers to a planet passing in front of its star, from the perspective of the observer, momentarily making the star appear dimmer. As well as detecting exoplanets, the transit technique enables us to measure the radius of the planet by taking many brightness measurements of a star, then waiting for it to dim due to the passing planet. The extent of blocked starlight depends on the radius of the planet. For example, Jupiter would make the Sun 1% dimmer to alien observers, while for Earth, the effect would be a hundred times weaker.
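The arithmetic behind those figures is simple: the fraction of starlight blocked is the ratio of the planet’s disc area to the star’s, (Rp/Rs)². A minimal sketch in Python, using standard published radii (not values from this article):

```python
# Transit depth: fraction of starlight blocked when an opaque
# planet crosses the disc of its star, as seen by the observer.
def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    return (r_planet_km / r_star_km) ** 2

R_SUN_KM = 695_700
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

print(f"Jupiter: {transit_depth(R_JUPITER_KM, R_SUN_KM):.4f}")  # ~0.0101, i.e. about 1%
print(f"Earth:   {transit_depth(R_EARTH_KM, R_SUN_KM):.6f}")    # ~0.000084, roughly 100x shallower
```

The outputs match the figures quoted above: Jupiter dims the Sun by about 1%, while Earth’s transit is around a hundred times shallower.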
In all, four times more exoplanets have now been discovered using this transit technique than with the “barcode” technique, known as radial velocity, that the Swiss astronomers used to spot the first exoplanet 30 years ago. Radial velocity is still widely used today, including by us, as it can not only find a planet but also measure its mass.
A planet orbiting a star exerts a gravitational pull which causes that star to wobble back and forth – meaning it will periodically change its velocity with respect to observers on Earth. With the radial velocity technique, we take repeated measurements of the velocity of a star, looking to find a stable periodic wobble that indicates the presence of a planet.
These velocity changes are, however, extremely small. To put it in perspective, the Earth makes the Sun change its velocity by a mere 9cm per second – slower than a tortoise. To find planets with the radial velocity technique, we therefore need to measure these tiny velocity changes in stars that are many trillions of miles away from us.
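That 9cm-per-second figure follows from Newtonian gravity. For a planet of mass m on a circular, edge-on orbit of period P around a star of mass M (with m much smaller than M), the star’s reflex velocity semi-amplitude is K = (2πG/P)^(1/3) · m/M^(2/3). A sketch, using standard values for the Earth-Sun system:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def rv_semi_amplitude(m_planet_kg: float, m_star_kg: float, period_s: float) -> float:
    """Star's reflex velocity amplitude in m/s, for a circular,
    edge-on orbit with m_planet << m_star."""
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet_kg / m_star_kg ** (2 / 3)

M_SUN, M_EARTH = 1.989e30, 5.972e24  # kg
YEAR_S = 3.156e7                     # one year in seconds

k = rv_semi_amplitude(M_EARTH, M_SUN, YEAR_S)
print(f"{k * 100:.1f} cm/s")  # ~9 cm/s, the figure quoted above
```

Plugging in Jupiter’s mass and 12-year orbit instead gives around 12 m/s, which is why gas giants were the first planets this technique could reach.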
The state-of-the-art instruments we use are truly an engineering feat. The latest spectrographs, such as Harps-N and Espresso, can accurately measure velocity shifts of the order of tens of centimetres per second – although still not sensitive enough to detect a true Earth twin.
But whereas this radial velocity technique is, for now, limited to ground-based observatories and can only observe one star at a time, the transit technique can be employed in space telescopes such as the French Corot (2006-14) and Nasa’s Kepler (2009-18) and Tess (2018-) missions. Between them, space telescopes have detected thousands of exoplanets in all their diversity, taking advantage of the fact we can measure stellar brightness more easily from space, and for many stars at the same time.
Despite the differences in detection success rate, both techniques continue to be developed. Applying both can give the radius and mass of a planet, opening up many more avenues for studying its composition.
To estimate possible compositions of our discovered exoplanets, we start by making the simplified assumption that small planets are, like Earth, made up of a heavy iron-rich core, a lighter rocky mantle, some surface water and a small atmosphere. Using our measurements of mass and radius, we can now model the different possible compositional layers and their respective thicknesses.
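As an illustration of the idea (not the authors’ actual model), here is a toy two-layer version: assume a planet is only an iron core plus a rocky mantle, each with a fixed average density, and solve for the core mass fraction that reproduces a measured mass and radius. The densities below are assumed, compressed-Earth-like values; real models use pressure-dependent equations of state.

```python
import math

def core_mass_fraction(mass_kg: float, radius_m: float,
                       rho_core: float = 11_000, rho_mantle: float = 4_500) -> float:
    """Toy two-layer model: iron core + silicate mantle with assumed
    constant average densities (kg/m^3). Solves
        f/rho_core + (1 - f)/rho_mantle = V/M
    for the core mass fraction f."""
    volume = (4 / 3) * math.pi * radius_m ** 3
    inv_mean_density = volume / mass_kg
    return (1 / rho_mantle - inv_mean_density) / (1 / rho_mantle - 1 / rho_core)

# Sanity check on Earth: yields ~0.31, close to Earth's actual
# core mass fraction of about one third.
print(round(core_mass_fraction(5.972e24, 6.371e6), 2))
```

Feeding in an exoplanet’s measured mass and radius in the same way brackets the plausible interior structures, which is the spirit of the modelling described above.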
This is still very much a work in progress, but the universe is spoiling us with a wide variety of different planets. We’ve seen evidence of rocky worlds being torn apart and strange planetary arrangements that hint at past collisions. Planets have been found across our galaxy, from Sweeps-11b in its central regions (at nearly 28,000 light years away, one of the most distant ever discovered) to those orbiting our nearest stellar neighbour, Proxima Centauri, which is “only” 4.2 light years away.
Illustration of Proxima b, one of the exoplanets orbiting the nearest star to our Sun, Proxima Centauri. Catmando/Shutterstock
Searching for ‘another Earth’
In early July 2013, one of us (Christopher) was flying out to La Palma for my first “go” with the recently commissioned Harps-N spectrograph. Keen not to mess up, I had loaded my laptop with spreadsheets, charts, manuals, slides and other notes. Also included was a three-page document I had just been sent, entitled: Special Instructions for ToO (Target of Opportunity).
The first paragraph stated: “The Executive Board has decided that we should give highest priority to this object.” The object in question was a planetary candidate thought to be orbiting Kepler-78, a star a little cooler and smaller than our Sun, located about 125 light years away in the direction of the constellation Cygnus.
A few lines further down read: “July 4-8 run … Chris Watson” with a list of ten times to observe Kepler-78 – twice per night, each separated by a very specific four hours and 15 minutes. The name above mine was Didier Queloz’s (he hadn’t been awarded his Nobel prize yet, though).
This planetary candidate had been identified by the Kepler space telescope, which was tasked with searching a portion of the Milky Way to look for exoplanets as small as the Earth. In this case, it had identified a transiting planet candidate with an estimated radius of 1.16 (± 0.19) Earth radii – an exoplanet not that much larger than Earth had potentially been spotted.
I was in La Palma to attempt to measure its mass which, combined with the radius from Kepler, would allow the density and possible composition to be constrained. I wrote at the time: “Want 10% error on mass, to get a good enough bulk density to distinguish between Earth-like, iron-concentrated (Mercury), or water.”
In all, I took ten of our team’s total of 81 exposures of Kepler-78 in an observing campaign lasting 97 days. During that time, we became aware of a US-led team who were also looking for this potential planet. In true scientific spirit, we agreed to submit our independent findings at the same time. On the specified date, like a prisoner swap, the two teams exchanged results – which agreed. We had, within the uncertainties of our data, reached the same conclusion about the planet’s mass.
Its most likely mass came out as 1.86 Earth masses. At the time, this made Kepler-78b the smallest extrasolar planet with an accurately measured mass. The density was almost identical to Earth’s.
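Working in Earth units, the density comparison can be checked directly, since density scales as mass over radius cubed:

```python
EARTH_DENSITY = 5.51  # mean density of Earth, g/cm^3

mass_ratio, radius_ratio = 1.86, 1.16  # Kepler-78b in Earth units, from the text

density_ratio = mass_ratio / radius_ratio ** 3
print(round(density_ratio, 2))                  # ~1.19 x Earth's density
print(round(density_ratio * EARTH_DENSITY, 1))  # ~6.6 g/cm^3
```

The ±0.19 uncertainty on the radius makes this comfortably consistent with an Earth-like density, which is what both teams concluded.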
But that is where the similarities to our planet ended. Kepler-78b has a “year” that lasts only 8.5 hours, which is why I had been instructed to observe it every 4hr 15min – when the planet was at opposite sides of its orbit, and the induced “wobble” of the star would be at its greatest. We measured the star wobbling back and forth at about two metres per second – no more than a slow jog.
Kepler-78b’s short orbit meant its extreme temperature would cause all rock on the planet to melt. It may have been the most Earth-like planet found at the time in terms of its size and density, but otherwise, this hellish lava world was at the very extremes of our known planetary population.
Illustration of the Kepler-78b ‘lava world’ – similar in size and density to Earth. simoleonh/Shutterstock
In 2016, the Kepler space telescope made another landmark discovery: a system with at least five transiting planets around a Sun-like star, HIP 41378, in the Cancer constellation. What made it particularly exciting was the location of these planets. Where most transiting planets we have spotted are closer to their star than Mercury is to the Sun (due to our detection capabilities), this system has at least three planets beyond the orbital radius of Venus.
Having decided to use our Harps-N spectrograph to measure the masses of all five transiting planets, it became clear after more than a year of observing that one instrument would not be enough to analyse this challenging mix of signals. Other international teams came to the same conclusion and, rather than compete, we decided to come together in a global collaboration that holds strong to this day, with hundreds of radial velocities gathered over many years.
We now have firm masses and radii for most of the planets in the system. But studying them is a game of patience. With planets much further away from their host star, it takes much longer before there is a new transit event or the periodic wobble can be fully observed. We thus need to wait multiple years and gather lots of data to gain insight into this system.
The rewards are obvious, though. This is the first system that starts resembling our Solar System. While the planets are a bit larger and more massive than our rocky planets, their distances are very similar – helping us to understand how planetary systems form in the universe.
The holy grail for exoplanet explorers
After three decades of observing, a wealth of different planets have emerged. We started with the hot Jupiters, large gas giants close to their star that are among the easiest planets to find due to both deeper transits and larger radial velocity signals. But while the first few dozen exoplanets discovered were all hot Jupiters, we now know these planets are actually very rare.
With instrumentation getting better and observations piling up, we have since found a whole new class of planets with sizes and masses between those of Earth and Neptune. But despite our knowledge of thousands of exoplanets, we still have not found systems truly resembling our solar system, nor planets truly resembling Earth.
It is tempting to conclude this means we are a unique planet in a unique system. While this still could be true, it is unlikely. The more reasonable explanation is that, for all our stellar technology, our capabilities of detecting such Earth-like planets are still fairly limited in a universe so mind-bogglingly vast.
The holy grail for many exoplanet explorers, including us, remains to find this true Earth twin – a planet with a mass and radius similar to Earth’s, orbiting a star similar to the Sun at a distance similar to Earth’s distance from the Sun.
While the universe is rich in diversity and holds many planets unlike our own, discovering a true Earth twin would be the best place to start looking for life as we know it. Currently, the radial velocity method – as used to find the very first exoplanet – remains by far the best-placed method to find it.
Thirty years on from that Nobel-winning discovery, pioneering planetary explorer Didier Queloz is taking charge of the very first dedicated radial velocity campaign to go in search of an Earth-like planet.
A major international collaboration is building a dedicated instrument, Harps3, to be installed later this year at the Isaac Newton Telescope on La Palma. Given its capabilities, we believe a decade of data should be enough to finally discover our first Earth twin.
Christopher Watson receives funding from the Science and Technology Facilities Council (STFC).
Annelies Mortier receives funding from the Science and Technology Facilities Council (STFC) and UK Research and Innovation (UKRI).
Source: The Conversation – UK – By Rod Thornton, Senior Lecturer in International Studies, Defence and Security, King’s College London
Five Nato countries neighbouring Russia or its ally, Belarus, have announced that they are to opt out of the Ottawa treaty of 1997.
This treaty bans the use by signatories of anti-personnel (AP) landmines. These states – Poland, Finland, Lithuania, Estonia and Latvia – now have plans to create a 2,000-mile stretch of mined areas as part of a defensive effort against any possible attack from Russia.
The move to create such minefields comes as the result of both a recognition of the perceived growing threat from Russia and of the important defensive effect – as proved during the current Ukraine war – that both AP and anti-tank (AT) landmines can generate.
AT mines are not covered by the Ottawa treaty and all countries are free to use them. AT mines target only vehicles (the weight of a human cannot set them off). The main issue with AP mines, which target humans, is that they can be set off by civilians as well as soldiers.
As such, they are deemed not only indiscriminate weapons but also persistent ones, able to remain a danger long after any conflict is over. Their banning is seen by many as an “ethical imperative”.
In the current era of military development, dominated by the introduction of high-tech weapons systems, it appears that low-tech, unsophisticated and relatively cheap landmines – which can be laid in their millions – have a significant role to play in modern warfare.
Minefields have proved very effective as a defensive tool in the current Ukraine war because of their ability to disrupt enemy assaults. This recognition has, for these five Nato states, meant that their adherence to the Ottawa treaty had to end, despite its grounding in humanitarian concerns.
The Narva bridge forms the border between Estonia and Russia. Estonia is one of the countries planning to add more fortifications along its border. Alexandre.ROSA/Shutterstock
These five states have been criticised by human rights organisations for withdrawing from the treaty. The UK was also a signatory in 1997 and still remains bound by its stipulations. The US, Russia and China didn’t sign in the first place.
The role of landmines
Landmines have proved a significant defensive tool in the Ukraine war. In the initial days of Russia’s full-scale invasion in February 2022, the Ukrainian side was very quick to deploy some of its stockpile of Soviet-era AT mines.
These were very effective in restricting the early advance of Russian armoured columns (the term “armour” covering both tanks and other armoured vehicles) on Kyiv. These mines created disruption as Russian forces were either stopped or had to find other routes around the minefields.
The delays allowed time for Ukrainian forces to set up firm defensive positions that eventually halted the Russian columns and led to their being turned back before reaching Kyiv.
Ukrainian forces then launched their own armoured offensive in the summer of 2023. These forces, by now trained and equipped by Nato states and using trademark Nato combined arms manoeuvre warfare techniques, were also held up in dense Russian minefields. Their advance ground to a halt.
The presence of vast fields of both AP and AT mines meant that the supposedly war-winning principle of “manoeuvre warfare”, which relies on movement, initiative and surprise, and which the Ukrainians had been taught by Nato instructors, became impossible to conduct. The Russians call their defensive minefields “insurmountable”.
Given the power of minefields, both sides came ultimately to understand that their presence had to mean a rethink of how the war should be conducted. Mines led to a change in tactics.
Both sides had to adopt much more attritional approaches. Outcomes would now largely be dictated by the weight of artillery fire and not by manoeuvre. It is minefields that form the basis for the Ukrainian forces’ “fortress belt” across much of the Donbas region.
Russian use of landmines slowed down a Ukrainian counterattack.
Despite Kyiv itself having ratified the Ottawa treaty in 2005, it was clear that its forces were making considerable use of banned AP mines along with the “legal” AT mines.
The Russian defensive arrangements, like those of the Ukrainian forces, make considerable use of mines. The Russian side is able to draw on what is perceived to be the world’s largest stockpile of AP mines in particular (said to amount to some 26.5 million). Zelensky has accused Russia of using AP mines “with extreme cynicism”, referring to the alleged booby-trapping of dead Russian soldiers with AP mines.
Old tech with big impact
What is interesting here is that the very old technology of landmines is being combined with the far newer one of drones. Minefields can now be laid far more efficiently by using drones to plant them rather than, as has been the norm, by hand. The drones have changed how mine warfare is carried out.
Given what is happening in Ukraine, it is now well understood that mines can do more than help decide the course of mere tactical military engagements; they can create strategic outcomes. They can, in essence, decide the outcome of wars.
It is with this understanding in mind that these five Nato states have withdrawn from the Ottawa treaty. AP mines are patently needed on today’s battlefields. They are seen as an essential addition to the AT mines. Each type has their defensive role to play.
AP and AT mines have both proved themselves to be essential tools of modern warfare. Today, the war in Ukraine is characterised and dominated, due to the presence of mines, by defence and not offence. Frontlines are largely static. Humble, cheap and simple they may be, but landmines do, it seems, have a crucial role to play in modern warfare.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Part of the ancient lake delta in Jezero Crater on Mars. JPL-Caltech
Recently, Nasa revealed exciting details of new findings from Mars. Scientists have discovered tiny patterns of unusual minerals in the clay-rich rocks on the edge of Jezero Crater – an ancient lake once fed by Martian river systems, and the exploration site of the Nasa Perseverance Rover.
The jury is still out on whether these are actually signs of life, but this discovery has reignited the discussion about the previous existence of life on Mars, and the possibility that it could still survive there today.
We’ll need many different lines of evidence to answer this question, but there is precedent for considering certain Martian environments as currently habitable.
Early Earth and early Mars were relatively similar, but this similarity didn’t last long. Both had atmospheres and magnetic fields that offered some protection from harmful radiation originating from the Sun, along with bodies of liquid water on their surface. We know that these conditions led to the origin of life on Earth, so it is possible that the same could have happened on Mars.
While life on Earth was beginning to thrive, Mars lost its magnetic field as its core cooled. This exposed the planet to harmful solar rays which began to erode the atmosphere. As the atmosphere disappeared, the Martian surface became colder and drier, eventually becoming the freezing desert we know today.
This is why many scientists don’t expect to find living organisms on the surface of Mars – it is simply too inhospitable for life as we know it. Instead, the hope lies in uncovering microbial life hidden in protected underground or icy regions.
Where could life survive on Mars?
Possible locations for Martian microbial life include caves, inside or underneath ice sheets at the poles, or deep underground. All of these environments have analogues (environments with certain similarities) on Earth that host microorganisms. So it is not much of a stretch to consider that if life began on Mars, it could still be holding on in these extreme niches.
Perhaps the most plausible of these is underground – the Martian subsurface. Extending from a few metres to several kilometres deep, it is thought to be the planet’s most stable and long-lived potential habitat.
While the surface has been cold, dry, and generally inhospitable for much of Martian history, the deep subsurface may have offered more favourable conditions. On Earth, the deep biosphere – the life that survives beneath the surface – provides a useful comparison.
A substantial amount of Earth’s microbial life exists underground, surviving in cracks within rocks. These ecosystems are dominated by lithoautotrophs – microbes that get energy by feeding on those rocks. Methane, a potential byproduct of some
lithoautotroph feeding habits, has even been detected on Mars. But there are many
ways to generate methane underground without life, so right now this doesn’t tell us much.
The potential for a deep biosphere hinges on factors including the availability of
liquid water, a source of energy, space to live in, and tolerable temperatures. There is possible evidence for the existence of liquid water below the surface of Mars, but this is still under debate.
If present, such water would facilitate chemical reactions known as water-rock reactions, which generate energy for microbes to live on. Because of Mars’s weaker gravity, rocks there may be less compressed than those on Earth and remain more porous at depth, providing space for microbes to live in.
At the same time, Mars produces less heat from its interior, which means temperatures suitable for life could extend nearly twice as deep underground as they do on Earth.
Scientists spend a lot of time analysing places on Earth – Mars analogues – to try to understand the possibilities for past and present life on Mars. These environments are not identical to Mars, but they share at least one important feature such as extreme dryness, high salt levels, or high UV exposure.
Earth’s deep subsurface is one example, and others include the Atacama Desert in South America, sediments at Lake Salda in Turkey, and salts found in Utah’s Pilot Valley. Researchers around the world are investigating these sites on Earth to better understand how Martian conditions might affect life and its preservation. As no one location on Earth could possibly match all Martian conditions, scientists also run controlled laboratory experiments.
An example of this is the use of specialised “Mars chambers” to reproduce Martian environmental conditions such as its atmosphere, radiation exposure, and temperature. All of these investigations combined help us to better understand the potential for life to exist on Mars.
The Mars chamber at Nasa’s Goddard Space Flight Center.
Signs of life today?
Right now there is no conclusive evidence of life on Mars past or present. Nasa’s
“leopard spots” are the most promising signs we have, but these are still
inconclusive. If life exists on Mars today, it is almost certainly not widespread like on Earth – our probes and rovers would have seen it.
However, important opportunities lie ahead. The upcoming European Space Agency (Esa) ExoMars Rosalind Franklin rover will be able to drill up to two metres below the Martian surface. This will give us a chance to study the shallow subsurface of Mars which may contain living microorganisms. But this is only the start – most scientists agree that we will need to go deeper.
Drilling deep on Earth is still a huge challenge and there is so much we don’t know about our own subsurface life. Probing the deep subsurface of Mars will be a major scientific and engineering challenge, but one that may hold the key to finding existing Martian life.
Seán Jordan receives funding from the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (grant agreement No 1101114969) and from Research Ireland (Pathway award 22/PATH-S/10692). He is affiliated with the Research Ireland Centre for Applied Geosciences (iCRAG).
Devyani Jambhule receives funding from the Research Ireland Pathway Award (22/PATH-S/10692). She is affiliated with the Origin of Life Early-career Network (OoLEN).
Source: The Conversation – UK – By Ben Garrod, Professor of Evolutionary Biology and Science Engagement, University of East Anglia
The pant-hoot of a chimpanzee is one of the most visceral sounds in nature – a rolling call that rises to a crescendo. I once heard the call cutting through the heavy silence of the evening air. The cacophony trailed off and ended with the two apes patting one another, in reassurance and reconciliation.
Unlike most chimpanzee hoots performed in dense African forests, the echoes of this one bounced off the towering sandstone pillars of a cathedral. There were no chimpanzees in sight, just two humans in front of an audience of hundreds, at a science festival. As my heart rate returned to normal, I sat back down to resume my interview with the legendary Dr Jane Goodall.
News of her death, at the age of 91, is being felt around the globe. The grief is both personal and collective. For countless biologists, naturalists, conservationists and animal-lovers, she was a constant presence – a guiding light who shifted how we see the natural world and our place in it.
Having progressed from a secretarial course straight into a doctorate at Cambridge, Jane was no stranger to facing challenges head on. She lived in a tent in rural Tanzania, accompanied by her incredibly supportive mother, to study the behaviour of wild chimpanzees.
Her mentor was the renowned anthropologist Louis Leakey, who believed invaluable insights into our own evolutionary history could be gleaned from the study of orangutans, gorillas and chimpanzees. Many doubted her methods, but Jane was the first to record detailed evidence of hunting and even tool use in chimpanzees. Her groundbreaking work paved the way for identifying culture in non-human animals and, more importantly, helped shatter assumptions about the divide between humans and animals.
Following in her footsteps
Jane changed the way we view and understand animals, and hundreds, if not thousands, of academics have followed in her footsteps to carry on and further her work. Many of us in academia view the world, at times, with a narrow, laser-like focus. It’s what we are trained to do and is often seen as the gold standard. But Jane was always a fan of the wider picture, a more holistic approach. She left active academia to focus on protecting her beloved chimpanzees through community-driven conservation and education.
She took on the seemingly impossible task of engaging, supporting and empowering children and young people around the world, setting up the “Roots & Shoots” programme through the Jane Goodall Institute. It’s now active in more than 100 countries, with millions of young people having taken part. Her aim was simple but radical: to empower the next generation to act with compassion and knowledge, whatever path they chose.
Moving between worlds
What made Jane extraordinary was not just her science, but her voice. She forged a path in that very grey area between high-level science, political discourse and public engagement. She was plain-speaking and never lacked integrity. She was a calm and trusted voice amid a clamouring crowd of increasingly dishonest politicians and clickbait influencers. Jane brought science, conservation and advocacy to millions.
She made us all part of the dialogue and equipped us, through patient and diligent explanation, to contribute meaningfully. Her work and her approach meant no one was excluded from having a voice or from offering ideas, advice or solutions. We are rarely very good at doing that in science, but Jane made it her modus operandi. Her calm and trusted voice brought often complex and emotive scientific concepts and challenges to a level where we could all become stakeholders. She made us realise that our actions had global impacts and that what happens across the world can affect us all.
The fact that she was equally at ease meeting world leaders, sitting on the couch of prime-time entertainment shows, speaking at academic conferences or visiting rural schools in the global south demonstrated a rare ability to engage with us all. If we had even a few more voices like Jane’s, perhaps there wouldn’t be such a disconnect between science and society.
There will be countless ways to carry on Jane’s legacy, but one of the most powerful is to encourage more of us to make science accessible to all. One of her most poignant quotes was: “What you do makes a difference, and you have to decide what kind of difference you want to make.” We can only make the differences we need to make if we are more compassionate and better scientifically informed.
Ben Garrod does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The bladder is easy to overlook – until it starts causing trouble. This small, balloon-like organ in the lower urinary tract quietly stores and releases urine, helping the body eliminate waste and maintain fluid balance.
But just like your heart or lungs, your bladder needs care. Neglect it and you risk discomfort, urinary tract infections and, in some cases, serious conditions such as incontinence (involuntary leakage of urine) or even cancer.
The good news: many bladder problems are preventable and linked to everyday habits. Here are six common habits that can sabotage bladder health.
1. Holding in urine too long
Delaying a bathroom visit allows urine to build up and stretches the bladder muscles. Over time this can weaken their ability to contract and empty the bladder completely, leading to urinary retention. Research shows that holding urine gives bacteria more time to multiply, raising the risk of urinary tract infections (UTIs).
Experts recommend emptying your bladder every three to four hours. In severe cases, chronic retention can even damage the kidneys. When you do go, relax – women in particular should sit fully on the toilet seat rather than hovering, so the pelvic muscles can release. Take your time and consider double voiding: after you finish, wait 10–20 seconds and try again to ensure the bladder is fully emptied.
2. Not drinking enough water
Dehydration makes urine more concentrated, which irritates the bladder lining and increases infection risk. Aim to drink six to eight glasses of water (about 1.5 to 2 litres) a day, more if you’re very active or in hot weather. If you have kidney or liver disease, check with your doctor first.
Too little fluid can also lead to constipation. Hard stools press on the bladder and pelvic floor, making bladder control harder.
3. Too much caffeine and alcohol
Caffeine and alcohol can irritate the bladder and act as mild diuretics, increasing urine production. A study found that people consuming over 450mg of caffeine per day – roughly four cups of coffee – were more likely to experience incontinence than those drinking less than 150mg.
Another study showed men who drank six to ten alcoholic drinks per week were more likely to develop lower urinary tract symptoms than non-drinkers. Heavy alcohol use may also increase bladder cancer risk, although the evidence is mixed. Cutting back can ease bladder symptoms and reduce long-term risk.
4. Smoking
Smoking is a major cause of bladder cancer, responsible for about half of all cases. Smokers are up to four times more likely to develop the disease than non-smokers, especially if they started young or smoked heavily for years – cigars and pipes included.
Tobacco chemicals enter the bloodstream, are filtered by the kidneys and stored in urine. When urine sits in the bladder, these carcinogens, including arylamines, can damage the bladder lining.
5. Poor bathroom hygiene
Improper hygiene can introduce bacteria into the urinary tract. Wiping from back to front, using harsh soaps or neglecting hand-washing can all upset the body’s natural microbiome and increase UTI risk.
6. Poor diet and lack of exercise

What you eat and how active you are affect your bladder more than you might expect. Excess weight puts pressure on the bladder and increases the likelihood of leakage. Regular exercise helps maintain a healthy weight and prevents constipation, which otherwise presses on the bladder.
Certain foods and drinks – including fizzy drinks, spicy meals, citrus fruits and artificial sweeteners – can irritate the bladder and worsen symptoms for those already prone to problems. Aim for a fibre-rich diet with plenty of whole grains, fruit and vegetables to protect both digestive and bladder health.
Bladder health is shaped by everyday choices. Staying well-hydrated, avoiding irritants, practising good hygiene and listening to your body can all help prevent long-term problems. If you notice persistent changes such as frequent urination, difficulty emptying the bladder, pain or burning when you pee, cloudy or smelly urine, or any sign of blood, see a healthcare professional. Your bladder will thank you.
Dipa Kamdar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.