A hundred years after its release, Eisenstein's "Battleship Potemkin" continues to fascinate

Source: The Conversation – France (in French) – By Dušan Radunović, Associate Professor/Director of Studies (Russian), Durham University

The Odessa Steps, the setting of the film's massacre scene, have found their way into world cinema many times over the past hundred years. Goskino / Wikipedia, CC BY

Although it sticks only partially to the real events of the 1905 mutiny, "Battleship Potemkin" shows how a historical film can shape collective memory and the language of cinema.


A landmark of Russian cinema, Sergei Eisenstein's Battleship Potemkin premiered in Moscow on December 24, 1925. The many tributes paid by filmmakers throughout the century that followed demonstrate, if proof were needed, that it remains relevant today. So what made this work, known for its bold handling of historical events, one of the most influential historical films ever made?

The story of how the film was made offers some answers. After the success of his first feature film, Strike, in 1924, Eisenstein was commissioned in March 1925 to make a film commemorating the 20th anniversary of the Russian revolution of 1905. That vast popular revolt, triggered by appalling working conditions and deep social discontent, shook the Russian Empire and posed a direct challenge to the imperial autocracy. The insurrection failed, but its memory endured.

Initially titled The Year 1905, Eisenstein's project was part of a nationwide cycle of public commemorative events across the Soviet Union. The aim was to weave the progressive episodes of pre-1917 Russian history – in which the general strike of 1905 held a central place – into the fabric of the new Soviet life.

The original screenplay envisioned the film as a dramatization of ten major historical episodes of 1905, with no direct link between them: the Bloody Sunday massacre and the mutiny aboard the imperial battleship Potemkin, among others.

The famous Odessa Steps scene from the film "Battleship Potemkin."

Filming the mutiny, recreating history

Principal photography began in the summer of 1925 but met with little success, prompting an increasingly frustrated Eisenstein to move the crew to the southern port of Odessa. He then decided to abandon the episodic structure of the original screenplay and refocus the film on a single episode.

The new story concentrated exclusively on the events of June 1905, when the sailors of the battleship Potemkin, then anchored near Odessa, rose up against their officers after being ordered to eat rotten, maggot-infested meat.

The mutiny and the events that followed in Odessa were now to be dramatized in five acts. The first two acts and the fifth and final one corresponded to the historical facts: the sailors' rebellion and their successful escape through the fleet of loyalist ships.

As for the two central parts of the film, which show the solidarity of the people of Odessa with the mutineers, they were entirely rewritten and drew only very loosely on the historical events. And yet, curiously, a hundred years after its release, the film's reputation as the quintessential historical narrative rests mainly on these two acts. How can this paradox be explained?

The answer may lie in the two central episodes themselves, and in particular in the fourth, whose harrowing depiction of a massacre of unarmed civilians – including the famous scene of the baby in a runaway pram hurtling down the steps – gives the film its strong emotional resonance and moral grounding.

Moreover, although almost entirely fictional, the celebrated Odessa Steps sequence incorporates many historically grounded themes from the original screenplay, notably widespread antisemitism and the oppression inflicted by the tsarist authorities on their own people.

These events are then rendered in striking fashion through Eisenstein's distinctive use of montage, which repeats images of the suffering of innocents to underline the impersonal brutality of the tsarist oppressor.

Battleship Potemkin can be seen as a work of collective memory, able to provoke and direct an emotional response in the viewer, joining past and present in order to redefine them in a particular way. A century later, Eisenstein's way of treating the past, insisting on an emotional bond with the viewer even as he recreates history, remains closely tied to our own ways of remembering and constructing history.

With the hindsight of 2025, Battleship Potemkin, with its revolutionary idealism and its promise of a better society, has lost much of its appeal after the betrayal of those very ideals, from the Stalinist purges of the 1930s to the current devastation of Ukraine. Contemporary viewers would need the film's original message to be revitalized in new contexts, inviting them to resist power and oppression and to express solidarity with the marginalized and the oppressed.

Echoes in modern cinema

It is no surprise that the British Film Institute chose this year to release a restored version of the film, for it has exerted such a deep and pervasive influence on Western visual culture that many viewers may not realize how firmly its language is rooted in mainstream cinema.

How Hollywood reinterpreted the famous steps scene.

Alfred Hitchcock successfully took up Eisenstein's frenetic, chaotic editing techniques in the shower scene of Psycho (1960), where the horror comes less from what is shown than from what the editing suggests. He also pays explicit homage to Eisenstein in the film's second murder, which takes place on the staircase of the Bates house.

This pattern was later taken up in many films, notably Tim Burton's Batman (1989), with the Joker played by Jack Nicholson. Nicholson himself had already acted out a violent confrontation on a staircase in Stanley Kubrick's The Shining (1980), while Todd Phillips's Joker (2019) became iconic for a controversial dance sequence on a public staircase.

William Friedkin's The Exorcist (1973) also seems to owe much to Eisenstein's style, with two deaths taking place at the bottom of the now-iconic Georgetown staircase. Terry Gilliam's Brazil (1985) likewise gives Eisenstein a parodic nod, but it is Brian De Palma's The Untouchables (1987) that offers the most explicit homage to the Odessa Steps sequence, notably with its scene of a baby in a runaway pram, placing Eisenstein's influence at the heart of contemporary Hollywood cinema.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and have declared no affiliations other than their research institution.

ref. A hundred years after its release, Eisenstein's "Battleship Potemkin" continues to fascinate – https://theconversation.com/cent-ans-apres-sa-sortie-le-cuirasse-potemkine-deisenstein-continue-de-fasciner-272594

No, our brains don't suddenly become "fully developed" at 25: here's what neuroscience says

Source: The Conversation – (in Spanish) – By Taylor Snowden, Post-Doctoral Fellow, Neuroscience, Université de Montréal

If you scroll through TikTok or Instagram long enough, you will inevitably come across the phrase: "Your frontal lobe isn't fully developed yet." It has become a go-to neuroscience explanation for bad decisions, such as ordering one drink too many at the bar or texting that ex you swore you'd never, ever write to again.

It's true that the frontal lobe plays a fundamental role in high-level functions such as planning, decision-making and judgment. And it's easy to find comfort in the idea that there is a biological excuse for why we sometimes feel unsteady, impulsive or like a work in progress: an immature frontal lobe. Life between the ages of 20 and 30 is unpredictable, and clinging to the notion that much of it happens because the brain hasn't finished developing can feel strangely reassuring.

But the idea that the brain, and the frontal lobe in particular, stops developing at 25 is a myth. Like many myths, it has its roots in real scientific findings that have been oversimplified. In fact, the latest research suggests that frontal lobe development continues into our 30s.

Where does the "age 25 myth" come from?

The magic number comes from brain imaging studies carried out in the late 1990s and early 2000s. In a 1999 study, researchers tracked brain changes through repeated scans of children and adolescents. They looked at gray matter, which can be thought of as the "thinking" component of the brain.

The researchers found that, during adolescence, gray matter goes through a process called "pruning." That is, early in life the brain forms an enormous number of neural connections; as we get older, it gradually trims away the ones used less often and strengthens those that remain.

The growth and subsequent loss of gray matter volume are fundamental to brain development.

The brain matures in phases

In research led by neuroscientist Nitin Gogtay, the brains of children as young as four were scanned, with follow-up scans every two years. The scientists found that, within the frontal lobe, regions mature from back to front.

The most primitive regions, such as the areas responsible for voluntary muscle movement, develop first, while the more advanced regions, important for decision-making, emotional regulation and social behavior, had not fully matured by the time the participants turned 20 and the follow-up ended.

Because data collection stopped at age 20, the researchers could not pinpoint exactly when development ends. Age 25 became the best estimate for the supposed endpoint.

What the latest research reveals

Since those early studies, neuroscience has advanced considerably. Rather than examining individual regions in isolation, researchers now study how efficiently the different parts of the brain communicate with one another.

A major recent study assessed the efficiency of brain networks, essentially how the brain is wired, through the topology of white matter. White matter is made up of long nerve fibers that connect different parts of the brain and spinal cord, allowing electrical signals to travel back and forth.

The researchers analyzed scans from more than 4,200 people, from infancy to age 90, and found several key developmental periods, including one between the ages of 9 and 32, which they called "adolescence."

For anyone who has reached adulthood, it may come as a shock to be told that your brain is still "adolescent" at 30. But the term simply means that your brain is in a stage of key changes.

According to this study, it seems that during brain adolescence the brain balances two key processes: segregation and integration. Segregation is about building "neighborhoods" of related thoughts. Integration is like building "highways" to connect those neighborhoods. The research suggests that this construction doesn't settle into a pattern we could consider "adult" until after age 30.

The study also found that "small-worldness" – a measure of network efficiency – was the strongest predictor of brain age in this group. If we compare the brain to a public transport system, and imagine routes that require stops and transfers, increasing small-worldness is like adding express lanes. Essentially, more complex thoughts have more efficient routes through the brain.
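
For readers who want to see how such an efficiency measure is typically written down, network neuroscience commonly quantifies small-worldness by comparing a brain network's clustering and path length with those of a random network of the same size. The formula below is the standard textbook definition, not necessarily the exact metric used in the study cited here:

\sigma = \frac{C / C_{\text{rand}}}{L / L_{\text{rand}}}

where C is the clustering coefficient (how tightly the "neighborhoods" are knit), L is the characteristic path length (how many steps the average "route" takes), and C_rand and L_rand are the same quantities for a comparable random network. Values well above 1 indicate tight local clusters linked by efficient long-range shortcuts.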

However, this brain infrastructure doesn't last forever. After age 32, there is a turning point at which these developmental trends reverse direction. The brain stops prioritizing the "highways" and shifts back toward segregation, locking in the routes it uses most.

In other words, during adolescence and your 20s the brain wires itself up, and after 30 it settles in and maintains the routes it uses most.

Making the most of a brain under construction

If our brains are still under construction throughout our 20s, how do we make sure we're building the best possible structure? One answer lies in boosting neuroplasticity, the brain's ability to rewire itself.




Read more:
What is brain plasticity and why is it so important?


Although the brain remains changeable throughout life, the period between the ages of 9 and 32 represents a unique opportunity for structural growth. Research suggests there are many ways to foster neuroplasticity.

High-intensity aerobic exercise, learning new languages and taking up cognitively demanding hobbies, such as chess, can strengthen your brain's neuroplastic capacities, while chronic stress can hinder them.

For those aiming to have a high-performing brain at 30, it helps to challenge it in your 20s, although it's never too late to start.

The Conversation

Taylor Snowden does not receive a salary, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has declared no relevant ties beyond the academic appointment cited.

ref. No, our brains don't suddenly become "fully developed" at 25: here's what neuroscience says – https://theconversation.com/no-nuestro-cerebro-no-se-desarrolla-completamente-de-golpe-a-los-25-anos-esto-es-lo-que-dice-la-neurociencia-272593

Deepfakes leveled up in 2025 – here’s what’s coming next

Source: The Conversation – USA – By Siwei Lyu, Professor of Computer Science and Engineering; Director, UB Media Forensic Lab, University at Buffalo

AI image and video generators now produce fully lifelike content. AI-generated image by Siwei Lyu using Google Gemini 3

Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices and full-body performances that mimic real people increased in quality far beyond what even many experts expected would be the case just a few years ago. They were also increasingly used to deceive people.

For many everyday scenarios — especially low-resolution video calls and media shared on social media platforms — their realism is now high enough to reliably fool nonexpert viewers. In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions.

And this surge is not limited to quality. The volume of deepfakes has grown explosively: Cybersecurity firm DeepStrike estimates an increase from roughly 500,000 online deepfakes in 2023 to about 8 million in 2025, with annual growth nearing 900%.

I’m a computer scientist who researches deepfakes and other synthetic media. From my vantage point, I see that the situation is likely to get worse in 2026 as deepfakes become synthetic performers capable of reacting to people in real time.

Just about anyone can now make a deepfake video.

Dramatic improvements

Several technical shifts underlie this dramatic escalation. First, video realism made a significant leap thanks to video generation models designed specifically to maintain temporal consistency. These models produce videos that have coherent motion, consistent identities of the people portrayed, and content that makes sense from one frame to the next. The models disentangle the information related to representing a person’s identity from the information about motion so that the same motion can be mapped to different identities, or the same identity can have multiple types of motions.

These models produce stable, coherent faces without the flicker, warping or structural distortions around the eyes and jawline that once served as reliable forensic evidence of deepfakes.

Second, voice cloning has crossed what I would call the “indistinguishable threshold.” A few seconds of audio now suffice to generate a convincing clone – complete with natural intonation, rhythm, emphasis, emotion, pauses and breathing noise. This capability is already fueling large-scale fraud. Some major retailers report receiving over 1,000 AI-generated scam calls per day. The perceptual tells that once gave away synthetic voices have largely disappeared.

Third, consumer tools have pushed the technical barrier almost to zero. Upgrades from OpenAI’s Sora 2 and Google’s Veo 3 and a wave of startups mean that anyone can describe an idea, let a large language model such as OpenAI’s ChatGPT or Google’s Gemini draft a script, and generate polished audio-visual media in minutes. AI agents can automate the entire process. The capacity to generate coherent, storyline-driven deepfakes at a large scale has effectively been democratized.

This combination of surging quantity and personas that are nearly indistinguishable from real humans creates serious challenges for detecting deepfakes, especially in a media environment where people’s attention is fragmented and content moves faster than it can be verified. There has already been real-world harm – from misinformation to targeted harassment and financial scams – enabled by deepfakes that spread before people have a chance to realize what’s happening.

AI researcher Hany Farid explains how deepfakes work and how good they’re getting.

The future is real time

Looking forward, the trajectory for next year is clear: Deepfakes are moving toward real-time synthesis that can produce videos that closely resemble the nuances of a human’s appearance, making it easier for them to evade detection systems. The frontier is shifting from static visual realism to temporal and behavioral coherence: models that generate live or near-live content rather than pre-rendered clips.

Identity modeling is converging into unified systems that capture not just how a person looks, but how they move, sound and speak across contexts. The result goes beyond “this resembles person X,” to “this behaves like person X over time.” I expect entire video-call participants to be synthesized in real time; interactive AI-driven actors whose faces, voices and mannerisms adapt instantly to a prompt; and scammers deploying responsive avatars rather than fixed videos.

As these capabilities mature, the perceptual gap between synthetic and authentic human media will continue to narrow. The meaningful line of defense will shift away from human judgment. Instead, it will depend on infrastructure-level protections. These include secure provenance such as media signed cryptographically, and AI content tools that use the Coalition for Content Provenance and Authenticity specifications. It will also depend on multimodal forensic tools such as my lab’s Deepfake-o-Meter.
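
To make the idea of cryptographically signed media more concrete, here is a minimal sketch in Python of hash-and-sign provenance checking. It illustrates the general principle only, not the C2PA manifest format mentioned above; the function names are hypothetical, and it assumes the third-party cryptography package for Ed25519 signatures.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    # The publisher signs the SHA-256 digest of the raw media bytes.
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_media(public_key: Ed25519PublicKey, media_bytes: bytes, signature: bytes) -> bool:
    # A recipient recomputes the digest and checks it against the signature.
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    signing_key = Ed25519PrivateKey.generate()     # the publisher's key pair
    original = b"...raw video bytes..."            # stand-in for a real media file
    signature = sign_media(signing_key, original)

    print(verify_media(signing_key.public_key(), original, signature))         # True: untouched
    print(verify_media(signing_key.public_key(), original + b"!", signature))  # False: altered

A real provenance system such as C2PA embeds a signed manifest describing an asset's origin and edit history inside the media file itself, so the claim travels with the content, but the core verification step reduces to a signature check like the one sketched above.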

Simply looking harder at pixels will no longer be adequate.

The Conversation

Siwei Lyu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Deepfakes leveled up in 2025 – here’s what’s coming next – https://theconversation.com/deepfakes-leveled-up-in-2025-heres-whats-coming-next-271391

New materials, old physics – the science behind how your winter jacket keeps you warm

Source: The Conversation – USA – By Longji Cui, Assistant Professor of Mechanical Engineering, University of Colorado Boulder

Modern winter jackets use a few time-honored physics principles to keep you warm. Magda Indigo/Moment via Getty Images

As the weather grows cold this winter, you may be one of the many Americans pulling their winter jackets out of the closet. Not only can this extra layer keep you warm on a chilly day, but modern winter jackets are also a testament to centuries-old physics and cutting-edge materials science.

Winter jackets keep you warm by managing heat through the three classical modes of heat transfer – conduction, convection and radiation – all while remaining breathable so sweat can escape.

In a fireplace, heat transfer occurs by all three methods: conduction, convection and radiation. Radiation is responsible for most of the heat transferred into the room. Heat transfer also occurs through conduction into the room’s floor, but at a much slower rate. Heat transfer by convection also occurs through cold air entering the room around windows and hot air leaving the room by rising up the chimney.
Douglas College Physics 1207, CC BY

The physics has been around for centuries, yet modern materials innovations represent a leap forward that lets those principles shine.

Old science with a new glow

Physicists like us who study heat transfer sometimes see thermal science as “settled.” Isaac Newton first described convective cooling, the heat loss driven by fluid motion that sweeps thermal energy away from a surface, in the early 18th century. Joseph Fourier’s 1822 analytical theory of heat then put conduction – the transfer of thermal energy through direct physical contact – on mathematical footing.

Late-19th-century work by Josef Stefan and Ludwig Boltzmann, followed by the work of Max Planck at the dawn of the 20th century, made thermal radiation – the transfer of heat through electromagnetic waves – a pillar of modern physics.
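
For readers who want the equations behind those names, the three mechanisms are usually summarized by their standard textbook rate laws; these are the general forms, not figures taken from this article:

q_{\text{cond}} = -k\,A\,\frac{dT}{dx}, \qquad
q_{\text{conv}} = h\,A\,(T_s - T_\infty), \qquad
q_{\text{rad}} = \varepsilon\,\sigma\,A\,\left(T_s^4 - T_{\text{surr}}^4\right)

Here k is the material's thermal conductivity, h the convection coefficient, A the surface area, T_s, T_infinity and T_surr the surface, ambient-fluid and surrounding temperatures, epsilon the emissivity and sigma the Stefan-Boltzmann constant. In jacket terms, trapped still air lowers the effective k, a windproof shell keeps h small by preserving the boundary layer of warm air, and a reflective liner reduces the effective emissivity.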

All these principles inform modern materials design. Yet what feels new today are not the equations but the textiles. Over the last two decades, engineers have developed extremely thin synthetic fibers that trap heat more efficiently and treatments that make natural down repel water instead of soaking it up. They’ve designed breathable membranes full of tiny pores that let sweat escape, thin reflective layers that bounce your body heat back toward you, coatings that store and release heat as the temperature changes, and ultralight materials.

Together, these innovations give designers far more control over warmth, breathability and comfort than ever before. That’s why jackets now feel warmer, lighter and drier than anything Newton or Fourier could have imagined.

Trap still air, slow the leak

Conduction is the direct flow of heat from your warm body into your colder surroundings. In winter, all that heat escaping your body makes you feel cold. Insulation fights conduction by trapping air in a web of tiny pockets, slowing the heat’s escape. It keeps the air still and lengthens the path heat must take to get out.

The puffy segments in a jacket are filled with down.
Victoria Kotlyarchuk/iStock via Getty Images

High-loft down makes up the expansive, fluffy clusters of feathers that create the volume inside a puffer jacket. Combined with modern synthetic fibers, the down immobilizes warm air and slows its escape. New types of fabrics infused with highly porous, ultralight materials called aerogels pack even more insulation into surprisingly slim layers.

Tame the wind, protect the boundary layer

A good winter jacket also needs to withstand wind, which can strip away the thin boundary layer of warm air that naturally forms around you. A jacket with a good outer shell blocks the wind’s pumping action with tightly woven fabric that keeps heat in. Some jackets also have an outer layer of lamination that keeps water and cold air out, and a woven pattern that seals any paths heat might leak through around the cuffs, hems, flaps and collars.

The outer membrane layer on many jacket shells is both waterproof and breathable. It stops rain and snow from getting in, and it also lets your sweat escape as water vapor. This feature is key because insulation, such as down, stops working if it gets wet. It loses its fluff and can’t trap air, meaning you cool quickly.

How modern jackets manage heat: Left, a typical insulated shell; right, layers that trap air, block wind, and reflect infrared heat without adding bulk.
Wan Xiong and Longji Cui

These shells also block wind, which protects the bubble of warm air your body creates. By stopping wind and water, the shell creates a calm, dry space for the insulation to do its job and keep you warm.

New tricks to reflect infrared heat

Even in still air, your body sheds heat by emitting invisible waves of heat energy. Modern jackets address this by using new types of cloth and technology that make the jacket’s inner surface reflect your body’s heat back toward you. This type of surface has a subtle space blanket effect that adds noticeable warmth without adding any bulk.

However, how jacket manufacturers apply that reflective material matters. Coating the entire material in metallic film would reflect lots of heat, but it wouldn’t allow sweat to escape, and you might overheat.

Some liners use a micro-dot pattern: The reflective dots bounce heat back while the gaps between them keep the material breathable and allow sweat to escape.

Another approach moves this technology to the outside of the garment. Some designs add a pattern of reflective material to the outer shell to keep heat from radiating out into the cold air.

When those exterior dots are a dark color, they can also absorb a touch of warmth from the sun. This effect is similar to window coatings that keep heat inside while taking advantage of sunlight to add more heat.

Warmth only matters if you stay dry. Sweat that can’t escape wets a jacket’s layer of insulation and accelerates heat loss. That’s why the best winter systems combine moisture-wicking inner fabrics with venting options and membranes whose pores let water vapor escape while keeping liquid water out.

What’s coming

Thin reflective surfaces bounce infrared heat – similar to the ‘space-blanket’ effect used in aerospace and modern jacket liners.
Vincent Besnault/The Image Space via Getty Images

Describing where heat travels through textiles remains challenging because, unlike light or electricity, heat diffuses through nearly everything. But new classes of materials and surfaces with ultrafine patterns are allowing scientists to better control how heat moves through fabrics.

Managing warmth in clothing is part of a broader heat-management challenge in engineering that spans microchips, data centers, spacecraft and life-support systems. There’s still no universal winter jacket for all conditions; most garments are passive, meaning they don’t adapt to their environment. We dress for the day we think we’ll face.

But some engineering researchers are working on environmentally adaptive textiles. Imagine fabrics that open microscopic vents as the humidity rises, then close them again in dry, bitter air. Picture linings that reflect more heat under blazing sun and less in the dark. Or loft that puffs up when you’re outside in the cold and relaxes when you step indoors. It’s like a science fiction costume made practical: Clothing that senses, decides and subtly reconfigures itself without you ever touching a zipper.

Today’s jackets don’t need a new law of thermodynamics to work – they couple basic physics with the use of precisely engineered materials and thermal fabrics specifically made to keep heat locked in. That marriage is why today’s winter wear feels like a leap forward.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. New materials, old physics – the science behind how your winter jacket keeps you warm – https://theconversation.com/new-materials-old-physics-the-science-behind-how-your-winter-jacket-keeps-you-warm-266877

Who thinks Republicans will suffer in the 2026 midterms? Republican members of Congress

Source: The Conversation – USA – By Charlie Hunt, Associate Professor of Political Science, Boise State University

House Speaker Mike Johnson will have to defend a narrow majority in the 2026 elections. A near-record number of retiring Republicans won’t make that task easier. J. Scott Applewhite/AP

The midterm elections for Congress won’t take place until November, but already a record number of members have declared their intention not to run – a total of 43 in the House, plus 10 senators. Perhaps the most high-profile person to depart, Republican Rep. Marjorie Taylor Greene of Georgia, announced her intention in November not just to retire but to resign from Congress entirely on Jan. 5 – a full year before her term was set to expire.

There are political dynamics that explain this rush to the exits, including frustrations with gridlock and President Donald Trump’s lackluster approval ratings, which could hurt Republicans at the ballot box.

Rather than get swept away by a prospective “blue wave” favoring Democrats – or possibly daunted by the monumental effort it would take to survive – many Republicans have decided to fold up the beach chair and head home before the wave crashes.

As of now, two dozen Republican House members have either resigned from the House or announced their intent not to run for reelection in 2026. With only two exceptions – Republicans in 2018 and 2020 – this is more departures from either party at this point in the election calendar than in any other cycle over the past 20 years.

There is also growing concern within the House Republican caucus that Greene’s announcement is a canary in the coal mine and that multiple resignations will follow.

As a political scientist who studies Congress and politicians’ reelection strategies, I’m not surprised to see many House members leaving ahead of what’s shaping up to be a difficult midterm for the GOP. Still, the sheer number of people not running tells us something about broader dissatisfaction with Washington.

Why do members leave Congress?

Many planned departures are true retirements involving older and more experienced members.

For example, 78-year-old Democratic congressman Jerry Nadler is retiring after 34 years, following mounting pressure from upstart challengers and a growing consensus among Democrats that it’s time for older politicians to step aside. Nancy Pelosi, the former speaker who will turn 86 in March, is also retiring.

Sometimes, members of Congress depart for the same reasons other workers might leave any job. Like many Americans, members of Congress might find something more attractive elsewhere. Retiring members are attractive hires for lobbying firms and corporations, thanks to their insider knowledge and connections within the institution. These firms usually offer much higher salaries than members are used to in Congress, which may explain why more than half of all living former members are lobbyists of some kind.

Democrat Nancy Pelosi, who was first elected in 1986, will step down at the end of this Congress.
Jose Luis Magana/AP

Other members remain ambitious for elective office and decide to use their position in Congress as a springboard for another position. Members of the House regularly retire to run for a Senate seat, such as, in this cycle, Democratic Rep. Haley Stevens of Michigan. Others run for executive offices, including governor, such as Republican Rep. Nancy Mace of South Carolina.

But some are leaving Congress due to growing frustration with the job and an inability to get things done. Specifically, many retiring members cite growing dysfunction within their own party, or in Congress as a whole, as the reason they’re moving on.

In a statement announcing his departure in June, Sen. Thom Tillis, R-N.C., mused that “between spending another six years navigating the political theater and partisan gridlock in Washington or spending that time with my family,” it was “not a hard choice” to leave the Senate.

What’s unique about 2026?

In addition, there are a few other factors that can help explain why so many Republicans in particular are heading for the exits leading up to 2026.

The shifting of boundaries that has come with the mid-decade redistricting process in several states this year has scrambled members’ priorities. Unfamiliar districts can drive incumbents to early retirement by severing their connection with well-established constituencies.

In Texas, six Republicans and three Democrats – nearly a quarter of the state’s entire House delegation – are either retiring or running for other offices, due in part to that state’s new gerrymander for 2026.

All decisions about retirement and reelection are sifted through the filter of electoral and partisan considerations. A phenomenon called “thermostatic politics” predicts that parties currently in power, particularly in the White House, tend to face a backlash from voters in the following election. In other words, the president’s party nearly always loses seats in midterms.

In 2006 and 2018, for example, Republican members of Congress were weighed down by the reputations of unpopular Republican Presidents George W. Bush and Trump. Republicans had arguably even greater success in midterm elections during Barack Obama’s presidency.

Currently, 2026 looks like it will present a poor national environment for Republicans. Trump remains highly unpopular, according to polls, and Democrats are opening up a consistent lead in the “generic ballot” question, which asks respondents which party they intend to support in the 2026 midterms without reference to individual candidates.

Democrats have already been overperforming in special elections, as well as the general election in November in states such as New Jersey and Virginia, which held elections for governor. Democrats are on average running 13 points ahead of Kamala Harris’ performance in the 2024 election.

As a result, even Republicans in districts thought to be safe for their party may see themselves in enough potential danger to abandon the fight in advance.

Retirement vs. resignation

One final, unique aspect of this election cycle with major consequences is not an electoral but an institutional one.

House conservatives are quietly revolting against Speaker Mike Johnson’s leadership style. That members may be frustrated enough not just to retire but resign in advance, leaving their seats temporarily vacant, is a notable sign of dysfunction in the U.S. House.

This also could have a major impact on policy, given how slim the Republicans’ majority in the lower chamber is already. Whatever the outcome of the midterms in November, these departures clearly matter in Washington and offer important signals about the chaos in Congress.

The Conversation

Charlie Hunt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Who thinks Republicans will suffer in the 2026 midterms? Republican members of Congress – https://theconversation.com/who-thinks-republicans-will-suffer-in-the-2026-midterms-republican-members-of-congress-271285

Resolve to network at your employer’s next ‘offsite’ – research shows these retreats actually help forge new connections

Source: The Conversation – USA (2) – By Madeline Kneeland, Assistant Professor of Management, Babson College

Getting to know new colleagues over a short period of time can pay off later on. Tom Werner/DigitalVision via Getty Images

What do you do when an announcement about an “offsite” hits your work inbox? Chances are you might sigh and begrudgingly add the event to your calendar.

These events, also called retreats, bring colleagues together for a mix of structured activities and free time – freeing them from their regular work obligations. For one or two days, employees take a mandatory break from their normal routines at work and at home. Participants spend a lot of that time making small talk with colleagues, as well as engaging in structured interactions that may include awkward icebreakers.

Although networking is one of these events’ main purposes, some people find that networking for the purpose of meeting professional goals can feel transactional, uncomfortable or even dirty. Unsure about whether it will be worth the time and effort, you might ask: What’s in it for me?

We are management professors who study how professional networks help information and resources move across organizations and create opportunities. Our research findings suggest participating in an offsite could be well worth the time and hassle.

And it might quietly reshape your working relationships in unexpected ways.

Taking time and costing money

While these gatherings have become relatively common, we were surprised to learn how little research there is on whether they work. In particular, few scholars have dug into their effectiveness in helping people forge new connections.

Offsites can help with strategic planning, team development and goal setting. They’re often held once or twice a year. The timing varies from one employer to the next. But the period from December through March is becoming more popular.

They tend to bring people together who rarely interact through their work – particularly at large employers with offices spread across the country or even the world, and in organizations with remote-first work arrangements.

Retreats help people get better acquainted in many informal ways, whether it’s sharing meals, exchanging ideas or chatting in hallways. Those interactions and the more structured ones, such as brainstorming exercises conducted in previously assigned groups, make it easier to connect with colleagues.

After years of remote work when people mainly gathered over Zoom, employers continue to look for ways to rebuild connections and to address a surge in disengagement.

These retreats for professionals have apparently become more popular following the COVID-19 pandemic, as part of the larger rebound in business-related travel. A survey of 2,000 full-time employees from a range of industries found that the percentage of companies hosting no offsites at all fell to 4% in 2024, from 16% in 2019.

Further, many companies are allocating larger budgets for offsites and budgeting more time during off-site retreats for social purposes, the same survey found.

Off-site retreats require planning to make sure there’s time for colleagues to make new connections.

Mapping a law firm’s networking patterns

When we spoke with managers from several large firms about their off-site practices, we were surprised that they simply assumed collaboration was an inevitable outcome.

To test whether that was true, we studied the working relationships of more than 700 partners in a large U.S. law firm, which we agreed not to name to access its data. Over eight years, from 2005 to 2012, these partners attended – or skipped – the firm’s annual retreats.

We tracked the partners’ attendance and their collaborative work for the firm’s clients before and after the offsites. Because lawyers at this firm – and elsewhere – record their work in 6-minute increments, it was possible to analyze billing records for the partners’ collaboration on client projects.

The results of this mapping exercise surprised us. And they may change your feelings about whether retreats are worth your time and energy.

Helping partners get noticed

We found that after participating in an offsite, partners were more likely to reach out to other partners whom they had not worked with previously.

To our surprise, we found that even workers who didn’t attend an offsite acted more collaboratively afterward. Having received the message that collaboration is important to the firm, they made up for missing out by finding other ways to start collaborating with more colleagues.

But building a successful career also depends on something harder to control than whether you reach out to new colleagues and clients: You need your colleagues to think of you when opportunities arise. And that likelihood can increase when you participate in offsites.

Getting 24% more requests to collaborate

We found an increase in newly formed connections across the law firm after these events. New collaborations on billable work increased, generating more revenue for the firm. And the targets of these new collaborations tended to be the people who took part in the offsite.

The partners who attended the offsite became more visible and had 24% more new requests to collaborate on work for a client in the two months following the retreat than those who did not. Importantly, these relationships were not superficial. Almost 17% of these new working relationships continued over the next two years.

While we analyzed only the relationships that formed shortly after the offsite, it is likely that colleagues remember those they meet at these events. The people who attend them continue to reap network-based benefits beyond what we found in the data.

We also found that offsites helped attorneys forge connections with lawyers in the firm’s other practice groups more than with those on their own team.

Overall, lawyers who went to an offsite made more new connections afterward – about one per month – than the ones who didn’t go.

Bridging silos at work

In the course of day-to-day work, people tend to interact most with the colleagues they already know.

This pattern seems to be even stronger in remote work. Offsites helped to break that pattern by giving professionals opportunities to engage with colleagues they don’t know. Sometimes, they end up eager to collaborate with people they meet this way.

These more distant connections can help people obtain diverse information, resources and perspectives and create opportunities to productively brainstorm.

When you work for a big employer, it can be hard to meet colleagues on other teams. Offsites may provide a significant opportunity to build networks and stand out among peers.

While offsites may never be your favorite way to spend a few days, our research shows that they can serve an important function for employers and employees alike.

The Conversation

Madeline Kneeland received funding for this research from The Strategic Management Society.

Adam M. Kleinbaum does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Resolve to network at your employer’s next ‘offsite’ – research shows these retreats actually help forge new connections – https://theconversation.com/resolve-to-network-at-your-employers-next-offsite-research-shows-these-retreats-actually-help-forge-new-connections-270762

Midlife weight gain can start long before menopause – but you can take steps early on to help your body weather the hormonal shift

Source: The Conversation – USA (3) – By Vinaya Gogineni, Obesity Medicine Fellow, Vanderbilt University

Hormone changes that begin years before menopause can cause gradual muscle loss and increased insulin resistance. Morsa Images/DigitalVision via Getty Images

You’re in your mid-40s, eating healthy and exercising regularly. It’s the same routine that has worked for years.

Yet lately, the number on the scale is creeping up. Clothes fit differently. A bit of belly fat appears, seemingly overnight. You remember your mother’s frustration with the endless dieting, the extra cardio, the talk about “menopause weight.” But you’re still getting your periods. Menopause should be at least half a decade away.

So what’s really going on?

We are a primary care physician with expertise in medical weight management and an endocrinologist and obesity medicine specialist. We hear this story nearly every day. Women doing everything “right” suddenly feel like their bodies are working against them.

And while lifestyle choices still matter, the underlying cause isn’t willpower. It’s physiology.

Most women expect the weight struggle to begin after menopause. But research suggests the real metabolic shift happens years earlier. During the multiyear transition to menopause, women’s bodies begin processing sugar and carbs less efficiently, while their metabolism slows down at rest. That can drive weight gain – especially around the midsection – even if a person’s habits haven’t changed much.

There are physiological processes that begin long before menopause itself, but weight gain around the menopause transition isn’t necessarily inevitable. Recognizing this early window makes it possible to intervene while your body is still adaptable.

The silent shift before menopause

Menopause is officially defined as 12 months without a period. But the body’s hormonal transition, which comes from changes in signaling between the brain and ovaries, begins years earlier during a stage called perimenopause. This phase is when estrogen and progesterone start to fluctuate unpredictably.

Those hormonal shifts ripple through nearly every metabolic system. Estrogen helps regulate fat distribution, muscle repair and insulin sensitivity. When levels swing wildly, the body begins storing fat differently, moving it from the hips and thighs to the abdomen. Muscle protein synthesis also slows down.

The result is gradual muscle loss and increased insulin resistance, even when habits haven’t changed. At the same time, these hormonal changes can disrupt sleep, influence cortisol levels and alter appetite.

Just as those physiological changes are revving up, intensive caregiving and other demands are often increasing too, leaving less time for exercise, sleep and other basic self-care.

What’s most striking isn’t the number on the scale, but rather the change in body composition. Even if weight stays the same, women often lose muscle and gain belly fat. This deeper fat surrounds vital organs and is linked to inflammation and a higher risk of type 2 diabetes, heart disease, liver disease and sleep disorders.

Why perimenopause is the real turning point

A study called the Study of Women’s Health Across the Nation has been tracking women of different backgrounds in many parts of the U.S. since 1994 to investigate the physiological changes that occur throughout a woman’s midlife years. One of its key findings was that fat mass begins increasing and lean muscle declines during perimenopause, long before periods stop.

The 30s and 40s can be an opportunity to build metabolic resilience.
Thomas Barwick/DigitalVision via Getty Images

Once this accelerated redistribution plateaus during menopause, reversal becomes much harder, though not impossible.

That’s why perimenopause should be viewed as a window of metabolic opportunity. The body is still adaptable; it’s responsive to strength training, high-quality nutrition and better sleep routines. With the right strategies, women can offset these hormonal effects and set themselves up for a healthier transition through menopause and beyond.

Unfortunately, most health care approaches to the menopause transition are reactive. Symptoms like hot flashes or sleep issues are addressed only after they appear. Rarely are women told that metabolic risk reduction starts years earlier, during this hidden but critical phase of life.

What most women haven’t been told

The usual advice of “eat less, move more” misses the point for women in their 40s. It oversimplifies biology and ignores hormonal context.

For example, for exercise, cardio alone is insufficient for weight management and optimal metabolic health. Strength training, which is too often overlooked, becomes essential to preserve lean muscle and maintain insulin sensitivity. Adequate protein intake supports these changes as well.

Sleep and stress regulation are equally vital. Estrogen fluctuations can disrupt cortisol rhythms, leading to cravings, fatigue and nighttime awakenings. Prioritizing sleep-hygiene practices such as limiting screen time before bed, getting morning sunlight, avoiding late-night eating and exercising earlier in the day helps regulate these hormonal rhythms.

Understanding why these habits matter gives important context for strategizing sustainable modifications that fit each person’s lifestyle.

How women can take action early

The decades of one’s 30s and 40s don’t need to be a countdown to decline, but instead, an opportunity to build metabolic resilience. With awareness, evidence-based strategies and proactive care, women can navigate perimenopause and the menopause transition with confidence and strength. Here are a few strategies to start with:

Lift weights. Aim for two to three sessions of resistance or strength training per week to preserve muscle and boost metabolism. Work on progressive overload, which refers to the gradual increase in stress placed on your muscles.

Prioritize protein. Include adequate protein in every meal to support muscle, increase satiety and stabilize blood sugar. There is a growing body of evidence indicating a need for a higher protein requirement than the current Recommended Dietary Allowance guidelines. Aim for 0.55 to 0.73 grams of protein per pound (1.2 to 1.6 grams of protein per kilogram) of body weight daily to reduce the risk of age-related muscle loss. For a 150-pound (68-kilogram) woman, for example, that works out to roughly 83 to 110 grams of protein per day.

Sleep smarter. Sleep hygiene and stress management help regulate cortisol and appetite hormones. Aim for between seven and eight hours of quality sleep each night.

Ask different questions. During annual checkups, talk to your clinician about body composition and metabolic health, not just weight. And preemptively discuss the risks and benefits of menopause hormone therapy.

Your metabolism isn’t broken; it’s adapting to a new stage of your life. And once you understand that, you can work with your body, not against it.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Midlife weight gain can start long before menopause – but you can take steps early on to help your body weather the hormonal shift – https://theconversation.com/midlife-weight-gain-can-start-long-before-menopause-but-you-can-take-steps-early-on-to-help-your-body-weather-the-hormonal-shift-271070

Apongo was a rebel leader in Jamaica – a diary entry sheds light on his west African origins

Source: The Conversation – Africa – By Devin Leigh, Lecturer, Global Studies, University of California, Berkeley

For over three centuries, between 1526 and 1866, at least 10.5 million Africans were forcibly trafficked to the Americas in the transatlantic slave trade. Over half of them (with known places of departure) left from a 3,000km stretch of the west African coast between what are today Senegal and Gabon.

Scholars trying to uncover the lives of these diasporic Africans are forced to work with historical records produced by their European and American enslavers. These writers mostly ignored Africans’ individual identities. They gave them western names and wrote about them as products belonging to a set of supposedly distinct “ethnic” brands.

Now, however, the curious biography of an 18th-century Jamaican rebel confounds this inherited language. The rebel in question is Apongo, also known as Wager. His biography is a 134-word handwritten passage in the diary of an 18th-century enslaver named Thomas Thistlewood.

As a historian of the Atlantic World in the 1700s, I use the life stories and archives of British enslavers to better understand these times.

My recent study uses Thistlewood’s biography of Apongo as a window into the origins of enslaved west Africans, particularly those from what are today the nations of Ghana and Benin.

Apongo’s story offers an opportunity to better understand the complexities of west African identity and to put a more human face on those enslaved.

Who was Apongo, aka Wager?

Like many enslaved Africans, Apongo had two names. Unfortunately, neither of them completely unlocks his backstory. “Apongo” is probably the rendering of his African name into English script according to how it sounded to his enslavers’ ears. “Wager” is a name Apongo was given by his white “master”. It had nothing to do with his African origins. In fact, it was the name of his enslaver’s ship.

Thistlewood was an English migrant to Jamaica who thought of himself as a gentleman scholar. According to one of his diary entries, Apongo led an extraordinary life defined by twists of fate. He was the prince of a west African state that paid tribute to a larger kingdom called “Dorme”. After subjugating the peoples around him, the king of Dorme seems to have sent Apongo on a diplomatic mission to Cape Coast Castle in what is today Ghana. At the time it was the headquarters of Great Britain’s trading operations on the African coast.

While there, Apongo was apparently surprised, enslaved, and trafficked to Jamaica. At the time, Jamaica was the British Empire’s most profitable colony. This was due to its sugar plantation complex based on racial slavery.

Once in Jamaica, Apongo reunited with the governor he had visited at Cape Coast. He tried to obtain his freedom but, after failing for a number of years, led and died in an uprising called Tacky’s Revolt.

Unfolding over 18 months from 1760 and named after another one of its leaders, Tacky’s Revolt left 60 Whites and over 500 Blacks dead. Another 500 Blacks were deported from the island. It was arguably the largest slave insurrection in the British Empire before the 19th century.

The mystery in the diary

To appreciate why Thistlewood’s diary entry is so valuable, we must know something about the lack of biographical information on enslaved Africans. Almost all came from societies with oral rather than literary traditions. They were then almost universally prohibited from learning to read and write by their European and American “masters”.

Enslavers almost never recorded enslaved people’s birth names. Instead, they gave them numbers for the transatlantic passage and westernised names after they arrived. Rather than recording the specific places they came from, they lumped them together into groups based on broad zones of provenance. For example, the British tended to call Africans who came from today’s Ghana “Coromantees”. Those from today’s Republic of Benin were known as “Popo”. So, despite being just one paragraph long, Thistlewood’s diary entry on Apongo is among the most detailed biographical sketches historians have of a diasporic African in the 1700s.

But it also contains a mystery. The word Thistlewood used to describe Apongo’s origins, “Dorme” or perhaps “Dome”, is unfamiliar. Since 1989, when historian Douglas Hall first wrote about Apongo, scholars have assumed it was a reference to Dahomey. This was a militarised west African kingdom in the southern part of today’s Benin.

Yet scholars never defended that assumption. Recently, it was called into question by historian Vincent Brown in Tacky’s Revolt, the first book-length study of the slave uprising Apongo helped lead. Enslaved people from what is today Ghana have a well-documented history of leading slave revolts in the Americas, particularly in British Jamaica. Brown suggested that it made more sense if “Dorme” referred to an unidentified state in that region.

Now, in my study, I have built on this work to make two related arguments. Having uncovered three contemporary texts that use variant spellings of the word “Dorme” to refer to Dahomey, I argue that Thistlewood’s term was indeed a contemporary word for “Dahomey” in 18th-century Jamaica and that Dahomey was almost certainly the kingdom he had in mind. Moreover, I demonstrate that it was both possible and reasonable for a diplomatic mission to have taken place between Dahomey and Cape Coast in Apongo’s time. In fact, such a mission did take place in 1779, when King Kpengla of Dahomey sent one of his linguists to Cape Coast as an emissary.

But none of this resolves the central question. The evidence of “Coromantee” involvement in Tacky’s Revolt and other Jamaican slave rebellions – including the presence of Ghanaian names among rebels and the statements of historians at the time – is overwhelming. Additionally, although Africans from Dahomey made the trip to Cape Coast Castle during the 18th century, visitors from states in today’s Ghana were certainly much more common.

Ultimately, to argue that Apongo had origins in Dahomey, one must explain how a subject of that kingdom came to be a general in a rebellion largely characterised by Ghanaian leadership.

A question of origins

What are we to make of Apongo’s origins? One answer is that Thistlewood was wrong. Apongo was “Coromantee” and we should think of him as Ghanaian. Thistlewood merely associated him with Dahomey because that was the militarised African kingdom best known to Europeans at the time.

Another possibility is that Thistlewood was correct. Apongo was “Popo” and so we should write about him as Beninese. Thistlewood simply relayed a fact of Apongo’s life and was unconcerned with questions that now preoccupy us, such as how Apongo came to lead a rebellion that appears characterised by “Coromantee” leadership.

A third answer is that Apongo’s identity was more complex than this inherited “ethnic” language allows. Perhaps he was someone who traversed, and was fluent in, the cultural and political worlds of both Ghana and Benin. If so, his story reminds us that these two adjacent regions, at least, were not as distinct as early-modern writers claimed and as later colonial and national borders supposed.

The search for Apongo is just a small part of historians’ larger, ongoing, and collaborative work to recreate the lives of Africans taken in the transatlantic slave trade.

While asking these questions requires us to work with sources written by enslavers, we do so in the hope that we can ultimately see beyond them. Our reward is better understanding how Africans’ forgotten perspectives shaped the history of our world.

The Conversation

Devin Leigh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Apongo was a rebel leader in Jamaica – a diary entry sheds light on his west African origins – https://theconversation.com/apongo-was-a-rebel-leader-in-jamaica-a-diary-entry-sheds-light-on-his-west-african-origins-268014

The woman Marx never wanted to meet: Flora Tristán, the self-taught thinker who could have changed the history of socialism

Source: The Conversation – in French – By María Begoña Pérez Calle, Professor of Economics, Universidad de Zaragoza

Flora Tristán died in Bordeaux at the age of 41.
Wikimedia Commons, CC BY

A self-taught thinker and tireless activist, Flora Tristán observed social reality with a scientist’s eye in order to propose an alternative model of society and work.


For Flora Tristán (Paris, 1803 – Bordeaux, 1844), the transformation of society had to be total, and communicating with the labouring masses mattered as much as the literary dissemination of her model. That is why she did not simply write for those who could afford to buy and read a book, but sought to reach the working classes directly.

Her innovative proposal tied the workers’ question inseparably to the women’s question: there would be no proletarian liberation without the liberation of women. Women’s emancipation was therefore the necessary condition of universal justice. Flora Tristán thus anticipated debates that, many years later, would become central to feminist discourse.

Although she was born into an aristocratic milieu, the Franco-Peruvian writer, socialist thinker and feminist, regarded as one of the pioneers of modern feminism and a forerunner of the international workers’ movement, never received an education from governesses.

When she was four, misfortune struck the family with the death of her father, Mariano Tristán y Moscoso, who had never legally formalised his marriage to her mother, Thérèse Laisnay; French law did not recognise a purely religious marriage as legitimate. The young widow, pregnant with another child and left without any assets, moved to the countryside with Flora for several years, and the little family suffered a genuine loss of social standing.

The life of an outcast

Back in Paris, by then a teenager working as a labourer, Flora Tristán married her young employer, André Chazal, in 1821. Four years later, after repeated marital conflict and while pregnant with her third child, she fled the conjugal home, abandoning her husband. Divorce did not exist, and the Chazals’ separation had no legal standing. For several years Flora lived as an outcast in France and England. In 1833 she crossed the ocean to Peru to claim her inheritance from the Tristán family. The family received her rather favourably and her uncle granted her some income, but without recognising any right to the inheritance.

She returned to Europe two years later, having added substantial fieldwork to her personal experience. She had developed a pioneering methodology for describing and denouncing injustices of race, class and gender: travelling, talking with people, gathering data using the survey model, and producing analyses and proposals.

In this way Flora Tristán, entirely self-taught, practised genuine social science grounded in the observation of reality, producing innovative work that fused theoretical reflection with practical experience.

In works such as Pérégrinations d’une paria (1838) and Promenades dans Londres (1840), she denounced the poverty and lack of education of the labouring classes, child poverty, prostitution and the discrimination suffered by women. She identified the structural inequalities of capitalist society as the root of these problems.

Faced with this situation, she proposed her own model of social organisation, at the heart of which was a proletariat consolidated through the Workers’ Union (Union ouvrière). This proletariat was to be educated and to enjoy social protection. The point was not only to constitute itself as a productive force, but also to transform history.

A militant of a socialism in the making

From 1835 onwards she enjoyed real literary success and won the esteem of intellectual circles. It was within the Workers’ Union that she chose to commit herself as a militant of a nascent socialism, asserting a style all her own and becoming the passionate spokeswoman for her ideas, her models and her theories.

She thus began her tour of France, an exhausting exercise in mass communication. Such a way of life was unusual for a woman of her time. But Flora Tristán placed her intellectual talent in the service of a redemptive mission in which she believed deeply.

Her vision was also characterised by a rejection of revolutionary violence as the sole path to salvation. She acknowledged the antagonism between labour and capital, but her strategies for social reform rested on fraternity. Her goal was to achieve justice and universal love.

Her sense of vocation as a woman messiah gave her the strength to lead and engage with thousands of working men and women during her tours across France. Her health was fragile, possibly owing to a tumour, and all this travelling left her drained. These relentless efforts, combined with a probable case of typhus, hastened her death in 1844, four years before the publication of the Communist Manifesto.

After her death, her voice was not absorbed into the socialism of Karl Marx and Friedrich Engels, which the latter described as scientific and which placed class struggle almost exclusively at the centre of its thinking. Engels dismissed earlier approaches as “utopian”.

And yet one need only read the biographies of Robert Owen, Charles Fourier, the Saint-Simonians and Tristán herself to see that the term “utopia” does not do justice to the scope of their thought and their actions.

In late 1843 in Paris, the German philosopher Arnold Ruge advised the young Marx to meet Flora Tristán, but Marx never did. Engels, for his part, mentioned that he knew of her work, but without shedding a certain indifference.

Always in the background

Tristán has remained in the background of the official history of socialism, even though she anticipated many debates that would later gain prominence. More than a century on, the recognition of the feminist dimension of her discourse, important and necessary as it is, has overshadowed every other aspect.

Today it is essential to give Flora Tristán her full due: the downwardly mobile young aristocrat who had no governesses but ended up learning from Owen and Fourier.

She must also be recognised as a socialist of the Romantic era, a pioneer of the social sciences, a communicator of extraordinary talent and the creator of an alternative model of society, production and work: the Workers’ Union.

One may wonder what would have happened had she lived longer. She would very likely never have accepted that her model of socialism be labelled “utopian”. Had she lived to see 1864, she would most probably have taken part in the First International and, despite her condition as a woman, her commanding presence would doubtless have influenced the course of socialism in one way or another.

The Conversation

María Begoña Pérez Calle does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her research organisation.

ref. La femme que Marx n’a jamais voulu rencontrer : Flora Tristán, l’autodidacte qui aurait pu changer l’histoire du socialisme – https://theconversation.com/la-femme-que-marx-na-jamais-voulu-rencontrer-flora-tristan-lautodidacte-qui-aurait-pu-changer-lhistoire-du-socialisme-272454

How agricultural plastics end up in the depths of the Mediterranean

Source: The Conversation – (in Spanish) – By Carmen Morales Caselles, Professor and researcher in Ecology, Universidad de Cádiz

Greenhouses on the coast of Almería, Spain. Mike Workman/Shutterstock

On a planet that must feed more than 8 billion people, agriculture is a key piece of the puzzle. The food and raw materials we use every day depend on it. And within that machinery, plastic has become a routine tool: it is used in greenhouses, irrigation systems and crop covers. Thanks to these materials, it has been possible to increase productivity and reduce water consumption.

For years now, we have even been able to see this reality from space. Large agricultural areas appear as white patches in satellite images: surfaces covered by greenhouses and agricultural plastics.

Greenhouses around El Ejido, Almería, seen from space.
NASA, CC BY-SA

However, this growing dependence has an environmental cost, one that went unnoticed for a long time. Some of this plastic is not managed properly and ends up in the natural environment as waste.

In a study recently published in iScience, we analysed for the first time the journey of agricultural plastics beyond the farmland where they are used. Our work shows that much of this material does not stay on land.

Over time, the plastics used in agriculture disperse into the environment and end up far from where they were used. We have detected this waste more than 100 kilometres offshore, in the depths of the sea.

From the ramblas to the sea: the plastic’s route

Ramblas are dry watercourses that wind through the landscape down to the sea. They remain dry for most of the year and, when it rains, they channel water rapidly to the coast. While they wait, they also become silent dumps for litter.

In many of these channels, half of the waste found consists of agricultural plastics. During dry periods, these materials accumulate without attracting attention.

One thing we have learned is that the situation changes with heavy rainfall. In a matter of hours, the water sweeps away everything in its path, including large quantities of plastic.

Read more:
What made the DANA so destructive? Environmental and human factors

This waste flows straight into the sea. Over time, some of it sinks and some drifts offshore. Heavy rains can mobilise large volumes of waste in a very short time.

What is on land one day may turn up in fishing nets the next, be spotted by divers, or wash back onto the coast with the waves. Most of it, however, goes unnoticed and ends up lost in the vastness of the sea.

A problem that extends beyond the Mediterranean

Although our study focused on the Alborán Sea, the same thing may be happening in many parts of the world. In the Mediterranean, up to 38% of the coastline is occupied by farmland. Much of it is irrigated and uses large amounts of plastic.

This combination increases the risk of agricultural waste ending up in the sea. Regions of America, Asia or Africa with intensive coastal agriculture could face a similar problem.

The mix of farming close to the coast, poor waste management and episodes of extreme rainfall is turning pollution from agricultural plastics into a global phenomenon. It is no longer a local or regional problem.

Yet this waste has received less attention in international debates, where the conversation tends to focus on other types of marine litter.

And the problem does not end there. Over time, plastics fragment into very small pieces known as microplastics. These fragments can be ingested by marine organisms. On top of that, many plastics contain chemical substances that can be harmful. When they enter ecosystems, these compounds add a further risk to marine life.

Read more:
Chemical pollution from plastic, a silent threat
When agricultural plastic is mistaken for marine plastic

When we find a lone net in the sea, we usually assume it comes from fishing: a ghost net, lost or abandoned by a boat. However, that is not always the case.

We have found that, in many cases, this material does not come from the fishing sector. It may be agricultural netting used to support crops. This netting is made of plastic and is used for just one season. Its useful life is usually less than a year, after which it is rarely reused or recycled, for lack of an effective system, so a significant share ends up dispersed in the environment.

Confusing agricultural plastics with marine plastics has important consequences. It hampers their proper management and treatment, and it distorts our understanding of where marine pollution comes from. If this waste is not correctly identified, agriculture’s role in the problem is underestimated while the environmental impact is attributed to other sectors. That makes it impossible to design effective and fair solutions.

What are we doing wrong? What can we improve?

After use, agri-plastics are typically disposed of in three ways: landfill, physical recycling and pyrolysis. Although agricultural waste management schemes exist in several European countries, such as Spain (where one is being developed), France, Germany and Ireland, the study shows that many of them do not work properly. This suggests the problem may also arise in other regions of the world.

Countries that, according to the FAO, have launched voluntary or mandatory initiatives for the selective recovery of agricultural waste for recycling.
Morales-Caselles et al., 2025, CC BY-NC

Today, the management of agricultural plastics focuses mainly on clean-up: action is taken once the waste is already in the environment, but that is not enough. We need to act across the whole life cycle of the plastic, from manufacture through use to disposal. This requires integrated policies, adapted to the local context, that reduce waste generation at source.

One of the most urgent measures is to cut the use of unnecessary plastics in agriculture. It is also essential to opt for reusable, more durable alternatives. These solutions can maintain productivity without harming the environment.

Another key aspect is to strengthen shared responsibility. Producers and users must ensure that all materials are collected and managed properly. Monitoring systems that prevent waste from ending up in nature are essential for this.

All of this must go hand in hand with support for the farming sector. Training and awareness-raising make it possible to promote good practice from the outset and avoid material losses.

International initiatives such as the Global Plastics Treaty, currently under negotiation at the United Nations, offer a unique opportunity. They can establish common rules that address the problem at every stage of plastic use.

Read more:
The plastics treaty: if it is not approved, production could triple by 2060

The future of sustainable agriculture cannot rest on materials that compromise the very ecosystems it depends on. Agricultural plastics have been allies of productivity, but we now need to rethink how we use them. Only preventive, integrated and transparent management will keep the food we grow from leaving a plastic footprint on land and at sea.

The Conversation

The study behind this article has been partly supported by research projects led by Carmen Morales Caselles: project PLAN, under the FEDER 2014-2020 operational programme and the Junta de Andalucía (ref. FEDERUCA18-107828); project DEEP, under the EMERGIA programme; project ISARGO, grant CSN2022-135760, funded by MCIN/AEI/10.13039/501100011033 and by the European Union “Next Generation EU”/PRTR; and project COPLA, PCM_00056, funded by the Consejería de Universidad, Investigación e Innovación of the Junta de Andalucía and by the European Union “Next Generation EU”/PRTR. Much of the data presented in the study was obtained thanks to ECOPUERTOS, which since 2016 has received funding from Ecoembes-Libera to support its activities, including waste classification and work with fishers and divers. The fishers of Motril have collaborated with Ecopuertos by voluntarily collecting the waste found in their catches. Support was also received from volunteer divers and litter collectors who assisted with the sampling.

ref. Cómo acaban los plásticos de la agricultura en las profundidades del Mediterráneo – https://theconversation.com/como-acaban-los-plasticos-de-la-agricultura-en-las-profundidades-del-mediterraneo-269153