Researchers invent instant beer to make at home, with new flavours

Source: The Conversation – (in Spanish) – By Fabian Leonardo Moreno Moreno, Director of the Doctoral Programme in Engineering, Universidad de La Sabana

Instant beer capsules developed by the Universidad de La Sabana (Colombia) and the Universitat Politècnica de Catalunya. CC BY-NC

First, the snap of the cap coming off. Then, bubbles rising slowly. Next come the toast, the first sip, the play of flavours, sometimes more bitter, sometimes sweeter. Whether golden, amber or dark, behind every beer there is a story that usually involves developments born in a laboratory, which travel a long road before reaching the palate of the consumer – who ultimately chooses which beer to drink, how, where and with whom.

According to estimates by the market research firm Mordor Intelligence, the beer business will reach a turnover of US$914.21 billion by 2029. Hence the unceasing investment in the sector, and the continuing evolution of this ancestral drink, already known to the ancient Egyptians, to offer millions of consumers variations in presentation, flavour and experience.

Innovation in capsule form

One example of such innovation is the set of patents obtained by the Universidad de La Sabana (Colombia) and the Universitat Politècnica de Catalunya. The researchers Ruth Yolanda Ruiz, Manuel Osorio and Eduard Hernández (UPC), together with the author of this article, have invented a new kind of beer.

This research has produced a product that consumers can prepare at home almost instantly. We also found a concentration technique that intensifies the flavours.

The idea is to separate the water from the beer by freeze concentration (cryoconcentration): cooling the beer to form ice crystals and then removing them. The result is a reconstitutable beer – an instant alcoholic drink, liquid, dense and concentrated, available in a small format like a coffee capsule.

In other words, the beer reaches the consumer as a small capsule of liquid. Mixing that liquid with cold water and carbonating it in a home carbonating machine yields the original beer without losing concentration, retaining the same properties as a non-instant beer.

This new product offers advantages such as lower transport costs, since there is no need to move the full weight of the liquid plus the glass bottle or can. Storage also becomes simpler: because the capsules take up less space in the refrigerator, energy savings are possible.

More concentration and new flavours

The second patent to come out of the research improves the quality of the beer by increasing its concentration. We achieved this by applying cold processing.

And why cold? The best way to understand it is to think of a soup. When we need it thicker, we can heat it for longer until it reaches the desired consistency. The situation changes with orange juice, however: heating it significantly alters the flavour, and the aromas are lost. Our technology makes it possible to remove the water contained in the beer, leaving behind a more concentrated liquid.

Initially, one of the big challenges was to retain the alcohol during the process. To solve it, we ran tests in the laboratories of the Doctoral Programme in Engineering at the Universidad de La Sabana and at the Universitat Politècnica de Catalunya, through doctoral- and master’s-level research projects.

After analysing the results, we decided to explore new possibilities in industrial beer. The goal was clear: to increase concentration in order to intensify the flavours and uncover new nuances.

The idea behind the process is simple. By partially removing the water, the flavour-giving solids, the volatile compounds and the alcohol all become more concentrated. As a result, the beer acquires a more intense sensory profile, with better-defined aromas and flavours.
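
The arithmetic here is a simple mass balance: the solids and alcohol stay behind while water leaves as ice, so everything remaining is concentrated by the ratio of initial to remaining mass. A minimal sketch, using invented figures rather than anything from the patents themselves:

```python
# Illustrative mass balance for freeze concentration. The numbers are
# hypothetical; this shows the general principle, not the patented process.

def concentration_factor(initial_mass_kg: float,
                         water_fraction: float,
                         ice_removed_kg: float) -> float:
    """How much the dissolved solids are concentrated when
    ice_removed_kg of water leaves the liquid as pure ice."""
    solids_kg = initial_mass_kg * (1.0 - water_fraction)  # solids stay behind
    remaining_mass_kg = initial_mass_kg - ice_removed_kg  # liquid left over
    new_fraction = solids_kg / remaining_mass_kg
    old_fraction = solids_kg / initial_mass_kg
    return new_fraction / old_fraction  # reduces to initial/remaining mass

# 1 kg of beer that is 92% water: removing 0.4 kg of water as ice
# concentrates the solids, flavour compounds and alcohol by roughly 1.7x.
print(round(concentration_factor(1.0, 0.92, 0.4), 2))  # 1.67
```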

But the research did not stop there: we went on to apply the technique to craft beer, a product that undergoes less filtration. This opened the door to new ways of enhancing its qualities, so that both industrial and craft beers can offer deeper, more appealing experiences for all kinds of consumers.

Impact on the industry

In short, we have developed a line of research that is little studied worldwide: innovation in freeze concentration. Applied to the brewing industry, it improves processes and diversifies products. This opens up new business opportunities, encourages job creation and gives consumers the chance to enjoy a new experience in which they can discover the flavours of a well-chilled beer.

The Conversation

Fabian Leonardo Moreno Moreno does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Researchers invent instant beer to make at home, with new flavours – https://theconversation.com/inventan-cerveza-instantanea-para-hacer-en-casa-con-nuevos-sabores-257669

The Mediterranean paradox: a new study explains how the sea appears to have been full and empty at the same time

Source: The Conversation – (in Spanish) – By Daniel García-Castellanos, Earth scientist, Instituto de Geociencias de Barcelona (Geo3Bcn – CSIC)

The Mediterranean went through a dry period during which only shallow, low-salinity lakes survived in its central areas. Pictured, Cala Galdana (Menorca). Wikimedia Commons, CC BY

A layer of salt more than a kilometre thick covers much of the deepest parts of the Mediterranean. It accumulated during one of the most extreme and best-documented environmental events ever to have occurred on Earth, which took place between 5.96 and 5.33 million years ago and is known as the Messinian salinity crisis (MSC).

Ever since this salt was discovered, almost 60 years ago, scientists have debated intensely whether it built up during a near-total desiccation of the Mediterranean or, on the contrary, in a sea brimming with brine (water saturated with salt).

Artist’s impression of the palaeogeography of the channels connecting the Atlantic and the Mediterranean some 6.5 million years ago, before the Messinian salinity crisis. The presence of several corridors, and their depth, kept the Mediterranean’s salinity at normal levels, as is the case today.
Wikimedia Commons, CC BY

Species arriving from the lakes of eastern Europe

The geological evidence from this period seems contradictory. The sediment deposited on top of the salt contains numerous fossils of species from eastern Europe, from a gigantic ancestral lake system known as the Paratethys (today’s Volga, Danube, Black Sea, Caspian Sea and Aral basins, among others).

This Paratethys fauna invaded a Mediterranean that had lost almost all of its marine life because of the extremely high salinity it had reached. The newcomers, by contrast, were adapted to very shallow, barely salty waters.

This low salinity in the Mediterranean after the salt was deposited is consistent with the sea’s isolation from the ocean, and with the fact that almost all the salt in its brine had already precipitated. It reflects the mixing of what remained of the brine with fresh water from rivers and from the Paratethys lakes.

Isolated Mediterranean “puddles”

Harder to explain is the fact that these fossils of ostracods (a class of very small crustaceans), which arrived from the east and are typical of shallow waters, appear everywhere, at very different depths.

They are found near the coast, suggesting that the sea stood at a level similar to today’s, but they have also been recovered from deep marine boreholes, more than 3,000 metres below sea level. The latter suggests that the whole Mediterranean had evaporated and that only shallow lakes remained in its central areas, where the water delivered by rivers evaporated away and the immigrant ostracods could proliferate.

A sea empty and full at the same time

This paradox – a fossil record pointing to a Mediterranean that was both full and empty in the same period – is reflected in the very name given to this stage of the Messinian salinity crisis: the Lago-Mare (“lake-sea”) period.

To try to resolve the apparent contradiction, our team at Geosciences Barcelona (GEO3BCN-CSIC) has numerically simulated rainfall, evaporation, erosion and the other phenomena known to shape the Earth’s relief. The simulations start from a reconstruction of the geography of the time – 5.55 million years ago, when the Mediterranean was already completely isolated – and end at the boundary between the Miocene and the Pliocene, 5.33 million years ago, at the close of the Messinian and of the MSC.

We found two mechanisms capable of causing large oscillations, of more than a kilometre, in the level of the Mediterranean, thereby explaining the puzzling ubiquity of eastern ostracods that lived in waters only a few metres deep.

The first had already been glimpsed by the Dutch group of professor Wout Krijgsman: changes in the Earth’s orbit (for example, the precession of the equinoxes, a slow and gradual change in the orientation of the Earth’s axis of rotation) influence rainfall over the Mediterranean catchments and, therefore, the level of the lakes where that water ended up. However, these rises and falls are insufficient as the sole explanation: they can only drive oscillations of about 600 metres in lake level.

Rivers that no longer reached the sea

The other phenomenon that we propose contributed decisively to changing the Mediterranean’s level during its isolation is erosion along the inflowing rivers draining the Paratethys lakes that surrounded it (for example, the Black Sea, the Caspian Sea and the now-vanished Pannonian Sea). By cutting down these lakes’ outlets, erosion lowered their levels and reduced the amount of water evaporating from them. The excess water was gradually transferred to the Mediterranean, raising its level, possibly by more than a kilometre.

Once the Mediterranean had dried out, it could no longer hold back the rivers at their mouths, and so they began to carve deep canyons. This fluvial erosion propagated upstream (a phenomenon known as headward erosion) until it reached the lakes’ outlets. That lowered their levels and increased the supply of water to the lakes of the desiccated Mediterranean. In this way, a progressive filling of the Mediterranean, as the neighbouring lakes shrank, was superimposed on the oscillations caused by orbital precession.
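
A deliberately crude toy model can illustrate how the two mechanisms combine: a precession-paced oscillation (around 600 metres peak to peak, as above) superimposed on a slow, one-way filling as the upstream lakes drain. This is only an illustrative sketch with invented numbers, not the study’s actual simulation:

```python
# Toy superposition of the two mechanisms described in the text: a
# precession-paced wet/dry oscillation plus a slow, one-way transfer of
# water from the upstream lakes as their outlets erode. All values are
# invented for illustration; this is not the study's model.
import math

P = 21_000         # precession period, years
dt = 100           # time step, years
level = -1500.0    # starting basin level in metres, an assumed value
osc_amp = 300.0    # climate-driven swing of +/-300 m (~600 m peak to peak)
fill_rate = 0.004  # slow filling from the shrinking upstream lakes, m/yr

for t in range(0, 200_000, dt):
    # Rate of climate-driven level change: d/dt of osc_amp * sin(2*pi*t/P)
    climate = osc_amp * (2 * math.pi / P) * math.cos(2 * math.pi * t / P)
    level += (climate + fill_rate) * dt

print(f"basin level after 200,000 years: {level:.0f} m")
```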

Numerical simulation of the changes the Mediterranean underwent during the MSC.

An isolated, exposed sea

The results of this research have just been published in Science Advances and complement another recent paper from the group of professor Giovanni Aloisi, of the Institut de Physique du Globe de Paris, which independently finds evidence of a drawdown of up to 2.1 km below the current sea level.

Taken together, these results seem to narrow the range of plausible scenarios for the MSC, and they show that an almost total isolation and desiccation did indeed occur at the start of the crisis, exposing much of the Mediterranean’s floor to the atmosphere.

An interplay of populations

Our model also helps explain the unprecedented impact this period had on Mediterranean ecosystems, in which at least 89% of marine species did not survive the high salinity.

The impact of the salinisation of the Mediterranean on marine life.

First, the desiccation wiped out almost all of the endemic Mediterranean species, which were replaced by species from the Paratethys lakes during the Lago-Mare period.

Then, with hardly any time for them to adapt, a new environmental change reset life in the Mediterranean once again: the connection with the Atlantic was restored, this time repopulating the sea with Atlantic species.

Ultimately, these advances provide a framework for understanding other “salinity crises” frequent in the Earth’s past, the formation of these gigantic salt deposits, their impact on biological and geological evolution, and the resilience of the environment in the face of such large-scale abrupt change.

The Conversation

Daniel García-Castellanos does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Mediterranean paradox: a new study explains how the sea appears to have been full and empty at the same time – https://theconversation.com/la-paradoja-mediterranea-un-nuevo-estudio-explica-como-el-mar-parece-haber-estado-lleno-y-vacio-a-la-vez-260067

Single-food diets: effective, useless or dangerous?

Source: The Conversation – (in Spanish) – By Ana Montero Bravo, Associate Professor (Profesora Titular), USP-CEU Group of Excellence “Nutrition for Life”, ref: E02/0720, Department of Pharmaceutical and Health Sciences, Faculty of Pharmacy, Universidad San Pablo-CEU, CEU Universities, Universidad CEU San Pablo

A popular summer mono-diet is the watermelon diet. Brent Hofacker/Shutterstock

When summer arrives, supposed “miracle diets” appear everywhere: perfect solutions for shedding those extra kilos that stop us showing off a perfect body. Among them are the so-called mono-diets, restrictive regimes that consist of consuming only one type of food (or a very limited group of foods) for a set period. The aim is to lose weight quickly or to “detoxify” the body.

Popular examples are the pineapple, apple, watermelon, peach and artichoke diets, some based on cereals, such as the rice diet, and even regimes built around protein foods such as tuna or milk. Their apparent simplicity and the promise of quick results explain their success.

A short-lived weight loss

Because these diets impose a drastic calorie reduction, they do produce weight loss in the short term. However, such a low calorie intake lowers blood glucose levels, which activates compensatory mechanisms to maintain the energy supply.

Initially, the body draws on liver glycogen, its main glucose reserve, which keeps blood sugar at adequate levels, especially between meals or during fasting. Once that store is depleted, however, the body starts to break down muscle to obtain amino acids that, via other metabolic pathways, allow glucose to be synthesised. Sustained over time, this process can lead to a significant loss of muscle mass and other metabolic disturbances.

Much of the weight lost therefore corresponds to water and muscle mass rather than body fat, so the results tend to be temporary. When this type of diet ends, people commonly regain the lost weight quickly once they return to their usual eating habits – the well-known “rebound effect”.

In short, mono-diets may be attractive because they produce quick results, but they neither promote weight loss that lasts nor teach healthy eating habits.

But do they have any real benefits?

Beyond the initial weight loss already mentioned, scientific evidence supporting real, lasting benefits of mono-diets is practically non-existent. Some individuals report a “feeling of lightness” or better digestion, but these effects may owe more to cutting out processed foods than to the regime itself.

There can also be a “placebo effect”: believing that they are following a detox diet and “cleansing” their body, people feel better even though no physiological changes have been demonstrated.

Are they dangerous?

Yes, mono-diets can become dangerous, especially if they are prolonged. Their main risk is a deficiency of essential nutrients. By consuming only one type of food, we stop taking in the proteins, healthy fats, vitamins and minerals the body needs to function properly. They can also lead to digestive problems, metabolic disorders, musculoskeletal problems, hormonal alterations and electrolyte imbalances, especially in people whose health is already vulnerable.

Read more: The faster we lose weight, the faster we regain it: true or false?

Another major danger is that they foster an unhealthy relationship with food, marked by restriction and guilt, which in extreme cases can trigger eating disorders such as orthorexia or anorexia nervosa.

In addition, such a radical restriction of nutrients can upset the balance of neurotransmitters in the brain, contributing to irritability and fatigue and harming emotional wellbeing.

Why are they still popular?

Despite the risks described above, mono-diets remain successful, especially on social media and in the press. Their appeal lies in their simplicity and in the promise of quick results without much effort. Many of these diets are also promoted by celebrities or influencers, which lends them a false credibility. Misinformation, aesthetic pressure and a lack of nutrition education in society contribute to their following as well.

It must be stressed that single-food diets can be effective for losing weight quickly and temporarily, but they are not effective in the long term and become dangerous if followed for a prolonged period. They offer no real health benefits and can cause nutritional deficiencies and serious health problems.

For these reasons, they are not advisable and should not be promoted as suitable methods for controlling weight or improving health. The best strategy for reaching and maintaining a healthy weight is still a balanced, varied diet sustained over time, together with regular physical activity and healthy lifestyle habits.

The Conversation

Ana Montero Bravo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Single-food diets: effective, useless or dangerous? – https://theconversation.com/dietas-de-un-solo-alimento-eficaces-inutiles-o-peligrosas-260888

Design and Disability at the V&A is a rich, thought-provoking exhibition

Source: The Conversation – UK – By Laudan Nooshin, Professor of Music, School of Communication and Creativity, City St George’s, University of London

One of the first things to greet visitors at the V&A’s new Design and Disability exhibition is a striking blue bench by artist Finnegan Shannon titled Do You Want Us Here Or Not? The exhibit is a response to the often inadequate seating in museums, which not only acts as a barrier to accessibility for many people but is also more widely symptomatic of ableist approaches to museum and exhibition design.

In this case, the invitation to “Please sit here!” sets the tone for the whole exhibition, which also includes a large sensory map of the layout (located at wheelchair level), a tactile map, and QR codes that link to audio description for blind and partially sighted visitors, and also British Sign Language interpretation.

Aiming to showcase the radical contributions of disabled, deaf and neurodivergent people to design history and contemporary culture from the 1940s until the present, the exhibition goes well beyond this, addressing an impressively wide range of issues around access, disability and exclusion. It also reveals how ableism operates across a range of exclusions, such as race, gender, class and more.

As the introductory notes point out: “Disabled people past and present have challenged and confronted the imbalance of design in society. This exhibition highlights disabled individuals at the heart of design history … It is both a celebration and a call to action.”

While the fight for disability justice goes back many decades – also documented in the exhibition – it’s only relatively recently that questions of access and equality have gone beyond the physical. These include a wide range of issues related to neuro-inclusion and sensory access, including calm spaces and sensory maps that indicate noisy areas.

My own interest in sound in museums has come partly out of research focusing on the role of acoustics in creating accessible spaces, and from my own experience of the noise-sensitivity conditions hyperacusis and misophonia. Inclusive sonic design seeks to address how sound operates as a factor of social inclusion and exclusion in places like museums.

The V&A exhibition comprises three sections: visibility, tools and living. Visibility focuses on design and art as fundamental tools of activism and includes work created as part of disability justice movements over many decades. This section is a stark reminder of the justice and rights that only come about through extensive struggles.

Tools highlights the extraordinary contribution to design innovation made by disabled people. Living explores stories of disabled people claiming space and imagining the worlds they want to live in.

Sections two and three both advocate for the social model of disability in which people are rendered disabled by their environment, something that calls for design solutions (as opposed to the medical model in which people are required to navigate and find solutions to their “problem”).

The exhibition draws attention to a wide range of physical and sensory exclusions, both in the displays and the design of the space itself. The in-house design team includes staff with personal experience of disability who also worked closely with external partners living with disability.

There are plenty of exhibits that can be experienced through touch. For partially sighted visitors, there are strong visual contrasts in the wall colours and the edges of displays are lit up. And there are raised edgings on all exhibits for people using a cane – all of which help with navigation.

There are also quiet areas and plenty of seating. Some of these features are already being incorporated into gallery and exhibition design, and hopefully will soon become standard.

I particularly liked the way various issues intersect in the exhibition, in which a range of exclusions are set alongside one another: race, hearing impairment, youth exclusion and stammering, for example.

Other favourites included the B1 Blue Flame rattling football used for blind football, which visitors can pick up, feel, smell, shake and listen to. The Deaf Rave set and Woojer Vest are designed for deaf clubbers and performers and use vibrating tactile discs that amplify sound vibrations.

The beautiful blanket and pillow entitled Public S/Pacing by Helen Statford offers an invitation to rest, drawing attention to “crip time”, accepting “a different pace to non-disabled norms, challenging conventions of productivity, and resting in radical ways that would actually benefit society at large”.

The blanket highlights the failures of the design of public spaces to include disabled people, “challenging ableist assumptions with care and visibility”. The reverse of the blanket has a quotation from Rhiannon Armstrong’s Radical Act of Stopping (2016), embroidered by Poppy Nash.

The exhibition includes many examples of “disability gain” by which design aimed at a particular group of people unintentionally benefits others, too. An example is the smartphone touchscreen, based on technology developed by engineers Wayne Westerman and John Elias as an alternative to the standard keyboard, which Westerman was unable to use due to severe hand pain.

Initially marketed to people with hand disabilities, the technology was later sold to Apple where it revolutionised mobile phone technology.

The final panel of the exhibition is titled Label for Missing Objects, an imaginative and fitting way to mark the continuing story of designing a world that works for “every body and every mind”.

Design and Disability is a rich, thought-provoking and landmark exhibition. Kudos to the V&A – although its importance is so obvious that I wonder why it took this long to host a show dedicated to disabled artists and designers and the wider social impact of their work.

I very much hope there are plans for the exhibition to tour the UK and beyond, and to become a permanent gallery at the V&A, so that it can inform curation and design work in other museums.

Design and Disability at the V&A runs until February 15 2026.

The Conversation

Laudan Nooshin received funding from the AHRC for the project Place-making Through Sound: Designing for Inclusivity and Wellbeing (2023-24).

ref. Design and Disability at the V&A is a rich, thought-provoking exhibition – https://theconversation.com/design-and-disability-at-the-vanda-is-a-rich-thought-provoking-exhibition-261135

Why Russia is not taking Trump’s threats seriously

Source: The Conversation – UK – By Patrick E. Shea, Senior Lecturer in International Relations and Global Governance, University of Glasgow

The US president, Donald Trump, recently announced that Russia had 50 days to end its war in Ukraine. Otherwise it would face comprehensive secondary sanctions targeting countries that continued trading with Moscow.

On July 15, when describing new measures that would impose 100% tariffs on any country buying Russian exports, Trump warned: “They are very biting. They are very significant. And they are going to be very bad for the countries involved.”

Secondary sanctions do not just target Russia directly, they threaten to cut off access to US markets for any country maintaining trade relationships with Moscow. The economic consequences would affect global supply chains, targeting major economies like China and India that have become Russia’s commercial lifelines.

Despite the dire threats, Moscow’s stock exchange rose by 2.7% immediately following Trump’s announcement. The Russian rouble also strengthened. Globally, oil markets appear to have relaxed, suggesting traders see no imminent risk.

This market reaction coincided with an unruffled Moscow. While official statements noted that time was needed for Russia to “analyse what was said in Washington”, other statements suggested the threats would have no effect. Former Russian president Dmitry Medvedev, for example, declared on social media that “Russia didn’t care” about Trump’s threats.

The positive market reaction and lack of panic from Russian officials tell us more than simple scepticism about Trump’s willingness to follow through.

If investors doubted Trump’s credibility, we would expect market indifference, not enthusiasm. Instead, the reaction suggests that financial markets expected a stronger response from the US. As Artyom Nikolayev, an analyst from Invest Era, quipped: “Trump performed below market expectations.”

A reprieve, not a threat

Trump’s threat isn’t just non-credible – the positive market reaction in Russia suggests it is a gift for Moscow. The 50-day ultimatum is seen not as a deadline but as a reprieve, meaning nearly two months of guaranteed inaction from the US.

This will allow Russia more time to press its military advantages in Ukraine without facing new economic pressure. Fifty days is also a long time in American politics, where other crises will almost certainly arise to distract attention from the war.

More importantly, Trump’s threat actively undermines more serious sanctions efforts that were gaining momentum in the US Congress. A bipartisan bill has been advancing a far more severe sanctions package, proposing secondary tariffs of up to 500% and, crucially, severely limiting the president’s ability to waive them.

By launching his own initiative, Trump seized control of the policy agenda. Once the ultimatum was issued, US Senate majority leader John Thune announced that any vote on the tougher sanctions bill would be delayed until after the 50-day period. This effectively pauses a more credible threat facing the Kremlin.

This episode highlights a problem for US attempts to use economic statecraft in international relations. Three factors have combined to undermine the credibility of Trump’s threats.

First, there is Trump’s own track record. Financial markets have become so accustomed to the administration announcing severe tariffs only to delay, water down or abandon them that the jibe “Taco”, short for “Trump always chickens out”, has gained traction in financial circles.

This reputation for failing to stick to threats means that adversaries and markets alike have learned to price in a high probability of backing down.

Read more: Investors are calling Trump a chicken – here’s why that matters

Second, the administration’s credibility is weakened by a lack of domestic political accountability. Research on democratic credibility in international relations emphasises how domestic constraints – what political scientists call “audience costs” – can paradoxically strengthen a country’s international commitments.

When leaders know they will face political punishment from voters or a legislature for backing down from a threat, their threats gain weight. Yet the general reluctance of Congress to constrain Trump undermines this logic. This signals to adversaries that threats can be made without consequence, eroding their effectiveness.

And third, effective economic coercion requires a robust diplomatic and bureaucratic apparatus to implement and enforce it. The systematic gutting of the State Department and the freezing of United States Agency for International Development (USAID) programmes eliminate the diplomatic infrastructure necessary for sustained economic pressure.

Effective sanctions require careful coordination with allies, which the Trump administration has undermined. In addition, effective economic coercion requires planning and credible commitment to enforcement, all of which are impossible without a professional diplomatic corps.

Investors and foreign governments appear to be betting that this combination of presidential inconsistency, a lack of domestic accountability, and a weakened diplomatic apparatus makes any threat more political theatre than genuine economic coercion. The rally in Russian markets was a clear signal that American economic threats are becoming less feared.

The Conversation

Patrick E. Shea does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why Russia is not taking Trump’s threats seriously – https://theconversation.com/why-russia-is-not-taking-trumps-threats-seriously-261296

Worries about the UK economy are justified, but can the government afford to gamble on raising taxes?

Source: The Conversation – UK – By Alan Shipman, Senior Lecturer in Economics, The Open University

Gloomy economic figures have heaped more pressure on the British government and its promise to improve growth. And if that wasn’t enough, there have also been some stark warnings about public finances and the country’s ability to service its debts.

All of this has led to a growing expectation that the UK chancellor Rachel Reeves will have to bring in some significant tax hikes later this year, or reduce government spending.

But both of these options could worsen the long-term economic outlook, by further constraining GDP growth. That was precisely the fate of governments that pursued an agenda of “austerity” – cuts in spending and higher taxes – to tackle the expanded public debt after the financial crisis of 2008.

It was a strategy that ultimately led to higher public debt. Put simply, when governments spend less, GDP tends to fall. And when GDP falls and a country is less productive, tax revenues go down too.

To make things even more complicated for the chancellor, the UK government has also widened its debt risk by changing its fiscal rules to acknowledge extra financial responsibilities.

This adjustment gave the government more financial assets, including student loans and public pension holdings. But it also meant taking on more liabilities, including the pension schemes it would have to bail out if necessary.

In July 2025, the Office for Budget Responsibility (OBR) identified several other sectors – including universities, housing associations and water companies – whose large debts could become government liabilities in the future.

A bigger balance sheet automatically means more public financial risk. And climate change further raises these risks, the OBR says, by forcing the government to spend more on dealing with environmental damage and eroding fossil-fuel taxes, which still raise around £24 billion for the Treasury.

The OBR is also concerned about the rising cost of pensions for an ageing population. In fact, the UK’s system is not particularly expensive, partly due to its reliance on private pensions (funded by employers and employees).

Yet this reliance brings a different kind of government cost, because these private sector schemes have tried to insulate themselves against the strains of an ageing population, as more employees retire than join the workforce (and as retirees live longer).

Often this has involved shifting from “defined benefit” plans, which guarantee retirement income, to “defined contribution” plans, where payouts depend on how much members pay in and how well funds are invested.

But that shift has also made it harder for the government to borrow the money it needs for public spending.

Defined benefit funds, seeking a steady long-term return, used to be big buyers of UK government bonds (gilts) – the financial assets that the government sells to raise money. In contrast, defined contribution funds invest mainly in equities (company shares), which promise a higher return on investment that can grow pension pots faster.

UK industrial policy supports this shift from gilts to other assets: the government wants pension funds to invest in innovation and infrastructure as a way of stimulating its oft-stated mission of economic growth.

The growth gamble

Yet the move by pensions towards equities is steadily deflating demand for new government bonds. This then forces the government to pay higher interest rates to attract enough buyers, often from overseas.

There is also pressure on the government to relax the “triple lock” on state pensions. This pledge – to raise the basic state pension by at least 2.5% every year, maintained by all parties since 2011 – is costing around three times as much as was projected at launch, despite fewer pensioners escaping poverty since it was introduced.

Overall, inflation and an ageing population have lifted state spending on pensions to around 5% of GDP.

These pressures all strengthen the view that the government will need another tax-raising budget this year. How else will it pay for its plans for spending on healthcare, housing, infrastructure and defence?

Reeves sought to assure voters that the £40 billion of tax rises in her October 2024 budget was enough to plug an inherited “black hole”. But she is already struggling to preserve those projections, after a politically painful retreat from welfare changes designed to save £5 billion.

Hopes that a faster-growing economy would narrow the deficit, by boosting tax receipts and reducing spending requirements, have not been fulfilled.

Yet calls for significant tax increases – which could dampen growth – may still be resisted.

Under pressure, she may well consider a compromise such as a “wealth tax” targeting the richest, which would also satisfy the Labour left. Yet the only way to raise really significant extra funds is to increase income tax, VAT or national insurance, which would be extremely risky politically.

But all economic policy comes with risk. And she may end up sticking with her position and putting her (taxpayers’) money on the hope that today’s deficit will eventually be narrowed by faster growth. Relying on more investment to solve economic problems depends on investors trusting the economic stability of the UK, which is a gamble. But it is a gamble the government may still be willing to take.

The Conversation

Alan Shipman has received funding from the British Academy/Leverhulme Trust and the Harry Ransom Center, University of Texas at Austin.

ref. Worries about the UK economy are justified, but can the government afford to gamble on raising taxes? – https://theconversation.com/worries-about-the-uk-economy-are-justified-but-can-the-government-afford-to-gamble-on-raising-taxes-260880

Britons are less likely than Americans to invest in stocks – but they may not have the full picture

Source: The Conversation – UK – By Sam Pybis, Senior Lecturer in Economics, Manchester Metropolitan University

ymgerman/Shutterstock

UK chancellor Rachel Reeves would like Britons to invest more in stocks – particularly UK stocks – rather than keep their money in cash. She has even urged the UK finance industry to be less negative about investing and highlight the potential gains as well as the risks.

Stock ownership is important for governments for a variety of reasons. Boosting capital markets can encourage business expansion, job creation and long-term economic growth. It can also give people another source of income in later life, especially as long-term investing can offer greater returns than saving.

But in the UK, excluding workplace pensions, only 23% of people have invested in the stock market, compared to nearly two-thirds in the US. Survey results suggest that American consumers are generally more comfortable with financial risks.

And it appears that this greater appetite for risk translates into closer engagement with political and market news. During the market shocks driven by US president Donald Trump’s tariff chaos, many Americans tracked headlines – and their portfolios – closely. This contrasts with the UK, where most people keep their savings in safer assets like cash savings accounts or premium bonds.

If Britons are more risk-averse, media coverage that tends to be noisier when markets fall than when they recover may be having an impact. While concerns regarding market volatility may be valid, they can overshadow the long-term benefits of investing.

One key opportunity that many British consumers have missed out on is the rise of low-cost, diversified exchange-traded funds (ETFs), which have made investing more accessible and affordable. An ETF allows investors to buy or sell baskets of shares on an exchange. For example, a FTSE100 ETF gives investors exposure to the UK’s top 100 companies without having to buy each one individually.

This is exactly the kind of long-term, low-cost investing that Reeves appears to be promoting. But should savers be worried about current market volatility – much of it driven by trade tensions and tariff uncertainty? One view, of course, is that volatility is simply part of investing.

But it could also be argued that the significance of big shifts within the space of a single month is often overstated. People are also likely to be put off by news headlines, which tend to exaggerate the swings in the market.

Examining daily excess returns in the US stock market from November 2024 to April 2025, I plotted cumulative returns (which show how an investment grows over time by adding up past returns) within each month. April 2025 stands out. Despite experiencing several sharp daily losses, the market rebounded swiftly in the days that followed.
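
The calculation behind such a plot is straightforward. Below is a minimal pandas sketch assuming a hypothetical CSV of daily excess returns with "date" and "excess_return" columns; the article’s actual data source is not specified.

```python
# Minimal sketch of a within-month cumulative-return series. The file and
# column names are assumptions made for this example.
import pandas as pd

df = pd.read_csv("daily_excess_returns.csv", parse_dates=["date"])
df = df.sort_values("date")

# Group trading days by calendar month, then add up the daily excess
# returns so far in each month ("adding up past returns", as described
# in the text; compounding with (1 + r).cumprod() - 1 is the usual
# alternative over longer horizons).
df["month"] = df["date"].dt.to_period("M")
df["cum_return"] = df.groupby("month")["excess_return"].cumsum()

print(df.tail())
```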

This pattern isn’t new. Historically, markets have shown a remarkable ability to recover from short-term shocks. Yet many potential investors could be deterred by alarming headlines that, while factually accurate, often highlight single-day declines without broader context.

The reality is that the stock market is frequently a series of short-lived storms. These are volatile, yes, but often followed by calm and recovery.

Fear and caution

During market downturns, it’s common for people to try to understand why this time is worse, or to analyse whether this crash is more serious than previous ones.

The fear these headlines generate could feed into barriers to long-term investing in the UK. And that’s one of the challenges the chancellor faces in encouraging more Britons to invest.

For those already invested in the stock market, short-term declines are part of the journey. They are risks that can be borne with the understanding that markets tend to recover over time.

My analysis of daily US stock market data since 1926 shows that after sharp daily drops, the market often rebounds quickly (see pie chart below). In fact, more than a quarter of recoveries occur within just a few days.
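To make that kind of claim concrete, one can flag every sharp daily drop and count the trading days until the cumulative loss is made back. A sketch of that logic follows, where the -3% threshold and the input file are illustrative assumptions rather than the article’s actual choices:

```python
# Sketch: measure how many trading days the market needs to recover after
# a sharp daily drop. The -3% threshold and the file/column names are
# illustrative assumptions, not the article's actual choices.
import pandas as pd

df = pd.read_csv("daily_returns_since_1926.csv", parse_dates=["date"])
returns = df.sort_values("date")["return"].reset_index(drop=True)

DROP = -0.03  # treat a one-day loss worse than -3% as a "sharp drop"

recovery_days = []
for i, r in returns.items():
    if r > DROP:
        continue
    wealth = 1.0 + r  # value of 1 unit invested just before the drop
    for j in range(i + 1, len(returns)):
        wealth *= 1.0 + returns[j]
        if wealth >= 1.0:  # the loss has been fully made back
            recovery_days.append(j - i)
            break

share_quick = sum(d <= 5 for d in recovery_days) / len(recovery_days)
print(f"share of drops recovered within five trading days: {share_quick:.0%}")
```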

But this resilience is rarely the focus of media coverage. It’s far more common to see headlines reporting that the market is down than to see follow-ups highlighting how quickly it bounced back.

Research has shown that negative economic information is likely to have a greater impact on public attitudes. For example, a sharp drop in the stock market might dominate front pages, while a steady recovery over the following weeks barely gets a mention. The imbalance reinforces a sense of crisis, even when the broader picture is less bleak.

Markets went on to recover in April 2025… but did the headlines reflect this? David G40/Shutterstock

Unbalanced reporting can distort perceptions, discouraging potential investors who might otherwise benefit from long-term participation in the market. It appears that American perceptions of their finances are also affected by news coverage in a similar way.

Over the long term, stock market returns have exceeded the generally lower returns from government bonds; the surprising size of this gap is known as the “equity risk premium puzzle”, and economists have long debated why it is so large. Some observers argue it may narrow in the future. But many others, including the chancellor, believe that investing in the stock market remains a beneficial long-term strategy.

If more people are to benefit from long-term investing, it’s vital to tell the full story. That means not just highlighting when markets fall, but following up on how they recover afterwards.

The Conversation

Sam Pybis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Britons are less likely than Americans to invest in stocks – but they may not have the full picture – https://theconversation.com/britons-are-less-likely-than-americans-to-invest-in-stocks-but-they-may-not-have-the-full-picture-259485

From tea towels to TV remotes: eight everyday bacterial hotspots – and how to clean them

Source: The Conversation – UK – By Manal Mohammed, Senior Lecturer, Medical Microbiology, University of Westminster

Parkin Srihawong/Shutterstock

From your phone to your sponge, your toothbrush to your trolley handle, invisible armies of bacteria are lurking on the everyday objects you touch the most. Most of these microbes are harmless – some even helpful – but under the right conditions, a few can make you seriously ill.

But here’s the catch: some of the dirtiest items in your life are the ones you might least expect.

Here are some of the hidden bacteria magnets in your daily routine, and how simple hygiene tweaks can protect you from infection.


Shopping trolley handles

Shopping trolleys are handled by dozens of people each day, yet they’re rarely sanitised. That makes the handles a prime spot for germs, particularly the kind that spread illness.

One study in the US found that over 70% of shopping carts were contaminated with coliform bacteria, a group that includes strains like E. coli, often linked to faecal contamination. Another study found Klebsiella pneumoniae, Citrobacter freundii and Pseudomonas species on trolleys.

Protect yourself: Always sanitise trolley handles before use, especially since you’ll probably be handling food, your phone or touching your face.

Kitchen sponges

That sponge by your sink? It could be one of the dirtiest items in your home. Sponges are porous, damp and often come into contact with food: ideal conditions for bacteria to thrive.

After just two weeks, a sponge can harbour millions of bacteria, including coliforms linked to faecal contamination, according to the NSF Household Germ Study and research on faecal coliforms.

Protect yourself: Disinfect your sponge weekly by microwaving it, soaking it in vinegar, or running it through the dishwasher. Replace it if it smells – even after cleaning. Use different sponges for different tasks (for example, one for dishes, another for cleaning up after raw meat).

Chopping boards

Chopping boards can trap bacteria in grooves left by knife cuts. Salmonella and E. coli can survive for hours on dry surfaces and pose a risk if boards aren’t cleaned properly.

Protect yourself: Use separate boards for raw meat and vegetables. Wash thoroughly with hot, soapy water, rinse well and dry completely. Replace boards that develop deep grooves.

Tea towels

Reusable kitchen towels quickly become germ magnets. You use them to dry hands, wipe surfaces and clean up spills – often without washing them frequently enough.

Research shows that E. coli and salmonella can live on cloth towels for hours.

Protect yourself: Use paper towels when possible, or separate cloth towels for different jobs. Wash towels regularly in hot water with bleach or disinfectant.

Mobile phones

Phones go everywhere with us – including bathrooms – and we touch them constantly. Their warmth and frequent handling make them ideal for bacterial contamination.

Research shows phones can carry harmful bacteria, including Staphylococcus aureus.

Protect yourself: Avoid using your phone in bathrooms and wash your hands often. Clean it with a slightly damp microfibre cloth and mild soap. Avoid harsh chemicals or direct sprays.

Toothbrushes near toilets

Flushing a toilet releases a plume of microscopic droplets, which can land on nearby toothbrushes. A study found that toothbrushes stored in bathrooms can harbour E. coli, Staphylococcus aureus and other microbes.

Read more: Toothbrushes and showerheads covered in viruses ‘unlike anything we’ve seen before’ – new study

Protect yourself: Store your toothbrush as far from the toilet as possible. Rinse it after each use, let it air-dry upright and replace it every three months – or sooner if worn.

Bathmats

Cloth bathmats absorb water after every shower, creating a warm, damp environment where bacteria and fungi can thrive.

Protect yourself: Hang your bathmat to dry after each use and wash it weekly in hot water. For a more hygienic option, consider switching to a wooden mat or a bath stone: a mat made from diatomaceous earth, which dries quickly and reduces microbial growth by eliminating lingering moisture.

Pet towels and toys

Pet towels and toys stay damp and come into contact with saliva, fur, urine and outdoor bacteria. According to the US national public health agency, the Centers for Disease Control and Prevention, pet toys can harbour E. coli, Staphylococcus aureus and Pseudomonas aeruginosa.

Protect your pet (and yourself): Wash pet towels weekly with hot water and pet-safe detergent. Let toys air dry or use a dryer. Replace worn or damaged toys regularly.

Shared nail and beauty tools

Nail clippers, cuticle pushers and other grooming tools can spread harmful bacteria if they’re not properly cleaned. Contaminants may include Staphylococcus aureus – including MRSA, a strain resistant to antibiotics – Pseudomonas aeruginosa, the bacteria behind green nail syndrome, and Mycobacterium fortuitum, linked to skin infections from pedicures and footbaths.

Protect yourself: Bring your own tools to salons or ask how theirs are sterilised. Reputable salons will gladly explain their hygiene practices.

Airport security trays

Airport trays are handled by hundreds of people daily – and rarely cleaned. Research has found high levels of bacteria, including E. coli.

Protect yourself: After security, wash your hands or use sanitiser, especially before eating or touching your face.

Hotel TV remotes

Studies show hotel remote controls can be dirtier than toilet seats. They’re touched by many hands and rarely sanitised.

Common bacteria include E. coli, enterococcus and Staphylococcus aureus, including MRSA, according to research.

Protect yourself: Wipe the remote with antibacterial wipes when you arrive. Some travellers even put it in a plastic bag. Always wash your hands after using shared items.

Bacteria are everywhere, including on the items you use every day. You can’t avoid all germs, and most won’t make you sick. But with a few good habits, such as regular hand washing, cleaning and smart storage, you can help protect yourself and others.

It’s all in your hands.

The Conversation

Manal Mohammed does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. From tea towels to TV remotes: eight everyday bacterial hotspots – and how to clean them – https://theconversation.com/from-tea-towels-to-tv-remotes-eight-everyday-bacterial-hotspots-and-how-to-clean-them-260784

Seclusion rooms don’t make schools safe, and Ontario needs a policy

Source: The Conversation – Canada – By Hunter Knight, Assistant Professor of Childhood and Youth Studies, Western University

A recent report entitled Crisis in the Classroom: Exclusion, Seclusion and Restraint of Students with Disabilities in Ontario Schools shares accounts of the frightening use of seclusion rooms in schools. It makes recommendations towards improving inclusion, belonging and educational achievement for disabled students.

The report is from Community Living Ontario, a non-profit organization that advocates for people who have an intellectual disability. It analyzes the results from a survey of 541 caregivers of students with disabilities about their experiences in Ontario schools.

Seclusion rooms are spaces where students can be kept in isolation and are not permitted to leave. Respondents to the Crisis in the Classroom report detailed incidents such as a student being secluded in a padded room, and a student being isolated in a small, closet-sized room.

Read more: How school systems can honour the human rights of people with disabilities

While some school boards have developed guidance independently, there is currently no provincial policy on the use of seclusion rooms in Ontario. The Crisis in the Classroom report calls for clear and enforceable provincial regulations and policy around seclusion and restraint.

As an assistant professor of childhood and youth studies whose work examines constructions of the “problem child” and everyday injustices against disabled and racialized children, I believe it is critical for Ontario residents and policymakers to take stock of the negative effects of seclusion rooms and commit to alternatives.

I am unaffiliated with this report, but earlier in my career I worked as a one-on-one educational aide for students at a special education school that used seclusion.

Defining seclusion rooms

As education researchers Nadine Alice Bartlett and Taylor Floyd Ellis show, there is inconsistent terminology used to describe seclusion in schools, meaning that “the conditions under which such practices may be used in some instances are subjective,” and this “may contribute to a broad interpretation of what is deemed acceptable … in schools.”

As opposed to sensory rooms, which students can usually leave at will and are often designed with sensory tools available for self-regulation (like weighted toys), seclusion rooms serve to isolate or contain students.

Across North America, there are reports of seclusion rooms being built into schools or constructed in classroom corners.

In the Crisis in the Classroom report, 155 survey respondents said seclusion was used on their child in the 2022-23 school year, where seclusion means having a locked/blocked door (83 respondents) or being physically prevented from leaving (25 respondents).

Regular, sustained seclusion

Crisis in the Classroom notes that almost half of the students who had experienced seclusion were secluded on a regular basis, and more than 10 per cent were secluded for longer than three hours.

Research shows that seclusion is often discriminatory along lines of race, class and ability. Reflecting these patterns identified in larger research, the report flags that students had a higher risk for being secluded if they came from households with lower parental education and income levels, and if they were labelled with a behavioural identification or a mild intellectual disability.

More than half of the caregivers surveyed had never given permission for their children to be secluded, and the report includes quotes from caregivers who were never told it was happening.

Response to perceived source of school violence

Seclusion rooms are commonly justified as necessary tools to keep teachers and (other) students safe.

This justification ignores the evidenced success of schools that have reduced seclusion or eliminated it entirely through adequate staff support and trauma-informed training that draws from research-proven de-escalation strategies.

I argue that turning to these alternatives, as the report recommends, is of dire importance. Investigations elsewhere repeatedly find that seclusion rooms are most frequently used for discipline or punishment — not for safety.

With adequate staffing and trauma-informed training, some schools have reduced or eliminated seclusion. (CDC/Unsplash)

Outside Ontario, where policy requires tracking the reasons why children are sent into seclusion, seclusion has followed incidents like spilling milk or asking for more food at lunch.

Seclusion rooms act primarily as a disciplinary tool that targets the most vulnerable students in our schools.

Ineffective, dangerous tools

Seclusion is ineffective as an educational and therapeutic practice, and it is highly dangerous: research shows that seclusion rooms increase injury and violence in schools.

This harm appears in the physical injuries (to students and staff) that can occur during the physical restraints often required to force a student into a seclusion room. It also appears in the trauma that seclusion can cause (for students and staff), which increases the likelihood of future physical confrontations.

Placing students, often in high distress, into a locked space where they cannot be closely supervised can result, and has resulted, in their deaths.

Seclusion without regulation

As the Crisis in the Classroom report and repeated exposés illustrate, a lack of policy does not mean seclusion isn’t happening in Ontario. It means seclusion is happening without provincial policy to regulate things like:

  • Which students can or cannot be secluded, for how long and how often;
  • What rooms for seclusion must look like and essential safety features;
  • What data staff must collect about why seclusion rooms are used;
  • When caregivers must be notified.

Without these guidelines, sometimes no one knows that seclusion is happening — much less in what spaces, for which students and why — beyond the students and school staff who may be traumatized by this practice.

Reports of violence in schools

Crisis in the Classroom notes that teachers’ unions have reported an increase in violence by students against teachers, and that these reports are often presented in a way that suggests disabled students are a primary source of this violence. The report acknowledges that the Elementary Teachers’ Federation of Ontario has said that students with special education needs have been “chronically under-served by the government.”

News media coverage, the report suggests, “often takes the side of educational staff, and has an unfortunate habit of conflating disability with aggressive behaviour.”

Unfortunately, the faulty perspective that disabled students are a source of school violence depends on an ableist logic that has worked historically to subject disabled people to over-incarceration. It effaces the fact that disabled children are actually more likely to be subjected to violence than their peers.




Read more:
Achieving full inclusion in schools: Lessons from New Brunswick


The report points to the dire need to eliminate seclusion and turn towards alternatives that neither increase violence in schools nor target disabled students.

The report’s recommendations echo calls from teachers’ unions for appropriate, adequate staffing in schools and increased professional development, especially trauma-informed training, that would support teachers’ work delivering supportive and inclusive education that keeps everyone safe.

And these recommendations make an urgent call for strong and clear policy on seclusion and restraint in Ontario that would severely limit it or eliminate it entirely — and at least track when it’s occurring.

Safer and more humane schools

This devastating report illustrates that we need policy on seclusion in Ontario now to protect everyone in our schools.

I know first-hand that teaching, especially for educators working with students with disabilities, is underpaid and underappreciated work.

More humane practices will keep schools safer for everyone, including teachers and all students, especially students who are still being subjected to seclusion today.

The Conversation

Hunter Knight receives funding from the Social Sciences and Humanities Research Council.

ref. Seclusion rooms don’t make schools safe, and Ontario needs a policy – https://theconversation.com/seclusion-rooms-dont-make-schools-safe-and-ontario-needs-a-policy-259010

PFAS and water decontamination: current approaches for treating these “forever chemicals”

Source: The Conversation – in French – By Julie Mendret, Maître de conférences, HDR, Université de Montpellier

Per- and polyfluoroalkyl substances, or PFAS, often nicknamed “forever chemicals”, pose a major environmental challenge because of their persistence and toxicity.

Today, beyond regulating their use more tightly, we need better ways to treat these pollutants: first extracting them from the environment, then destroying them. That is a real challenge, because these molecules are both highly varied and highly resistant, which is exactly what has made them so successful.


Measures to regulate and ban PFAS emissions, essential for limiting their spread into the environment, are already under way. Under a law adopted in February 2025, France must move towards a complete halt to industrial PFAS discharges within five years.




Read more:
PFAS: how are they analyzed today? Will we soon be able to take these measurements outside the laboratory?


Recently, an investigation by Le Monde and 29 partner media outlets revealed that decontaminating the soils and waters polluted by these substances could cost between 95 billion and 2,000 billion euros over a twenty-year period.

As with other organic contaminants, treatment processes fall into two broad families.

Some technologies separate, and sometimes concentrate, the PFAS out of the polluted medium so that a purified effluent can be discharged; as a consequence, they generate by-products that still contain the pollutants and must be managed. Other technologies degrade the PFAS. These processes involve breaking the very stable C-F (carbon-fluorine) bond, often at a high energy cost.

In this article, we survey some of the many processes currently being tested at different scales (laboratory, pilot, even full scale), from innovative materials that can sometimes separate and destroy PFAS at the same time, to the use of living organisms such as fungi and microbes.




Read more:
The headache of monitoring PFAS in water


Processes for separating and concentrating PFAS in water

Currently, the techniques deployed to remove PFAS from water are essentially separation processes: they extract the PFAS from the water without breaking them down, and so require downstream management of PFAS-saturated solids or of liquid concentrates (waste streams enriched in PFAS).

The most widely used technique is adsorption, which relies on the affinity between a solid and the PFAS molecules, which bind to its porous surface. Adsorption is an effective separation technique for many contaminants, including PFAS. It is widely used in water treatment, notably because of its affordable cost and ease of operation. The choice of adsorbent is driven by its adsorption capacity for the target pollutant. Many adsorbent materials can be used (activated carbon, ion-exchange resins, minerals, organic residues, etc.).
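As a rough illustration of how that capacity is quantified (a standard textbook model, not one specific to the processes described in this article), equilibrium adsorption data are commonly fitted with the Langmuir isotherm:

$$q_e = \frac{q_{\max}\,K_L\,C_e}{1 + K_L\,C_e}$$

where $q_e$ is the mass of pollutant adsorbed per gram of material at equilibrium, $C_e$ the pollutant concentration remaining in the water, $q_{\max}$ the maximum capacity of the adsorbent and $K_L$ its affinity constant. Comparing fitted values of $q_{\max}$ and $K_L$ is one common way to rank candidate adsorbents for a given pollutant such as a PFAS.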

Among them, adsorption on activated carbon is very effective for long-chain PFAS but performs poorly for medium- and short-chain ones. Once the PFAS have been adsorbed, the activated carbon can be reactivated by high-temperature thermal processes, which entails a high energy cost and transfers the PFAS into a gas phase that must in turn be managed.

A first mobile unit for treating PFAS with activated carbon was recently deployed at Corbas, in the Rhône, where it treats 50 cubic metres per hour of water destined for drinking-water production.




Read more:
Soils, the hidden side of PFAS pollution, and a possible way to decontaminate them


Another adsorption-based process relies on ion-exchange resins, beads that carry a positive charge (to capture anions) or a negative one (to capture cations). PFAS, which are often negatively charged because of their carboxylic or sulfonic functional groups, are attracted to and can bind to positively charged anion-exchange resins. Here too, efficiency varies with PFAS chain length. Once saturated, ion-exchange resins can be regenerated chemically, which produces PFAS-concentrated waste streams that must then be treated. Note that ion-exchange resins are not approved in France for use in drinking-water production.




The technique currently most effective at removing a wide range of PFAS from water, short- and long-chain alike, is filtration through nanofiltration or reverse-osmosis membranes. Unfortunately, this technique is energy-intensive and generates separation by-products known as “concentrates”. These carry high PFAS concentrations, which makes them hard to manage (concentrates are sometimes discharged into the environment when they actually require further treatment).
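A back-of-the-envelope mass balance (with illustrative numbers that are assumptions, not figures from this article) shows why these concentrates are so troublesome: if a membrane rejects essentially all of the PFAS and recovers a fraction $r$ of the feed as purified permeate, the concentration in the concentrate is approximately

$$C_{\text{concentrate}} \approx \frac{C_{\text{feed}}}{1 - r}$$

so at a typical recovery of $r = 0.9$, the concentrate carries roughly ten times the feed concentration in one tenth of the original volume, and still has to be treated or disposed of.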

Finally, flotation by foam fractionation exploits the structure of PFAS molecules (hydrophilic head, hydrophobic tail), which settle at the air/liquid interface of foam bubbles and are recovered at the surface. Extraction rates of 99 per cent have been achieved this way for long-chain PFAS. Like the previous techniques, this one produces a concentrate that must subsequently be disposed of.

Towards technologies that degrade PFAS

Other processes aim to degrade the contaminants present in water, offering a more durable solution. Several technologies exist for destroying pollutants in water: advanced oxidation processes, sonolysis, plasma technology and so on. They can be deployed alongside, or instead of, some of the concentration technologies.

How readily a pollutant can be destroyed depends on its potential for biodegradation and for oxidation/reduction. Natural degradation of PFAS is very difficult because of the stability of the C-F bond, which has a low biodegradation potential (that is, it is not easily broken by processes at work in nature, for instance those driven by bacteria or enzymes).

Advanced oxidation processes use highly reactive free radicals that are potentially capable of breaking C-F bonds. They include, among others, ozonation, UV/hydrogen peroxide and electrochemical processes.

Electrochemical treatment of PFAS is an innovative and effective route for degrading these highly persistent compounds. The process passes an electric current through purpose-designed electrodes, generating powerful oxidizing radicals capable of breaking carbon-fluorine bonds, among the most stable in organic chemistry.

For all of these advanced oxidation processes, one point demands vigilance: the treatment must not generate PFAS with shorter chains than the compound initially treated.

Recently, a spin-off company from ETH Zurich (Switzerland) developed an innovative technology capable of destroying more than 99 per cent of the PFAS in industrial waters by piezoelectric catalysis. The PFAS are first separated and concentrated by foam fractionation. The concentrated foam is then treated in two reactor modules, where the piezoelectric catalysis breaks down and mineralizes all the short-, medium- and long-chain PFAS.

Sonochemical degradation of PFAS is also under investigation. When high-frequency ultrasound is applied to a liquid, it creates so-called cavitation bubbles, inside which chemical reactions take place. These bubbles eventually implode, generating extremely high temperatures and pressures (several thousand degrees and several hundred bar) that produce reactive chemical species. Cavitation can thus break carbon-fluorine bonds, ultimately yielding compounds that are less harmful and easier to degrade. Very promising in the laboratory, the approach remains hard to apply at large scale because of its energy cost and complexity.

Despite this recent progress, then, several hurdles still stand in the way of commercializing these technologies, notably their high cost, the generation of potentially toxic by-products that require further management, and the need to determine optimal operating conditions for large-scale application.

What prospects for tomorrow?

Faced with the limits of current solutions, new avenues are emerging.

The first is to develop hybrid treatments, that is, to combine several technologies. Researchers at the University of Illinois (United States), for example, have developed an innovative system capable of capturing, concentrating and destroying mixtures of PFAS, including ultra-short-chain PFAS, in a single process. By coupling electrochemistry with membrane filtration, the strengths of both processes can be combined while sidestepping the problem of concentrate management.

Innovative materials for PFAS adsorption are also being studied. Researchers are working on stereolithographic 3D printing integrated into the manufacture of adsorbent materials: a liquid resin containing photosensitive polymers and macrocycles is solidified layer by layer under UV light to form the desired object, optimizing the material’s properties to improve its adsorption performance. These adsorbent materials can be coupled with electro-oxidation.

Finally, research is under way on bioremediation, which mobilizes micro-organisms, notably bacteria and fungi, capable of degrading certain PFAS. The principle is to have them use PFAS as a carbon source, thereby stripping fluorine from these compounds, but degradation times can be long. This promising biological approach could therefore be coupled with techniques that make the molecules more “accessible” to micro-organisms, to speed up PFAS removal. For example, a technology developed by researchers in Texas combines a plant-based material that adsorbs PFAS with fungi that degrade them.

Nevertheless, despite this technical progress, it remains essential to adopt regulations and measures upstream of PFAS pollution, to limit the damage to ecosystems and to human health.

The Conversation

Julie Mendret has received funding from the Institut Universitaire de France (IUF).

Mathieu Gautier does not work for, consult for, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. PFAS et dépollution de l’eau : les pistes actuelles pour traiter ces « polluants éternels » – https://theconversation.com/pfas-et-depollution-de-leau-les-pistes-actuelles-pour-traiter-ces-polluants-eternels-259406