Desperate Journey: wartime cliches overwhelm this extraordinary true account

Source: The Conversation – UK – By Barry Langford, Professor of Film Studies, Royal Holloway, University of London

What does the familiar film tagline “based on a true story” really mean? Leaving aside questions of historical fidelity versus poetic license, what does an audience get from the assurance that a given story “truly happened”?

At best, these claims remind us that – however fantastic or horrific – these events were once realities for other people very much like ourselves. At worst, they exercise a kind of moral blackmail: guilt tripping the audience into thinking that criticising the film’s storytelling somehow disrespects the real people who endured those traumatic events.

Every story of Holocaust survival and rescue is unique and, against the backdrop of ubiquitous slaughter, uniquely miraculous. Annabel Jankel’s new film Desperate Journey, based on the experiences of Austrian-born Holocaust survivor Freddie Knoller (1921-2022), is certainly as extraordinary a story as any.

Freddie (played by Lucas Lynggaard Tønnesen) escaped his parents’ fate through a circuitous and often picturesque journey across Nazi-dominated Europe, ultimately landing late in the war in Auschwitz-Birkenau and surviving a death march.

Such accounts can all too easily topple into cliches of wartime derring-do, or fall victim to sentimentality and sensationalism. What inoculates them against these perils is precisely the unique and often tiny details that ground wildly improbable tales of survival in gritty reality. (The French film-maker Claude Lanzmann, best known for the Holocaust documentary film Shoah, once remarked that “there is more truth for me in some trivial confirmation than in any number of generalisations about the nature of evil”.)

However, Desperate Journey – which focuses almost exclusively on the most colourful part of Freddie’s tale, his time working under a false identity in German-frequented nightclubs in occupied Paris – leaves out almost all of this granular detail. As a consequence, it ends up feeling almost as divorced from the hard-to-fathom realities of the Holocaust as much-derided fantasies like The Boy in the Striped Pyjamas (2008).

The trailer for Desperate Journey.

For example, shortly after Freddie’s escape from Austria, at the urging of a friendly farmer’s wife (Niamh Cusack), he burns his Austrian passport, stamped with the lethal “J” (for Jew). From this point on he is stateless and without papers in a continent-wide trap. But, bar a narrow escape on board a train, in the film his fugitive status seems to cause him remarkably few problems.

Numerous survivor accounts attest to the grinding daily fear and the incessant improvisation needed to stay one step ahead of the Gestapo. Yet courtesy of a friendly Jewish landlady and a tolerant employer, Freddie enjoys a life – albeit precarious – of sleazy glamour in the demi-monde of wartime Parisian nightclubs and brothels.

Despite a screenplay by Michael Radford (1984, Il Postino), and handsome if somewhat overblown production design, almost nothing in Freddie’s story has a ring of authenticity. Nazi officers are uniformly leering sadists. The nightclub dancer (Sienna Guillory) with whom Freddie strikes up an ill-starred and dramatically unconvincing romance performs improbably elaborate routines that evoke Josephine Baker and harbours her own tragic secret to match Freddie’s. The French Resistance fighters he encounters are hard but honourable tough guys in leather jackets and crew-neck sweaters (including an enjoyably hammy turn from Steven Berkoff).

Viewers who favour historical precision will be dissatisfied to find Freddie fleeing Austria days after the March 1938 Anschluss and arriving apparently a few weeks later in occupied Paris (France surrendered to Germany in June 1940; the real Freddie Knoller spent two years in Belgium before fleeing to France ahead of Hitler’s advancing armies).

To those with an aversion to cliche, Freddie’s arrival there, emerging from the Metro to the strains of accordion music and the overtures of improbably glamorous street prostitutes and a cartoonishly Mephistophelian pimp (Fernando Guallar), will be equally grating.

Perhaps the film’s most fatal flaw is its refusal even to try to dramatise the trauma that Freddie – only 17 when his six-year trans-European odyssey began – underwent. He sees his family torn apart, sees his companions mown down by German border guards, lives with the ever-present threat of capture and deportation, and ultimately survives (offscreen) the death factory of Auschwitz and the nightmare of the death marches. Yet his principal character note, from early on in the film, is his adolescent fascination with the imagined lubricious pleasures of Parisian nightlife. His exploits there play more as a slightly risky caper than a struggle for survival.

Desperate Journey feels like a throwback – a 1950s Hollywood version of the war. It is far too light and conventionally melodramatic to hold up against decades of scholarship and public understanding of the real costs of survival amid inconceivable terror and against overwhelming odds.


Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


The Conversation

Barry Langford does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Desperate Journey: wartime cliches overwhelm this extraordinary true account – https://theconversation.com/desperate-journey-wartime-cliches-overwhelm-this-extraordinary-true-account-270531

How Stranger Things went from Netflix Original to a global franchise

Source: The Conversation – UK – By Orcun Can, Lecturer in Digital Economy, King’s College London

Warning: this article contains spoilers for the first four seasons of Stranger Things.

When the Duffer Brothers came up with the idea for a television show that would mix Stephen King and Steven Spielberg, they had trouble convincing network executives. Some thought a show with an ensemble cast of kids as main characters would be a production nightmare. Others thought the tone was far too scary to have children at the centre, so it either needed to be “nicer” or focus on the reclusive, angry-sad chief of police Jim Hopper (David Harbour) instead.

But when the first season aired in its entirety in 2016, it became an instant success. The 1980s nostalgia, the use of practical effects for the Demogorgon (the show’s monster), a comeback performance by Winona Ryder and the chemistry between the kids all combined to produce a show that represented what the quintessential Netflix Original should be.

When the first season of Stranger Things was released, House of Cards (2013-18) was the flagship Netflix Original. Long before the lawsuits against lead actor Kevin Spacey, the show enjoyed such status that the iconic Netflix “tu-dum” that sounds out whenever you hit play was derived from the show (just watch the very final few seconds of the season two finale and you’ll understand).

The trailer for Stranger Things volume five.

There were other successful and popular originals like Orange is the New Black (2013-19), a new season of Arrested Development (2013-19), and the Marvel collaboration, Daredevil (2015-18). But nothing showed the narrative and world-building possibilities of this still-strange new form of TV quite like Stranger Things did.

Fans of Stranger Things soon began to exert power over the show itself. When many demanded “justice for Barb” (Shannon Purser), a character that died early on in the first season, plans for season two changed to make room for just that. When season four came around, fans were equally passionate about Eddie’s (Joseph Quinn) death, petitioning to bring him back for the final season.

As it prepares to launch its fifth and final season, Stranger Things is a different beast than before. It is now more akin to the “event television” series like Lost (2004-10), Sherlock (2010-17), or perhaps the most prominent example, Game of Thrones (2011-19). Event TV is the kind of television show that fans don’t just binge on their mobile phones during their commute, or watch in the background as they eat cereal, but make plans to watch, frequently with friends or family. Fans are no longer just watching new seasons as they drop, but diving into petitions, online debates and the personal lives of its cast.

Acknowledging this change of status, the fourth season of Stranger Things wasn’t released in one go, as with the previous three seasons, but drip fed to fans in two parts – seven episodes first, then two more episodes a few days later. The final season is to be divided into three parts. Four episodes first on November 27, three episodes a month later on December 26, and the grand finale, a few days later, on New Year’s Eve.

Back in 2016, The Guardian’s TV critic Mark Lawson likened Stranger Things to watercooler TV hits, shows from the pre-streaming era that would create such a buzz that you would talk about them with your colleagues over the watercooler the next day. Nine years later, Netflix seems eager to frame Stranger Things as the watercooler TV show of the 2025 holiday season.

The trailer for Tales From ‘85.

The original show is going to end, but in many ways this seems to be just the beginning. A teaser trailer has already been released for a new animated Stranger Things series, called Tales From ’85. The show takes place between the events of the second and third seasons of Stranger Things, already positioning the show to become a franchise.

Add to that other spin-offs – the after-show Beyond Stranger Things, in which cast and creators discuss the events and behind-the-scenes details of episodes, tie-in mobile games, an immersive viewing experience, books, board games, merchandise and a London theatre production, Stranger Things: The First Shadow – and it is now so much more than the quintessential Netflix Original.

Stranger Things proves that even in an era filled with sequels, prequels, remakes and reboots, it’s still possible for a brand-new story to launch a major franchise that grows far beyond its original platform. Who knows, we may even get a new season down the line. After all, stranger things have happened in the TV business.




The Conversation

Orcun Can does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How Stranger Things went from Netflix Original to a global franchise – https://theconversation.com/how-stranger-things-went-from-netflix-original-to-a-global-franchise-270327

Blue Moon: Ethan Hawke’s performance is a tour de force – but can’t save this uncertain film

Source: The Conversation – UK – By Dominic Broomfield-McHugh, Professor of Musicology, University of Sheffield

Does the truth matter in a film about historical events? This question sits at the heart of any biographical drama, shaping how we judge the balance between storytelling and accuracy.

Early in Richard Linklater’s Blue Moon, I thought of Amadeus. That 1984 film isn’t about Mozart – it’s about jealousy. Similarly, Blue Moon isn’t a documentary about Broadway composer Richard Rodgers (Andrew Scott) and lyricist Lorenz Hart (Ethan Hawke). It’s a moving drama about Hawke’s character dealing with professional and romantic failure. Don’t expect it to be historically accurate.

Linklater and screenwriter Robert Kaplow set the movie as an extended scene in a bar called Sardi’s on the opening night of Oklahoma! in 1943. The Broadway hit marked the end of the exclusive partnership of Rodgers and Hart when the former decided to form a new, genre-defining pairing with Oscar Hammerstein II.

The Rodgers and Hammerstein musicals (including Carousel, South Pacific, The King and I, and The Sound of Music) went on to become staples of the repertoire. They provided models for much of what came later. In contrast, the Rodgers and Hart collaboration is now remembered more for its songs, such as standards like My Funny Valentine and The Lady Is a Tramp, than for its musicals, including On Your Toes and Pal Joey.

Blue Moon shows us a portrait of Hart who can see that the parade has passed him by. He comments loudly on Hammerstein’s clunky lyric writing while watching the title song of Oklahoma! in the theatre (the number itself rather feebly staged). When Rodgers arrives at Sardi’s, Hart discloses his low opinion of the show.

The trailer for Blue Moon.

The alcoholism that would soon take his life is a key theme used to explain why Rodgers can’t bear to write with him anymore. He has become unreliable. Meanwhile, a romantic crush inspired by 11 letters written to Hart from a Yale college student (a vulnerable Margaret Qualley) is used to explore Hart’s sexual fluidity, though it’s not clear that Hart ever met her in real life.

Hawke’s elegiac performance is worth the price of admission alone. This is a truly stunning portrayal of someone whose illness makes them unable to evolve professionally when the culture around them changes. Both witty and deeply sad, it’s an intense psychological tour de force, worthy of an Academy Award.

But that intensity is also tiring. Almost the entire movie shows Hart sitting in Sardi’s, having discussions with the bartender (a wonderfully colourful performance from Bobby Cannavale), writer E.B. White (sensitively portrayed by Patrick Kennedy) and the pianist (Jonah Lees, hampered by having to mime a strangely pedestrian piano soundtrack of songbook classics). Although the screenplay is notionally conversational, Hart’s inability to share a genuine exchange with anyone other than his crush means that much of the time it feels like being adrift in an 80-minute monologue.

That’s where the movie is most striking and most problematic. You can’t help but find Hawke’s colossal speeches compelling, but it’s so static that it feels more like the material for a play – possibly even a radio play – than a movie. The sustained focus also makes Linklater’s awkward handling of Hart’s diminutive stature (achieved through careful placing of the camera) distracting far too much of the time. It quite unnecessarily allows the fact that the real Hart was about 4ft 10in to hinder the presentation of Hawke’s searing portrayal.

Throwing in other factual details also unhelpfully overwhelms common sense. The film recounts how Rodgers and Hart got together again a few months after Oklahoma! to write some new songs for a revival of their 1927 musical A Connecticut Yankee. Yet it shows Rodgers proposing the revival to Hart in the middle of their fraught exchange in the bar soon after the composer arrives for his opening night party – something that doesn’t ring true and upsets the psychology of the scene.

Ethan Hawke discusses his role in Blue Moon.

Another implausible moment, when Hammerstein introduces his future protege Stephen Sondheim – then a child – to Hart as his “neighbour”, borders on the risible. Sondheim wasn’t at the opening night of Oklahoma! and wasn’t that close to Hammerstein at this point, and it’s almost certain that the stagestruck child would not have been so rude when meeting a major lyricist (it was only later that he became openly critical of him). He was too much in love with the theatre and was only 13 years old.

It seems to me that these sorts of problems stem from the decision to set all the action on one night, rather than splitting it into two or three scenes in Hart’s final months. Throwing in too many facts and then not paying attention to credibility undoes the research itself.

If we’re here to learn about human truths that speak to a wider audience beyond theatre nerds, then why allow the reality of Hart’s height to be the thing that dictates where the camera is most of the time? After all, Rodgers wasn’t sleek and handsome in the way Scott embodies him, so why is Hart’s height a constant focus? Or if the aim is to engage with historical truths, why portray Hart as snarky about Hammerstein’s lyrics – and pompous about his own syntactic ability as a writer – when he was no more pedantic than his colleague?

As such, Blue Moon falls between two stools, the real and the imagined, without being quite sure which is the more important. Thankfully, and ironically, Hawke’s performance rises above it.




The Conversation

Dominic Broomfield-McHugh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Blue Moon: Ethan Hawke’s performance is a tour de force – but can’t save this uncertain film – https://theconversation.com/blue-moon-ethan-hawkes-performance-is-a-tour-de-force-but-cant-save-this-uncertain-film-269992

From Stuttgart’s first industrial revolution to Dubai’s fifth – the need for research to connect outside the academy

Source: The Conversation – UK – By Stephen Khan, Editor-in-Chief, The Conversation

In the late 19th century, Stuttgart was booming. The southern German city was famously the cradle of an emerging automobile sector and had already established itself as an industrial powerhouse and centre for toolmaking, mechanical engineering and textiles. Rail connections in the Baden-Württemberg region accelerated development, transported workers and spread wealth.

One might think, then, that an obvious place for the nascent railways to reach would have been the historic university town of Tübingen, about 20 miles from Stuttgart. Not so, Tilman Wörtz of the university’s communications department informed me on a recent visit. In fact, explained Wörtz, an accomplished journalist, the story goes that the academic grandees of the era resisted a connection with the emerging financial and industrial powerhouse, perhaps regarding it as somewhat uncouth and vulgar to be distracted from deep cultural and scientific considerations by the forces of commerce. So, for a long time, the proposed railroad hit the buffers.

Fortunately, today, thanks to the efforts of university leaders, the institution strives to connect both with industry and with the wider community. There is now a railway station, and I was thrilled to speak with a number of academics about relaying their research and knowledge to non-academic readers. Indeed, this fascinating read on the rapper Haftbefehl, the subject of a Netflix documentary gripping Germany, has already come out of the sessions. Do stay tuned in the coming weeks and months for more from the University of Tübingen, which was founded in 1477 and is now the first German member institution of The Conversation.

Fast forward a week, and I found myself in Dubai, in the eye of what some cast as a fourth or even fifth industrial revolution, incorporating AI, nanobiology and bioengineering. The city is pitching itself as being at the heart of, and a driving force in, this new era of change, in which civic government enables human and technological collaboration to tackle societal issues and power growth.

For more than a decade, what is now called Prototypes for Humanity has been an exhibition at the heart of this city’s dash for development, powering projects that bring the prospect of solutions to challenges in the environment, energy, health, technology and other spheres.

When I attended Prototypes a year ago, it was still largely a showcase for PhD candidates’ projects from some of the world’s leading universities, many of which are members of The Conversation. In the last 12 months, however, a new element has been developed, under the guidance of Naren Barfield, former Provost and Deputy Vice Chancellor of the UK’s Royal College of Art. This sees senior academics come to the city to deliver papers drawing on key aspects of their research.

Full transparency: I served on the selection panel Professor Barfield designed to finalise the programme, and The Conversation was a media partner for the 2025 Prototypes event.

The themes for the year were as follows:

  • Wellbeing and health futures
  • Sustainable and resilient infrastructure
  • Artificial intelligence and augmented intelligence
  • Environmental sustainability and climate action
  • Socio-economic empowerment and innovation
  • Open and speculative categories

Following short paper presentations in the Socio-Economic Empowerment category, Barfield explained the thinking behind the new element of Prototypes and the opportunity for researchers:

We are bringing together some of the world’s sharpest minds and most innovative researchers to tackle challenges faced in different parts of the planet. Dubai and this initiative provide a unique chance to generate ideas across a range of academic disciplines that might not otherwise collaborate in such an impactful way.

The Prototypes for Humanity initiative and the relatively new Professors’ Programme have a proven track record of connecting academia with policymakers, industry and the public in a way often described elsewhere as aspirational. Here, it is actually happening.

The reference to industry struck a chord, perhaps because I’d so recently heard that story of detachment from 19th-century Stuttgart, but also because it’s a grumble I regularly encounter across the world when it comes to academia and its engagement (or lack thereof) beyond the university sector today.

At the conference venue, in the Emirates Towers of the Dubai International Financial Centre, Tadeu Baldani Caravieri, director of Prototypes for Humanity, discussed the thinking behind the project and potential routes forward.

At Prototypes we’ve seen how researchers can directly drive innovation in partnership with industry and, in the case of Dubai, with the city government as a facilitator.

This has been possible thanks to some of the advantages of this state and region. But these are solutions that can, and do, present wider benefits – in some cases globally relevant solutions.

He later added:

This edition [of Prototypes] helped to confirm fundamental assumptions for the space we operate in, i.e. creating bridges between academic ingenuity and real-world needs. The main one is that, although there is sometimes a disconnect between university innovation capabilities and industry needs, there is genuine interest, across all of the parts in this equation, to overcome obstacles and do more. We have enabled and witnessed very promising and results-oriented conversations between academia and potential partners, from PhDs and private sector discussing pilots in applied robotics, to professors supporting a humanitarian agency to rethink aid allocation systems, to multinationals looking to fuel their R&D roadmaps.

Dubai is an excellent incubator for these bridges we are building but, in keeping with the city’s outlook and spirit, we want to enable impact across the world – so it’s just natural that, in the future, we hope to open structured avenues for multi-city collaborations, where local ecosystems complement each other’s strengths.

Prototypes’ community brings in research talent from more than 800 universities around the world, including many academics who have also engaged with The Conversation. For instance, Jeremy Howick, of the University of Leicester, presented on empathy in healthcare in the age of AI, and has written this account. Further articles based on projects that exhibited and on the professors’ papers will be published on The Conversation and will be accessible via this link.

Stay tuned to read more on critical and diverse research relating to subjects such as monitoring and diagnosing pre-eclampsia (Patricia Maguire, University College Dublin), using seaweed to create a sustainable packaging alternative (Austeja Platukyte, Vilnius Academy of Arts) and the emergent Internet of Beings (Francesco Grillo, Bocconi University, Milan).

The Conversation

ref. From Stuttgart’s first industrial revolution to Dubai’s fifth – the need for research to connect outside the academy – https://theconversation.com/from-stuttgarts-first-industrial-revolution-to-dubais-fifth-the-need-for-research-to-connect-outside-the-academy-270528

The universe’s dark matter may have been observed for the first time

Source: The Conversation – (in Spanish) – By Óscar del Barco Novillo, Associate Professor, Department of Physics (Optics), Universidad de Murcia

Artist’s rendering of the formation of dark matter structures during the first instants of the universe. Credits: Ralf Kaehler/SLAC National Accelerator Laboratory, American Museum of Natural History. CC BY

After a meticulous study of 15 years of data collected by NASA’s Fermi gamma-ray space telescope on the Milky Way’s galactic halo, a Japanese researcher claims to have direct evidence of the detection of the universe’s elusive dark matter particles.

If confirmed, the finding would amount to a genuine revolution in physics, forcing a modification of the standard model of particle physics (the theory that precisely describes the fundamental structure of matter). It would also have enormous consequences for cosmology when it comes to explaining the formation and evolution of galaxy clusters.

This innovative work has been published in the Journal of Cosmology and Astroparticle Physics, and its sole author is the Japanese astrophysicist Tomonori Totani.

Totani argues that the energy pattern found in his research could be the first direct evidence of so-called Weakly Interacting Massive Particles (WIMPs), although the scientific community is urging caution and independent verification before confirming a finding that could transform modern physics.

Invisible to any telescope

The widely accepted hypothesis is that dark matter is made up of these elusive WIMPs, hundreds of times more massive than the proton and very slow-moving. Since they neither absorb nor emit light, nor interact with any observed particle, they cannot be detected directly by optical instruments such as telescopes.

The Coma cluster (with up to 1,000 identified galaxies) is the place in the cosmos where the first evidence of the existence of dark matter emerged. In the 1930s, the Swiss astronomer Fritz Zwicky observed that its galaxies were moving too fast for the gravity created by the ordinary matter he could observe. They should have escaped the cluster; instead, they stayed together.

In other words, there was not enough visible mass to hold so many galaxies together. Dark matter in the Coma cluster is so predominant that it makes up roughly 90% of its total mass.

False-colour image of the central region of the Coma cluster, combining infrared and visible-light images to reveal thousands of very faint galaxies (in green). Credits: NASA / JPL-Caltech / L. Jenkins (GSFC). CC BY

In the wake of those observations, Zwicky suggested that an invisible form of matter might exist, creating the extra gravity that held these galaxies together. He called it “Dunkle Materie” (“dark matter” in German).

Later, in the 1970s, the American astronomer Vera Rubin drew on that concept to explain the anomalous speed of stars at the outer edges of spiral galaxies. Today, although not all astronomers agree on the true nature of dark matter, its existence is widely accepted.

Image of the Bullet Cluster (formed by two colliding galaxy clusters) recorded by the Hubble Space Telescope, NASA’s Chandra X-ray Observatory and the ground-based Magellan telescopes. Visible matter appears in shades of pink, while the cluster’s dark matter is shown in blue. This observation is one of the clearest direct examples of the existence of dark matter. Credit: X-ray: NASA/CXC/CfA/M.Markevitch, Optical and lensing map: NASA/STScI, Magellan/U.Arizona/D.Clowe, Lensing map: ESO WFI. CC BY

Dark matter makes up most of the mass of galaxies and galaxy clusters. Astronomers estimate that visible matter accounts for only about 5% of the universe, while dark matter represents around 27%. The remaining 68% corresponds to dark energy, thought to be responsible for the universe’s accelerating expansion, although its exact nature remains unknown.

Distribution in the universe of ordinary or visible matter (5%), dark matter (27%) and dark energy (68%). Credits: NASA’s Goddard Space Flight Center. CC BY

They emit highly energetic radiation when they annihilate

As noted above, these dark matter particles are undetectable by any telescope, since they neither emit nor absorb light at any wavelength. So how can they be detected through direct observation?

The good news is that, when they interact, the hypothetical WIMPs would annihilate one another, producing very energetic radiation in the form of gamma rays. Indeed, researchers comb through data from NASA’s Fermi Gamma-ray Space Telescope for signs of WIMPs interacting and annihilating. That is precisely what the astronomer Tomonori Totani’s surprising study has done.

An excess of highly energetic gamma radiation in certain galactic regions would thus originate in the annihilation of dark matter particles, and could be proof that WIMPs exist. What is debatable is whether the evidence is conclusive or whether we are dealing with a speculative hypothesis.

Sequence showing the annihilation of two dark matter particles, or WIMPs (top and middle images), and the subsequent production of two highly energetic gamma-ray photons (bottom image). Credit: NASA/Goddard Space Flight Center. CC BY

The signal that would confirm the existence of dark matter

In this new study, Totani analysed data on the Milky Way’s halo, a spherical region of old stars that surrounds our galaxy and is thought to hold a high concentration of dark matter.

Artist’s interpretation of the Milky Way’s inner and outer halos. Credits: NASA, ESA, and A. Feild (STScI). CC BY

Detailed analysis of the data from this galactic region revealed an excess of highly energetic gamma radiation at around 20 gigaelectronvolts (20 GeV). Moreover, the energy spectrum found matches the theoretical prediction for WIMP annihilation perfectly, suggesting particles with a mass roughly 500 times that of a proton.

Map of gamma-ray intensity in the study’s region of interest (the Milky Way’s halo). The horizontal grey bar in the central region corresponds to the galactic plane, which was excluded from the analysis. Credit: Tomonori Totani, University of Tokyo. CC BY

In the study author’s words: “We detected gamma rays with an extremely high energy, extending in a halo-like structure toward the centre of the Milky Way. The gamma-ray emission component closely resembles the expected shape of the dark matter halo.”

Moreover, such a specific gamma-ray pattern is not easily explained by alternative astronomical events such as supernovae or rapidly rotating pulsars.

Totani’s work constitutes plausible evidence of gamma-ray emission from dark matter annihilation, though it is not free of uncertainty and is far from fully conclusive.

The necessary caution in the face of these new results

“Extraordinary claims require extraordinary evidence.” Carl Sagan's words perfectly sum up how science should proceed when faced with results as revolutionary as those proposed in this new study on dark matter.

The new finding now enters a period of intense scrutiny and verification by other research groups.

Independent analyses will be needed to verify this characteristic 20 GeV signal associated with WIMP particles, probably in other dark-matter-rich environments such as the dwarf galaxies of the Milky Way's halo.

We will have to wait to learn whether this intriguing work lays the foundations for a solid detection of the elusive “missing matter” that has so puzzled astronomers in recent decades.

The Conversation

Óscar del Barco Novillo does not receive a salary, carry out consultancy work, own shares in, or receive funding from any company or organisation that might benefit from this article, and has declared no relevant affiliations beyond the academic position cited.

ref. La materia oscura del universo podría haber sido observada por primera vez – https://theconversation.com/la-materia-oscura-del-universo-podria-haber-sido-observada-por-primera-vez-270690

Is the climate really solely responsible for the rising alcohol content of wines?

Source: The Conversation – in French – By Léo Mariani, Anthropologist, Senior Lecturer (HDR), Muséum national d'histoire naturelle (MNHN)

The earlier grape harvests observed in recent years have been encouraged by climate change. But the rise in the alcohol content of wines is also the fruit of a long tradition, rooted in history, social context and technical progress, which has shaped the modern definition of protected designations of origin. In other words, the warming of the climate, and of wines along with it, is only exacerbating an old phenomenon. It is an opportunity to question an entire winemaking model.


Wine growing featured heavily in media coverage of climate change in the summer of 2025. In Alsace, the grape harvest began on August 19. This local record reflects an underlying trend: with warming temperatures, grapes ripen earlier and contain more sugar. This upends harvest calendars and raises the alcohol content of wines.

Earlier harvests are not a bad thing in themselves. The 2025 vintage could thus be described as “very fine” and “truly suited to ageing” by the president of an association of Alsatian winegrowers. What is problematic, however, is the rising alcohol content: it is harmful to health and runs counter to the current direction of the market, which favours light, fruity wines. In other words, the public wants wines “to drink” rather than “to keep”, while rising temperatures favour the latter.

It should be pointed out, however, that rising alcohol levels in wine are nothing new. Indeed, the trend is even decoupled from climate change:

“Wines warmed up well before the climate did,”

a winemaker friend once told me. In other words, the rise in the alcohol content of wines has followed a deeper long-term trend through history – one that, as we shall see, reflects a particular way of defining “wine quality”.

This raises the question of who “warmed” whom first. The climate, by ripening sweeter bunches of grapes? Or the know-how of winegrowers, which favoured wines better suited to keeping – and therefore higher in alcohol? Let us posit that the climate today is merely exacerbating a problem it did not create. By holding it solely responsible for the rise of alcohol in wine, we prevent ourselves from properly defining the problem.

What is more, this reverses the direction of responsibility: winegrowing, as an agricultural activity that emits greenhouse gases, contributes to rising temperatures – and did so well before climate change affected it in return.




Read more:
Wine and climate change: 8,000 years of adaptation


When the definition of PDOs drives up alcohol levels

Let us say it plainly: the introduction of protected designations of origin (PDOs) marked an objective step in the rise of alcohol levels in wine. In some appellations of south-eastern France, where I carried out fieldwork, it is sometimes associated with dizzying jumps of two degrees in three to ten years.

It must be said that PDO specifications set a higher “minimum natural alcoholic strength by volume” than other wines: alcohol is perceived as a marker of quality.

But it is only the visible result of a much broader organisation: if PDOs “warm up” wines – often well beyond the minima they impose – it is because they promote a set of agronomic and technical practices that mechanically drive up alcohol levels.

It is this paradigm, associated with the idea of the “vin de garde” (a wine made for ageing), whose origin we will now trace.

“Vins de garde” spread under the influence of the state, the bourgeoisie and industry

Contrary to popular belief, the model embodied by PDOs is not representative of French winemaking history in general.

Napoleon III, seeking to develop the wine trade with England, fostered the emergence of the vin de garde standard.
Jean Hippolyte Flandrin

“Vins de garde” are far from an immutable canon. For a long time, keeping wine interested only certain elites. It was the gradual alignment, in France, of the interests of one elite in particular – the Bordeaux bourgeoisie of the 19th century – with those of the state and of the chemical and pharmaceutical industry that gave rise to the winemaking we inherit today.

Things came to a head in the mid-19th century: Napoleon III signed a free-trade treaty with England, a great consumer of wine, and sought to rationalise its production. In the process, his government called on two scientists.

From the first, Jules Guyot, it commissioned a survey of the state of viticulture in the country. To the second, Louis Pasteur, it entrusted an oenological objective, because intensive wine exports demanded not only more efficient management of production. They also required greater mastery of preservation, which at the time offered too few guarantees. Pasteur responded by inventing pasteurisation, a method that never won over winemakers.




Read more:
Louis Pasteur: the enduring mark of a revolutionary in a lab coat


In principle, however, pasteurisation anticipated the major technical developments in the field, in particular the production of liquid sulphur dioxide by the chemical and pharmaceutical industry of the late 19th century. Sulphur dioxide (SO2) is a powerful preservative, antiseptic and stabiliser.

Map of the main crus of the Bordeaux region.
DemonDeLuxe/WikiCommons, CC BY-SA

It enabled a leap forward in the rationalisation of the wine industry and economy in the 20th century (the famous sulphites added to aid preservation), alongside other technical innovations such as the development of cultured yeasts. In this it was helped by the French bourgeoisie, notably in Bordeaux, which was then investing in vineyards to cement its legitimacy. It was this social group that benefited most from the situation, while helping to shape it. From then on, wine could be stored.

This facilitated the development of a rational capitalist economy, embodied by the model of “vins de garde” and PDOs, as well as by that of AOCs and “crus”.

What it takes to make a vin de garde

Of course, it is not, in itself, the ability to keep wines longer that drove up alcohol levels, but the aesthetic horizon that this capacity opened up.

A wine that keeps is a wine that contains more tannins, because they help it withstand the test of time. To obtain more tannins, however, harvests must be pushed back – which raises alcohol levels, and alcohol itself also contributes to better preservation.

Vins de garde are aged in oak barrels, which has helped shape their aromatic profile.
Mark Stebnicki/Pexels, CC BY

Next, a wine's ageing potential depends on the vessel in which it matures. Here the standard became oak, a tannic wood (so much so that it shaped the planting of French forests). Today it gives everyday wines their vanilla aromas, and their upmarket cousins a more refined “toasted” character.

A wine's ageing potential also depends on the choice of grape varieties. Here, considerations of tannin quality and/or juice colour have often prevailed over the phenology of the varieties (that is, the key dates of the periodic events in the vine's life cycle).

Bunches of syrah grapes.
Hahn Family Wines/Wiki Commons, CC BY

This has sometimes favoured selection techniques ill-suited to climatic conditions, and varieties that produce too much alcohol when planted outside their region of origin – for example, syrah in the northern Côtes du Rhône.

Indeed, the syrah vine withstands heat very well and its grapes yield a beautiful colour, which is why they were planted in the south. But the result is increasingly sweet, tannic and coloured: syrah can then overwhelm the other varieties in the blending of côtes-du-rhône wines.

Finally, among many other technical practices, one could mention “green harvesting”, which involves thinning the vine to avoid partial ripening and to ensure the plant has more energy to finish ripening the bunches left on the vine. This increases the concentration of aromas (and the finesse of vins de garde), but also of sugar in the grapes (and therefore of alcohol in the final product).

An entire world has thus organised itself to produce vins de garde – full-bodied wines built to last, which therefore cannot be light and fruity, as contemporary consumers nonetheless seem to demand.

“Sulphite-free”: a new model of wines “to drink”

This world, which tends to warm wines and the climate at the same time, is not representative of French winemaking history as a whole. It results from the generalisation of a particular economic and aesthetic relationship to wine and to the environment.

It is this world in its entirety that climate warming should call into question when, as today, it forces growers to pick grapes earlier and earlier – rather than just one or other of its aspects.

As food for thought in that direction, I suggest drawing inspiration from the emerging world of “sulphur-free” (or sulphite-free) wines: wines that, no longer being made to keep, contain less alcohol. These wines are also associated with considerable technical inventiveness while being less harmful ecologically, notably because they rely on a far more modest technical infrastructure and foster a spirit of adaptation rather than of escalation.

Their other major asset is that they pick up the thread of a popular winemaking tradition that modern history has rendered invisible. They will not be a new panacea, but they at least offer a way to recognise and value winemaking cultures, past and present, in all their diversity.


This text was first presented at a conference organised in November 2025 by the Food Initiative of Sorbonne Université.

The Conversation

Léo Mariani does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations beyond his research institution.

ref. Le climat est-il vraiment seul responsable de l’augmentation du degré d’alcool des vins ? – https://theconversation.com/le-climat-est-il-vraiment-seul-responsable-de-laugmentation-du-degre-dalcool-des-vins-270501

How Catholic women in 18th-century Italy defied sexual harassment in the confessional

Source: The Conversation – France in French (3) – By Giada Pizzoni, Marie Curie Research Fellow, Department of History, European University Institute

The European Institute for Gender Equality defines sexual harassment as a situation “where any form of unwanted verbal, non-verbal or physical conduct of a sexual nature occurs, with the purpose or effect of violating the dignity of a person, in particular when creating an intimidating, hostile, degrading, humiliating or offensive environment”. Harassment stems from power, and it is meant to control, whether psychologically or sexually. In both instances, victims often feel confused, alone, and uncertain about whether they caused the abuse.

As a historian, I aim to understand how women in the past experienced and tackled intimidating behaviour. Particularly, I am looking at harassment during confession in 18th-century Italy. Catholic women approached this sacrament to share doubts and hopes about subjects ranging from reproduction to menstruation, but at times were met with patronising remarks that unsettled them.

A power imbalance

The Vatican archives show us that some of the men who made these remarks dismissed them as emerging from sheer camaraderie or from curiosity, or as boastfulness, and that they belittled women who remained upset or resentful. The women were often younger, they had less power, and they could be threatened to comply. Yet, the archives also show us how some women deemed these exchanges inappropriate and stood up to such abuse.

The archives hold the records of the trials of the Inquisition tribunals, which all over the Italian peninsula handled reports of harassment and abuse in the confessional booth. For women, confession was paramount because it dictated morality. A priest’s duty was to ask women if they were abiding Christians, and a woman’s morals were bound to her sexuality. Church canons taught that sex was to be only heterosexual, genital, and within marriage.

Sexuality was framed by a moral code of sin and shame, but women were active sexual agents, learning from experience and observation. The inner workings of the female body were a mystery, but sex was not. While literate men had access to medical treatises, women learned through knowledge exchanged within the family and with peers.

However, beyond their neighbourhoods, some women saw the confessional box as a safe space where they could vent, question their experiences, and seek advice on the topic of sexuality. Clergymen acted as spiritual guides, semi-divine figures that could provide solace – a power imbalance that could lead to harassment and abuse.

Reporting instances of harassment

Some women who experienced abuse in the confessional reported it to the Inquisition, and those religious authorities listened. In the tribunals, notaries put down depositions and defendants were summoned. During trials, witnesses were cross-examined to corroborate their statements. Guilty convictions varied: a clergyman could be assigned fasting or spiritual exercises, suspension, exile, or the galleys (forced labour).

The archives show that in 18th-century Italy, Catholic women understood the lurid jokes, the metaphors and the allusions directed at them. In 1736 in Pisa, for example, Rosa went to her confessor for help, worried her husband did not love her, and was advised to “use her fingers on herself” to arouse his desire. She was embarrassed and reported the inappropriate exchange. Documents in the archives frequently show women were questioned if a marriage produced no children: asked if they checked whether their husbands “consumed from behind”, in the same “natural vase”, or if semen fell outside. In 1779 in Onano, Colomba reported that her confessor asked if she knew that to have a baby, her husband needed to insert his penis in her “shameful parts”. In 1739 in Siena, a childless 40-year-old woman, Lucia, was belittled as a confessor offered to check up on her, claiming women “had ovaries like hens” and that her predicament was odd, as it was enough for a woman “to pull their hat and they would get pregnant”. She reported the exchange as an improper interference into her intimate life.

Records from the confessional show examples of women being told, “I would love to make a hole in you”; seeing a priest rubbing rings up and down his fingers to mimic sex acts; and being asked the leading question of whether they had “taken it in their hands” – and how each of these women knew what was being insinuated. They understood that such behaviour amounted to harassment. Acts the confessor thought of as flirting – such as when a priest invited Alessandra to meet him in the vineyard in 1659 – were appalling to the women who reported the events (Vatican, Archivio Dicastero della Fede, Processi vol.42).

The bewildering effect of abuse

It was also a time when the stereotype of older women no longer being sexual beings was rife. Indeed, it was believed that women in their 40s or 50s were no longer physically fit for intercourse, and their sexual drive was mocked by popular literature. In 1721, Elisabetta Neri, a 29-year-old woman seeking advice about her fumi (hot flushes) that knocked her out, was told that by the time women turned 36 they no longer needed to touch themselves, but that this could help let off some steam and help with her condition.

Women were also repeatedly asked about pleasure: if they touched themselves when alone; if they touched other females, or boys, or even animals; if they looked at their friends’ “shameful parts” to compare who “had the largest or the tightest natura, with hair or not” (ADDF, CAUSE 1732 f.516). To women, these comments were inappropriate intrusions; to male harassers, they could be examples of titillating curiosity and advice, such as when a Franciscan friar, in 1715, dismissed intrusive comments about a woman’s sexual life (ADDF, Stanza Storica, M7 R, Trial 3).

Seeking meaningful guidance, women had entrusted these learned figures with their most intimate secrets, and they could be bewildered by the attitudes confessors often displayed. In 1633, Angiola claimed she “shivered for 3 months” after the verbal abuse (ADDF, Vol.31, Processi). Such unsolicited remarks and unwanted physical contact shook them.

The courage to speak up

It is undeniable that sexuality has always been cultural, framed by moral codes and political agendas that are constantly being negotiated. Women have been endlessly policed, with their bodies and behaviour under constant scrutiny. However, history teaches us that women could be aware of their bodies and their sexual experiences. They discussed their doubts, and some stood up to harassment or abusive relationships. In the 18th century in Italy, Catholic women did not always have the language to frame abuse, but they were aware when, in the confessional, they did not experience an “honest” exchange, and at times they did not accept it. They could not prevent it, but they had the courage to act against it.

A culture of sexual abuse is hard to eradicate, but women can be vocal and achieve justice. The events of past centuries show that time was up then, as it still is now.

Author’s note: the parenthetical references in the text refer to physical records in the Vatican archives.




The Conversation

Giada Pizzoni has received a Marie Curie Fellowship.

ref. How Catholic women in 18th-century Italy defied sexual harassment in the confessional – https://theconversation.com/how-catholic-women-in-18th-century-italy-defied-sexual-harassment-in-the-confessional-270594

As with the climate, the world needs an expert panel to confront the inequality crisis

Source: The Conversation – in French – By Joseph E. Stiglitz, Professor, Columbia Business School, Columbia University

Given the scale of worsening inequality around the world, shouldn't countries join forces to create an international panel on the issue, modelled on the Intergovernmental Panel on Climate Change (IPCC), the United Nations body created to assess the science of climate change? The idea of an international panel on inequality was recommended by the G20 Extraordinary Committee of Independent Experts on Global Inequality.

The thinking behind the panel is set out in a report submitted to the G20 by the committee's experts. They argue that the proposed panel “would assist governments and multilateral agencies by providing authoritative assessments and analyses of inequality”. It would not make direct recommendations to countries. Rather, it would set out a range of policy measures that could be used to tackle inequality. Joseph E. Stiglitz, the panel's chair and a Nobel laureate, explains the idea.

What are the main findings of the report on inequality?

Our report reviewed the research on the state of inequality. The findings should alarm us all. Wealth inequality is greater than income inequality, and it has worsened in most countries over the past 40 years.

The global increase in income and wealth at the very top is particularly worrying. The richest are amassing fortunes while the lives of ordinary people stagnate. For every dollar of wealth created since 2000, 41 cents have gone to the richest 1%. Only one cent has gone to the poorest 50%.

This concentration of wealth confers massive influence over the economy and politics. It threatens economic performance and the very foundations of democracy.

What does the report recommend G20 countries do to tackle inequality?

Inequality is the result of choices. Policies exist that can reduce it, including more progressive taxation, debt relief, a revision of global trade rules and limits on monopolies.

Our committee found that significant progress has been made in tracking the scale of inequality, its drivers and the policy responses. Nevertheless, policymakers still lack sufficient, reliable and accessible information on inequality.

There is an urgent need for institutions capable of producing robust analyses of inequality.

In 1988, governments created the Intergovernmental Panel on Climate Change (IPCC) to assess the evidence and provide rigorous analysis to help governments confront the climate emergency. Today we face an inequality emergency, and we need a similar global effort.

That is why our main recommendation is the creation of an international panel of experts on inequality.

Drawing on this report, what do you recommend South Africa do to reduce inequality?

South Africa has shown extraordinary leadership by centring its G20 presidency on solidarity, equality and sustainability. This report is proof of that. We hope South Africa will continue to champion our recommendations, in particular the creation of an international panel on inequality.

Our committee chose not to comment on the specific policies of individual countries. But the report sets out several avenues for reducing inequality, including national measures such as strengthening competition laws, introducing worker-friendly regulation, investing in public services, and adopting more progressive tax and fiscal policies.

The Conversation

Joseph E. Stiglitz is chair of the G20 Extraordinary Committee of Independent Experts on Global Inequality.

Imraan Valodia receives funding from various agencies that support independent academic research.

ref. Comme pour le climat, le monde a besoin d’un panel d’experts pour affronter la crise des inégalités – https://theconversation.com/comme-pour-le-climat-le-monde-a-besoin-dun-panel-dexperts-pour-affronter-la-crise-des-inegalites-270465

How England’s Premier League is trying to stop football’s financial arms race – without a salary cap

Source: The Conversation – Global Perspectives – By James Skinner, Dean Newcastle Business School/Professor of Sport Business, University of Newcastle

Debates about financial regulation in sport often begin with salary caps: strict, transparent cost-control mechanisms common in North American and Australian leagues.

They’re credited with improving competitive balance and financial sustainability, so many might assume English football would follow suit.

While England’s Premier League is preparing the most significant overhaul of its financial rules in a generation, it is avoiding a hard salary cap in favour of a bespoke framework designed for Europe’s promotion and relegation ecosystem and globally fluid transfer market.

So why have these rules been implemented, and will they help address football’s financial arms race, given one of the world’s richest and most financially unequal sporting competitions still refuses to introduce a salary cap?

What’s changing in the Premier League?

The Premier League recently announced that from 2026–27, clubs will move away from the Profitability and Sustainability Rules (PSR) introduced in the 2015–16 season, and towards a model centred on controlling football-related spending and ensuring long-term financial health.

The league’s stated aims are clarity, predictability and resilience. The new rules shift the focus from backward-looking accounting to real-time cost control and robust balance-sheet strength, with closer alignment to the approach of the Union of European Football Associations (UEFA).

Owners will retain freedom to invest in stadiums and infrastructure, but will face tighter constraints on wages, agent fees and “transfer amortisation” – an accounting practice where clubs spread the cost of a player’s transfer fee over the length of their contract to reduce annual costs and stay within spending limits.

Introducing the ‘squad cost ratio’

At the heart of the reforms, the squad cost ratio (SCR) caps how much a club may spend on its first-team squad (wages, agent fees and transfers) relative to its football revenue.

The headline limit is 85% of eligible income, with a small buffer for newly promoted sides to ease the transition.

In practice, a club generating £300 million (A$609 million) from match day, commercial and league distributions could spend around £255 million (A$518 million) on its squad.

Overspending can result in sanctions, including points deductions.
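The arithmetic in the example above can be sketched in a few lines (an illustrative sketch only; the function name and structure are assumptions, not the league's published methodology):

```python
def scr_cap(football_revenue: float, limit: float = 0.85) -> float:
    """Maximum first-team spend (wages, agent fees and transfer
    amortisation) allowed under an 85%-of-revenue style rule."""
    return limit * football_revenue

# The article's example: £300m of eligible football income
cap = scr_cap(300e6)
print(f"Squad spending cap: £{cap / 1e6:.0f}m")  # about £255m
```

Any spend above that cap would be the overspend exposed to sanctions.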

Unlike PSR’s three-year, business-wide profitability test, this squad cost ratio isolates football costs and is monitored during the season, making it easier to understand and harder to game.

Infrastructure and academy investment sit outside the ratio, which means the rule will likely curtail short-term arms races in player wages and fees.

The intent is to stop clubs overspending to keep pace with rivals, enhancing competitive balance without prescribing a hard salary cap.

The second pillar

The second pillar — sustainability and systemic resilience (SSR) — introduces financial health checks aimed at ensuring clubs are solvent and can survive unexpected financial shocks.

Three tests apply:

1. Working capital test. This verifies clubs hold enough cash and commitments to meet month-to-month obligations.

2. Liquidity test. This assesses whether a club can withstand an £85 million (A$173 million) adverse shock, such as lost broadcast income or failure to sell a player during the transfer window.

3. Positive equity test. This requires phasing in the replacement of owner loans with real investment – for example, instead of an owner lending £100 million that must be repaid, the owner invests £100 million as equity, making the club financially stronger.
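As a rough illustration, the three tests could be modelled as simple pass/fail checks (a hypothetical sketch; the field names and pass conditions are simplifying assumptions, not the Premier League's actual rulebook):

```python
from dataclasses import dataclass

@dataclass
class ClubFinances:
    cash_available: float        # cash and committed facilities
    monthly_obligations: float   # wages, suppliers and instalments due
    liquid_reserves: float       # resources available to absorb a shock
    assets: float
    liabilities: float

ADVERSE_SHOCK = 85e6  # the £85m shock scenario described above

def working_capital_ok(c: ClubFinances) -> bool:
    # 1. Can the club meet its month-to-month obligations?
    return c.cash_available >= c.monthly_obligations

def liquidity_ok(c: ClubFinances) -> bool:
    # 2. Could it absorb lost broadcast income or a failed player sale?
    return c.liquid_reserves >= ADVERSE_SHOCK

def positive_equity_ok(c: ClubFinances) -> bool:
    # 3. Is the balance sheet backed by equity rather than repayable loans?
    return c.assets - c.liabilities > 0

club = ClubFinances(40e6, 30e6, 100e6, 900e6, 700e6)
print(all([working_capital_ok(club), liquidity_ok(club),
           positive_equity_ok(club)]))  # prints True
```

A club would need to pass all three checks; failing any one of them signals the kind of fragility the SSR pillar is meant to catch.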

Together, these measures push for stronger balance sheets, reduced reliance on risky debt and greater transparency, vital after years of insolvency threats across England’s football ecosystem.

By embedding resilience alongside cost control, the framework aims to curb boom-and-bust cycles and protect competitive integrity.

Some concerns remain

Despite its promise, the framework raises practical and strategic concerns.

First, English clubs may face competitive disadvantages in European markets if the rules around how they can generate and spend revenue are stricter than those used abroad. Minor differences may compound in a global talent race, potentially constraining investment in elite players over time.

Second, mandating equity injections while phasing out soft loans raises the cost of capital and narrows financial engineering options, making clubs more expensive to run and less attractive to private equity investment, especially mid-table teams with limited profits.

Third, and most acute, is valuation risk: SSR gives regulatory weight to “squad market value”, a volatile and loosely defined metric. Without clear standards, player valuations can legitimately diverge by tens of millions, allowing clubs to manipulate these valuations to meet financial rules instead of improving real finances.

Closing loopholes on operating spend and debt may inadvertently open a larger one around player valuations, which are harder to audit and easier to manipulate.

Will these changes work?

The two key components shaping the Premier League’s path are the SCR, a cap-like limit tied to football revenue, and SSR, which measures liquidity, working capital, and equity strength to secure financial health.

Ultimately, the question is whether these changes will deliver the desired financial transparency, or just create new loopholes.

A traditional hard salary cap for Premier League clubs remains unlikely. The Professional Footballers’ Association has warned it would unlawfully restrict trade, and leading legal opinions argue rigid caps risk breaching UK or EU employment and competition law and don’t fit a football pyramid system.

The Premier League’s innovative approach could set a benchmark, but we will have to wait and see if it becomes a yardstick for other leagues.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. How England’s Premier League is trying to stop football’s financial arms race – without a salary cap – https://theconversation.com/how-englands-premier-league-is-trying-to-stop-footballs-financial-arms-race-without-a-salary-cap-270666

The Hong Kong high-rise fire shows how difficult it is to evacuate in an emergency

Source: The Conversation – Global Perspectives – By Milad Haghani, Associate Professor and Principal Fellow in Urban Risk and Resilience, The University of Melbourne


The Hong Kong high-rise fire, which spread across multiple buildings in a large residential complex, has killed dozens, with hundreds reported missing.

The confirmed death toll is now 44, with close to 300 people still unaccounted for and dozens in hospital with serious injuries.

This makes it one of Hong Kong’s deadliest building fires in living memory, and already the worst since the Garley Building fire in 1996.

Although more than 900 people have reportedly been evacuated from Wang Fuk Court, it’s not clear how many residents remain trapped.

This catastrophic fire – thought to have spread from building to building via burning bamboo scaffolding and to have been fanned by strong winds – highlights how difficult it is to evacuate high-rise buildings in an emergency.

When the stakes are highest

Evacuations of high-rises don’t happen every day, but they occur often enough. And when they go wrong, the consequences are almost always severe. The stakes are highest in buildings that are full at predictable times: residential towers at night, office towers during the day.

We’ve seen this in the biggest modern examples, from the World Trade Center in the United States to Grenfell Tower in the United Kingdom.

The patterns repeat: once a fire takes hold, getting thousands of people safely down dozens of storeys becomes a race against time.

But what actually makes evacuating a high-rise building so challenging?

It isn’t just a matter of “getting people out”. It’s a collision between the physical limits of the building and the realities of human behaviour under stress.

It’s a long way down to safety

The biggest barrier is simply vertical distance. Stairwells are the only reliable escape route in most buildings.

Stair descent in real evacuations is far slower than most people expect. Under controlled or drill conditions people move down at around 0.4–0.7 metres per second. But in an actual emergency, especially in high-rise fires, this can drop sharply.

During 9/11, survivors’ documented stair-descent speeds were often below 0.3 m/s. These slow-downs accumulate dramatically over long vertical distances.
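A rough back-of-the-envelope calculation shows why these speeds matter. The 0.4–0.7 m/s drill range and the sub-0.3 m/s emergency figure come from the studies cited above; the 30-storey height and 3 metres per floor are illustrative assumptions, not figures from any specific building.

```python
# Rough estimate of stair-descent time in a tall residential tower.
# Speeds are vertical descent rates from evacuation studies; the
# 30-storey height and 3 m per floor are illustrative assumptions.

FLOORS = 30           # assumed tower height
FLOOR_HEIGHT_M = 3.0  # assumed vertical metres per storey

def descent_time_minutes(speed_m_per_s: float) -> float:
    """Time to descend the full stairwell at a constant vertical speed."""
    vertical_distance = FLOORS * FLOOR_HEIGHT_M  # 90 m
    return vertical_distance / speed_m_per_s / 60.0

print(f"drill, fast (0.7 m/s): {descent_time_minutes(0.7):.1f} min")  # ~2.1
print(f"drill, slow (0.4 m/s): {descent_time_minutes(0.4):.1f} min")  # ~3.8
print(f"emergency   (0.3 m/s): {descent_time_minutes(0.3):.1f} min")  # 5.0
```

Even this simple model understates the problem: it ignores pre-movement delay, congestion, rest stops and smoke, all of which stretch real evacuation times well beyond the raw descent time.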

Fatigue is a major factor: prolonged descent significantly reduces walking speed. Post-incident surveys confirm that a large majority of high-rise evacuees stop to rest at least once. Nearly half of the older survivors of the 2010 high-rise fire in Shanghai reported slowing down significantly.

Long stairwells, landings, and the geometry of high-rise stairs all contribute to congestion, especially when flows from multiple floors merge into a single shaft.

Older adults, people with physical or mobility limitations and groups evacuating together all move more slowly than the able-bodied individuals whose speeds evacuation models typically assume, and they can create bottlenecks. Slow movers are especially relevant in residential buildings, where diverse occupants mean movement speeds vary widely.

Visibility matters too. Experimental studies show that reduced lighting significantly slows down people going down stairs. This suggests that when smoke reduces visibility in real events, movement can slow even further as people hesitate, misjudge steps, or adjust their speed.

Human behaviour can lead to delays

Human behaviour is one of the biggest sources of delay in high-rise evacuations. People rarely act immediately when an alarm sounds. They pause, look for confirmation, check conditions, gather belongings, or coordinate with family members.

These early minutes are consistently some of the costliest when evacuating from tall buildings.

Studies of the World Trade Center evacuations show the more cues people saw – smoke, shaking, noise – the more they sought extra information before moving. That search for meaning adds delay. People talk to colleagues, look out of windows, phone family, or wait for an announcement. Ambiguous cues slow them even further.

In residential towers, families, neighbours and friend groups naturally try to evacuate together. Groups tend to spread across the width of the stair, or cluster in formations that reduce the overall flow. But our research shows that when a group moves in a “snake” formation – one behind the other – it travels faster, occupies less space, and allows others to pass more easily.

These patterns matter in high-rise housing, where varied household types and mixed abilities make moving in groups the norm.

Why stairs aren’t enough

As high-rises grow taller and populations age, the old assumption that “everyone can take the stairs” simply no longer holds. A full building evacuation can take too long, and for many residents (older adults, people with mobility limitations, families evacuating together) long stair descents are sometimes impossible.

This is why many countries have turned to refuge floors: fire- and smoke-protected levels built into towers as safe staging points. These can reduce bottlenecks and prevent long queues. They give people somewhere safe to rest, transfer across to a clearer stair, or wait for firefighters. Essentially, they make vertical movement more manageable in buildings where continuous descent isn’t realistic.

Alongside them are evacuation elevators. These are lifts engineered to operate during a fire with pressurised shafts, protected lobbies and backup power. The most efficient evacuations use a mix of stairs and elevators, with ratios adjusted to the building height, density and demographics.
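The idea of adjusting the stair/elevator ratio can be illustrated with a toy capacity model: each route clears people at some steady rate, and the building is clear when the slower route finishes its share. All the numbers below are assumptions for illustration, not engineering values for any real building.

```python
# Toy model of a mixed stair/elevator evacuation. The occupant count
# and the per-route clearance rates are assumed values, chosen only
# to illustrate how the split between routes affects total time.

def clearance_minutes(occupants: int, elevator_fraction: float,
                      elevator_rate: float = 30.0,  # people/min (assumed)
                      stair_rate: float = 60.0      # people/min (assumed)
                      ) -> float:
    """Time until both routes have cleared their share of occupants."""
    via_lift = occupants * elevator_fraction
    via_stairs = occupants * (1 - elevator_fraction)
    return max(via_lift / elevator_rate, via_stairs / stair_rate)

# 900 occupants: stairs alone versus a one-third/two-thirds split.
print(clearance_minutes(900, 0.0))   # 15.0 min, stairs only
print(clearance_minutes(900, 1/3))   # ~10 min, both routes finish together
```

In this sketch the best split is the one where both routes finish at the same time; tilting the ratio toward whichever route has spare capacity is exactly the kind of adjustment the article describes for different building heights, densities and demographics.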

The lesson is clear: high-rise evacuation cannot rely on a single tool. Stairs, refuge floors and protected elevators should all play a part in making vertical living safer.

The Conversation

Erica Kuligowski is affiliated with the Society of Fire Protection Engineers (SFPE) as a Section Editor for their Handbook of Fire Protection Engineering (Human Behaviour Section) and as a member of the Board of Governors for the SFPE Foundation. From 2002 to 2020, Erica worked as a research engineer and social scientist in the Engineering Laboratory of NIST, where she contributed to NIST’s Technical Investigation of the 2001 WTC Disaster and received US government funding to study occupant evacuation elevators.

Ruggiero Lovreglio receives funding from the Ministry of Business, Innovation and Employment (New Zealand), Royal Society Te Apārangi (New Zealand) and NIST (USA)

Milad Haghani does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Hong Kong high-rise fire shows how difficult it is to evacuate in an emergency – https://theconversation.com/the-hong-kong-high-rise-fire-shows-how-difficult-it-is-to-evacuate-in-an-emergency-270774