“I went out and cried”: what staff at aged care facilities say about their grief when residents die

Source: The Conversation – in French – By Jennifer Tieman, Matthew Flinders Professor and Director of the Research Centre for Palliative Care, Death and Dying, Flinders University

Repeated experiences of death can lead to cumulative grief. Maskot/Getty Images

As the population ages, we are living longer and dying older. End-of-life care is therefore playing an ever larger role in aged care. In Canada, about 30% of people aged 85 and over live in a nursing home or seniors’ residence, a proportion that rises significantly at more advanced ages.

But what does this mean for those who work in aged care? Research suggests that care staff experience a particular kind of grief when residents die. Yet their grief often goes unnoticed, and they can find themselves without adequate support.


This article is part of our series La Révolution grise (The Grey Revolution). The Conversation examines, from every angle, the impact on our society of the aging of the massive boomer cohort, a society they have been transforming since they came into the world. How we house ourselves, work, consume culture, eat, travel, care for ourselves, live… discover with us the upheavals under way, and those to come.


Building relationships over time

Staff in aged care facilities don’t just help residents shower or eat; they get actively involved and build bonds with them.

As part of our own research, we spoke with care staff who look after older people in residential facilities and in their homes.

Care staff are aware that many of the people they look after will die, and that they have a role to play in supporting them toward the end of their lives. Through their work, they often build rich and rewarding relationships with the older people in their care.

As a result, the death of an older person can be a source of deep sadness for care staff. As one of them told us:

I know I grieve for some of those who die […] You spend time with them and you love them.

Some of the carers we interviewed spoke of being present with older people, talking to them or holding their hand as they died. Others explained that they shed tears for the person who had died, but also over their own loss, because they had known the older person and been involved in their life.

I think what made it worse was when her breathing became very shallow and I knew she was coming to the end. I went out. I told her I was going out for a moment. I went out and I cried, because I wished I could save her, but I knew I couldn’t.

Sometimes care staff don’t get the chance to say goodbye, or to be acknowledged as someone who has suffered a loss, even after caring for the person for months or years. One aged care worker noted:

If people die in hospital, it’s a different kind of grief. Because they can’t say goodbye. Often the hospital doesn’t tell you.

Care staff must often help families and loved ones come to terms with the death of a parent, relative or friend. This can add to the emotional load on staff who may themselves be grieving.




Read more:
What should you eat after 50 to prevent muscle injuries?


Cumulative grief

Repeated experiences of death can lead to cumulative grief and emotional strain. While the staff we interviewed found meaning and value in their work, they also found it hard to be regularly confronted with death.

One staff member told us that with time, and after facing many deaths, you can “feel a bit robotic. Because you have to become that way to be able to cope.”

Organizational problems such as understaffing or heavy workloads can also exacerbate these feelings of exhaustion and dissatisfaction. Staff stressed the need for support they can count on to cope.

Sometimes all you want is to talk. You don’t need someone to solve anything for you. You just want to be heard.




Helping staff manage their grief

Aged care organizations must take steps to support their staff’s well-being, including by recognizing the grief many feel when older people die.

After an older person dies, offering support to the staff who worked closely with that person, and acknowledging the emotional bonds that existed between them, are effective ways to recognize and validate staff grief. It can be as simple as asking staff members how they are doing, or giving them the chance to take time to mourn the person who has died.




Read more:
Aging in rural areas is a collective challenge that must be taken seriously


Workplaces should also encourage self-care practices more broadly, promoting activities such as scheduled breaks, connection with colleagues, and prioritizing downtime and physical activity. Staff appreciate workplaces that encourage, normalize and support their self-care practices.

We also need to think about how to normalize talking about death and dying within our families and communities. Reluctance to acknowledge death as part of life can add to the emotional load on staff, particularly if families see death as a failure of care.

Conversely, aged care staff told us time and again how important positive feedback and recognition from families were to them. As one carer recalled:

We had a death this weekend. It was a very long-term resident. And her daughter came in specially this morning to tell me how fantastic the care had been. That comforts me; it confirms that what we are doing is right.

As members of families and communities, we must recognize that care workers are particularly vulnerable to feelings of grief and loss, because they have often built relationships with the people they care for over months or years. By supporting the well-being of these essential workers, we help them keep caring for us and our loved ones as we age and approach the end of our lives.

La Conversation Canada

Jennifer Tieman receives funding from the Department of Health, Disability and Ageing, the Department of Health and Wellbeing (SA) and the Medical Research Future Fund. Specific research grants, as well as national grants for projects such as ELDAC, CareSearch and palliAGED, enabled the research and projects whose findings and resources are presented in this article. Jennifer sits on various committees and project advisory groups, including the Advance Care Planning Australia steering committee, the IHACPA Aged Care Network and Palliative Care Australia’s national expert advisory group.

Dr Priyanka Vandersman receives funding from Department of Health, Disability and Ageing. She is affiliated with Flinders University, End of Life Directions for Aged Care project. She is a Digital Health adviser for the Australian Digital Health Agency, and serves as committee member for the Nursing and Midwifery in Digital Health group within the Australian Institute of Digital Health, as well as Standards Australia’s MB-027 Ageing Societies committee.

ref. « Je suis sorti et j’ai pleuré » : ce que le personnel des établissements pour personnes âgées dit de son chagrin lorsque des résidents décèdent – https://theconversation.com/je-suis-sorti-et-jai-pleure-ce-que-le-personnel-des-etablissements-pour-personnes-agees-dit-de-son-chagrin-lorsque-des-residents-decedent-263502

Future of nation’s energy grid hurt by Trump’s funding cuts

Source: The Conversation – USA (2) – By Roshanak (Roshi) Nateghi, Associate Professor of Sustainability, Georgetown University

Large-capacity electrical wires carry power from one place to another around the nation. Stephanie Tacy/NurPhoto via Getty Images

The Trump administration’s widespread cancellation and freezing of clean energy funding is also hitting essential work to improve the nation’s power grid. That includes investments in grid modernization, energy storage and efforts to protect communities from outages during extreme weather and cyberattacks. Ending these projects leaves Americans vulnerable to more frequent and longer-lasting power outages.

The Department of Energy has defended the cancellations, saying that “the projects did not adequately advance the nation’s energy needs, were not economically viable and would not provide a positive return on investment of taxpayer dollars.” Yet before any funds are released through these programs, each grant must pass evaluations based on the department’s standards. These include rigorous assessments of technical merit, potential risks and cost-benefit analyses, all designed to ensure alignment with national energy priorities and responsible stewardship of public funds.

I am an associate professor studying sustainability, with over 15 years of experience in energy systems reliability and resilience. In the past, I also served as a Department of Energy program manager focused on grid resilience. I know that many of these canceled grants were foundational investments in the science and infrastructure necessary to keep the lights on, especially when the grid is under stress.

The dollar-value estimates vary, and some of the money has already been spent. A list of canceled projects maintained by energy analysis company Yardsale totals about US$5 billion. An Oct. 2, 2025, announcement from the department touts $7.5 billion in cuts to 321 awards across 223 projects. Documents leaked to Politico reportedly identified further awards under review. Some media reports suggest the full value of at-risk commitments may reach $24 billion, a figure the Trump administration has neither publicly confirmed nor refuted.

These were not speculative ventures. Some were competitively awarded projects that the department funded specifically to enhance grid efficiency, reliability and resilience.

Grid improvement funding

For years, the federal government has been criticized for investing too little in the nation’s electricity grid. The long-term planning — and spending — required to ensure the grid reliably serves the public often falls victim to short-term political cycles and shifting priorities across both parties.

But these recent cuts come amid increasingly frequent extreme weather, increased cybersecurity threats to the systems that keep the lights on, and aging grid equipment that is nearing the end of its life.

These projects sought to make the grid more reliable so it can withstand storms, hackers, accidents and other problems.

National laboratories

In addition to those project cancellations, President Donald Trump’s proposed budget for 2026 contains deep cuts to the Office of Energy Efficiency and Renewable Energy, a primary funding source for several national laboratories, including the National Renewable Energy Laboratory, which may face widespread layoffs.

Among other work, these labs conduct fundamental grid-related research like developing and testing ways to send more electricity over existing power lines, creating computational models to simulate how the U.S. grid responds to extreme weather or cyberattacks, and analyzing real-time operational data to identify vulnerabilities and enhance reliability.

These efforts are necessary to design, operate and manage the grid, and to figure out how best to integrate new technologies.

Solar panels and large-capacity battery storage can support microgrids that keep key services powered despite bad weather or high demand.
Sandy Huffaker/AFP via Getty Images

Grid resilience and modernization

Some of the projects that have lost funding sought to upgrade grid management – including improved sensing of real-time voltage and frequency changes in the electricity sent to homes and businesses.

That program, the Grid Resilience and Innovation Partnerships Program, also funded efforts to automate grid operations, allowing faster response to outages or changes in output from power plants. It also supported developing microgrids – localized systems that can operate independently during outages. The canceled projects in that program, estimated to total $724.6 million, were in 24 states.

For example, a $19.5 million project in the Upper Midwest would have installed smart sensors and software to detect overloaded power lines or equipment failures, helping people respond faster to outages and prevent blackouts.

A $50 million project in California would have boosted the capacity of existing subtransmission lines, improving power stability and grid flexibility by installing a smart substation, without needing new transmission corridors.

Microgrid projects in New York, New Mexico and Hawaii would have kept essential services running during disasters, cyberattacks and planned power outages.

Another canceled project included $11 million to help utilities in 12 states use electric school buses as backup batteries, delivering power during emergencies and peak demand, like on hot summer days.

Several transmission projects were also canceled, including a $464 million effort in the Midwest to coordinate multiple grid connections from new generation sites.

Long-duration energy storage

The grid must meet demand at all times, even when wind and solar generation is low or when extreme weather downs power lines. A key element of that stability involves storing massive amounts of electricity for when it’s needed.

One canceled project would have spent $70 million turning retired coal plants in Minnesota and Colorado into buildings holding iron-air batteries capable of powering several thousand homes for as many as four days.

Electric school buses like these could provide meaningful amounts of power to the grid during an outage.
Chris Jackson for The Washington Post via Getty Images

Rural and remote energy systems

Another terminated program sought to help people who live in rural or remote places, who are often served by just one or two power lines rather than a grid that can reroute power around an interruption.

A $30 million small-scale bioenergy project would have helped three rural California communities convert forest and agricultural waste into electricity.

Not all of the terminated initiatives were explicitly designed for resilience. Some would have strengthened grid stability as a byproduct of their main goals. The rollback of $1.2 billion in hydrogen hub investments, for example, undermines projects that would have paired industrial decarbonization with large-scale energy storage to balance renewable power. Similarly, several canceled industrial modernization projects, such as hybrid electric furnaces and low-carbon cement plants, were structured to manage power demand and integrate clean energy, to improve grid stability and flexibility.

The reliability paradox

The administration has said that these cuts will save money. In practice, however, they shift spending from prevention of extended outages to recovery from them.

Without advances in technology and equipment, grid operators face more frequent outages, longer restoration times and rising maintenance costs. Without investment in systems that can withstand storms or hackers, taxpayers and ratepayers will ultimately bear the costs of repairing the damage.

Some of the projects now on hold were intended to allow hospitals, schools and emergency centers to reduce blackout risks and speed power restoration. These are essential reliability and public safety functions, not partisan initiatives.

Canceling programs to improve the grid leaves utilities and their customers dependent on emergency stopgaps — diesel generators, rolling blackouts and reactive maintenance — instead of forward-looking solutions.

The Conversation

Roshanak (Roshi) Nateghi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Future of nation’s energy grid hurt by Trump’s funding cuts – https://theconversation.com/future-of-nations-energy-grid-hurt-by-trumps-funding-cuts-267504

Children learn to read with books that are just right for them – but that might not be the best approach

Source: The Conversation – USA (2) – By Timothy E Shanahan, Distinguished Professor Emeritus of Literacy, University of Illinois Chicago

Children and an adult read books at the Altadena Main Library in Altadena, Calif., in March 2025. Hans Gutknecht/MediaNews Group/Los Angeles Daily News via Getty Images

After decades of stagnating reading performance, American literacy levels have begun to drop, according to the National Assessment of Educational Progress, a program of the Department of Education.

The average reading scores of 12th graders in 2024 were 3 points lower than they were in 2019. More kids are failing to even reach basic levels of reading that would allow them to successfully do their schoolwork, according to the assessment.

There is much blaming and finger-pointing as to why the U.S. isn’t doing better. Some experts say that parents are allowing kids to spend too much time on screens, while others argue that elementary teachers aren’t teaching enough phonics, or that school closures during the COVID-19 pandemic have had lingering effects.

As a scholar of reading, I think the best explanation is that most American schools are teaching reading using an approach that new research shows severely limits students’ opportunities to learn.

Students often learn to read with books that are preselected so they can easily understand most of the words in them.
Jacqueline Nix/iStock/Getty Images Plus

A Goldilocks approach to books

In the 1940s, Emmett Betts, an education scholar and theorist, proposed the idea that if the books used to teach reading were either too easy or too hard, then students’ learning would be stifled.

The thinking went that kids should be taught to read with books that were just the right fit for them.

The theory was backed by research and included specific criteria for determining the best books for each child.

The idea is that kids should work with books they can already read with 95% word accuracy and 75% to 89% comprehension.
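To make those thresholds concrete, here is a minimal sketch in Python of how such a screening rule classifies a book for a given reader. The instructional band uses the figures quoted above; the cutoffs and labels for the other two zones are illustrative assumptions, not criteria stated in this article.

# Minimal sketch of a Betts-style book-fit check.
# The "just right" band (95%+ accuracy, 75-89% comprehension) comes from
# the article; the other cutoffs and labels are illustrative assumptions.

def book_fit(word_accuracy: float, comprehension: float) -> str:
    # Scores are percentages from 0 to 100.
    if word_accuracy >= 95 and comprehension >= 90:
        return "independent: likely too easy to stretch the reader"
    if word_accuracy >= 95 and comprehension >= 75:
        return "instructional: the 'just right' zone"
    return "frustration: traditionally judged too hard"

# Example: a student reads 96% of words correctly and answers 80% of
# comprehension questions.
print(book_fit(96, 80))  # instructional: the 'just right' zone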

Most American schools continue to use this approach to teaching reading, nearly a century later.

A popular method

To implement this approach, schools usually test children multiple times each year to determine which books they should be allowed to read in school. Teachers and librarians will label and organize books into color-coded bins, based on their level of difficulty. This practice helps ensure that no child strays into a book judged too difficult for them to easily follow. Teachers then divide their class into reading groups based on the book levels the students are assigned.

Most elementary teachers and middle school teachers say they try to teach at their students’ reading levels, as do more than 40% of high school English teachers.

This approach might sound good, but it means that students work with books they can already read pretty well. And they might not have very much to learn from those books.

New research challenges these widely used instructional practices. My July 2025 book, “Leveled Reading, Leveled Lives,” explains that students learn more when taught with more difficult texts. In other words, this popular approach to teaching has been holding kids back rather than helping them succeed.

Many students will read at levels that match the grades they are in. But kids who cannot already read those grade-level texts with high comprehension are demoted to below-grade-level books in the hopes that this will help them make more progress.

Often, parents do not know that their children are reading at a level lower than the grade they are in.

Perhaps that is why, while more than one-third of American elementary students read below grade level, 90% of parents think their kids are at or above grade level.

What’s in a reading level?

The approach to “just right” reading has long roots in American history.

In the 1840s, U.S. schools were divided into grade levels based on children’s ages. In response, textbook publishing companies organized their reading textbooks the same way. There was a first grade book, a second grade book and so on.

These reading levels admittedly were somewhat arbitrary. The grade-level reading diet proposed by one company may have differed from its competitors’ offerings.

That changed in 2010 with the Common Core state standards, a multistate educational initiative that set K-12 learning goals in reading and math in more than 40 states.

At the time, too many students were leaving high school without the ability to read the kinds of books and papers used in college, the workplace or the military.

Accordingly, Common Core set ranges of text levels for each grade to ensure that by high school graduation, students would be able to easily handle the reading they would encounter in college and other settings after graduation. Many states have replaced or revised those standards over the past 15 years, but most continue to keep those text levels as a key learning goal.

That means that most states have set reading levels that their students should be able to reach by each grade. Students who do so should graduate from high school with sufficient literacy to participate fully in American society.

But this instructional level theory can stand in the way of getting kids to those goals. If students cannot already read those grade-level texts reasonably well, the teacher is to provide easier books rather than adjust the instruction to help them catch up.

But that raises a question: If children spend their time while they are in the fourth grade reading second grade books, will they ever catch up?

New research suggests that children could benefit more from reading books that are slightly advanced for them, even if they cannot immediately grasp almost all of the words.
Jerry Holt/Star Tribune via Getty Images

What the research says

For more than 40 years, there was little research into the effectiveness of teaching reading with books that were easy for kids to follow. Still, the number of schools buying into the idea burgeoned.

Research into the effectiveness – or, actually, ineffectiveness – of this method has finally begun to accumulate. These studies show that teaching students at their reading levels, rather than their grade levels, either offers no benefit or can slow how much children learn.

Since 2000, the federal government has spent tens of billions of dollars trying to increase children’s literacy rates. State expenditures toward this goal have been considerable, as well.

Despite these efforts, there have been no improvements in U.S. reading achievement for middle school or high school students since 1970.

I believe it is important to consider the emerging research that shows there will not be considerable reading gains until kids are taught to read with sufficiently challenging and meaty texts.

The Conversation

Timothy E Shanahan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Children learn to read with books that are just right for them – but that might not be the best approach – https://theconversation.com/children-learn-to-read-with-books-that-are-just-right-for-them-but-that-might-not-be-the-best-approach-267510

How the Philadelphia Art Museum is reinventing itself for the Instagram age

Source: The Conversation – USA (2) – By Sheri Lambert, Professor of Marketing, Temple University

Modernizing a century-old cultural brand in Philly can be risky. Rob Cusick/Philadelphia Art Museum

On Philadelphia’s famed Benjamin Franklin Parkway, where stone, symmetry and civic ambition meet, something subtle yet seismic has happened.

The city’s grandest temple to art has shed a preposition.

After nearly a century as the Philadelphia Museum of Art, or PMA, the institution now calls itself simply the Philadelphia Art Museum – or PhAM, as the new logo and public rollout invite us to say.

The change may seem cosmetic, but as a marketing scholar at Temple University whose research focuses on branding and digital marketing strategy, I know that in the tight geometry of naming and branding, every word matters.

The museum’s new identity signals not just a typographic update but a transformation in tone, purpose and reach. It’s as if the museum has taken a long, deep breath … and decided to loosen its collar.

Insta-friendly design

For decades, the museum’s granite facade has represented permanence. Its pediments crowned with griffins – mythological creatures that are part lion and part eagle – have looked out across the parkway like silent sentinels of culture. The rebrand dares to make those sentinels dance.

In its new form, PhAM is deliberately more flexible, less marble, more motion. The logo revives the griffin but pairs it with a bold, circular emblem that is unmistakably digital. The new mark is chunkier, more assertive and designed to hold its own on a phone screen.

Like the 2015 Metropolitan Museum of Art’s digital overhaul in New York, the Philadelphia Art Museum is leaning into an era where visitors first encounter culture through screens, not doors. As the Met’s former chief digital officer Sree Sreenivasan stated, “Our competition is Netflix and Candy Crush,” not other museums.

The PhAM’s visual language is redesigned for environments filled with scrolling, swiping and sharing. Through this marketer’s lens, the goal is clear: to ensure that the museum lives not only on the parkway but in the algorithm.

The museum’s new branding and signage aims to appeal to younger and more diverse audiences.
Rob Cusick/Philadelphia Art Museum

A little younger, more cheeky

There is something refreshing about a legacy institution willing to meet its existing or future audience where they already are. The museum’s leadership frames the change as a broader renewal – a commitment to accessibility, community and openness.

The rebrand showcases “Philadelphia,” which now takes center stage in the name and logo, a subtle but potent reminder that the museum’s roots are here. In the previous design, the word “Art” was much larger and bolder than “Philadelphia.”

And then there’s the nickname: PhAM. It’s playful – think, “Hey, fam!” or a Batman comic-style Pow! Bam! PhAM! – compact, easy to say and just cheeky enough to intrigue a new generation. It’s Instagrammable and hashtaggable. It’s got trending power. I asked a lecture hall full of marketing students in their 20s what they thought of it, and they generally loved it. They thought it was “fun,” “hip” and had enough “play” in the name to make them want to visit.

It’s also a nod to the way folks from Philly actually talk about the place. No one in Philadelphia ever says, “Let’s go to the Museum of Art.” They call it “the Art Museum.” The brand finally caught up with the vernacular.

A balancing act

Rebrands in the cultural sector are rarely simple makeovers. They are identity reckonings.

The Tate Modern in London mastered this dance in 2016 when it modernized its graphics and digital outreach while keeping the weight of its bones intact.

Others have stumbled.

When the Whitney Museum in New York debuted a minimalist “W” in 2013, reactions were mixed. To me, it felt more like a tech startup than a place of art.

PhAM now faces that same paradox. How does the cultural institution appear modern without erasing its majesty? Museums, after all, trade in authority as much as accessibility.

The new name carries subtle risks. Some longtime patrons may bristle at the casual tone. And the phrase “Museum of Art” carries an academic formality that “Art Museum” softens.

And the more flexible a brand’s logo or voice becomes, the more it risks dissolving into the noise of digital sameness. While brands must adapt their visuals and tone to fit different social media platforms and audiences, there is a fine line between flexibility and dilution. The more a brand’s logo, voice or visual identity bends to accommodate every digital trend or platform aesthetic, the greater the risk that it loses its edge.

For example, when too many brands adopt a minimalistic sans-serif logo – as we’ve seen with fashion brands such as Burberry and Saint Laurent – the result is a uniform aesthetic that makes it difficult for any single identity to stand out.

Flexibility should serve differentiation, not erode it.

In the end, I appreciate how PhAM’s revival of the griffin, steeped in the building’s history, keeps the brand tethered to its architectural DNA.

For now, the rebrand communicates both humility and confidence. It acknowledges that even icons must learn to speak new languages. The gesture isn’t just aesthetic; it’s generational. By softening its posture and modernizing its voice, the Philadelphia Art Museum appears intent on courting a new cohort of museumgoers used to stories unfolding on screens. This is a rebrand not merely for the faithful but for those who might never have thought of the museum as “for them” in the first place.

Philadelphia Art Museum’s new branding on display on N. 5th Street in North Philadelphia.
Sheri Lambert, CC BY-NC-SA


The Conversation

Sheri Lambert does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How the Philadelphia Art Museum is reinventing itself for the Instagram age – https://theconversation.com/how-the-philadelphia-art-museum-is-reinventing-itself-for-the-instagram-age-267945

Why you can salvage moldy cheese but never spoiled meat − a toxicologist advises on what to watch out for

Source: The Conversation – USA (3) – By Brad Reisfeld, Professor Emeritus of Chemical and Biological Engineering, Biomedical Engineering, and Public Health, Colorado State University

Molds on foods produce a range of microbial toxins and biochemical byproducts that can be harmful. JulieAlexK/iStock via Getty Images

When you open the refrigerator and find a wedge of cheese flecked with green mold, or a package of chicken that smells faintly sour, it can be tempting to gamble with your stomach rather than waste food.

But the line between harmless fermentation and dangerous spoilage is sharp. Consuming spoiled foods exposes the body to a range of microbial toxins and biochemical by-products, many of which can interfere with essential biological processes. The health effects can vary from mild gastrointestinal discomfort to severe conditions such as liver cancer.

I am a toxicologist and researcher specializing in how foreign chemicals such as those released during food spoilage affect the body. Many spoiled foods contain specific microorganisms that produce toxins. Because individual sensitivity to these chemicals varies, and the amount present in spoiled foods can also vary widely, there are no absolute guidelines on what is safe to eat. However, it’s always a good idea to know your enemies so you can take steps to avoid them.

Nuts and grains

In plant-based foods such as grains and nuts, fungi are the main culprits behind spoilage, forming fuzzy patches of mold in shades of green, yellow, black or white that usually give off a musty smell. Colorful though they may be, many of these molds produce toxic chemicals called mycotoxins.

Two common fungi found on grains and nuts such as corn, sorghum, rice and peanuts are Aspergillus flavus and A. parasiticus. They can produce mycotoxins known as aflatoxins, which form molecules called epoxides that can trigger mutations when they bind to DNA. Repeated exposure to aflatoxins can damage the liver and has been linked to liver cancer, especially for people who already have other risk factors for it, such as hepatitis B infection.

Fusarium molds can grow on corn and other grains.
Orest Lyzhechka/iStock via Getty Images Plus

Fusarium is another group of fungal pathogens that can grow as mold on grains such as wheat, barley and corn, especially at high humidity. Infected grains may appear discolored or have a pinkish or reddish hue, and they might emit a musty odor. Fusarium fungi produce mycotoxins called trichothecenes, which can damage cells and irritate the digestive tract. They also make another toxin, fumonisin B1, which disrupts how cells build and maintain their outer membranes. Over time, these effects can harm the liver and kidneys.

If grains or nuts look moldy, discolored or shriveled, or if they have an unusual smell, it’s best to err on the side of caution and throw them out. Aflatoxins, especially, are known to be potent cancer-causing agents, so they have no safe level of exposure.

Fruits

Fruits can also harbor mycotoxins. When they become bruised or overripe, or are stored in damp conditions, mold can easily take hold and begin producing these harmful substances.

One biggie is a blue mold called Penicillium expansum, which is best known for infecting apples but also attacks pears, cherries, peaches and other fruit. This fungus produces patulin, a toxin that interferes with key enzymes in cells to hobble normal cell functions and generate unstable molecules called reactive oxygen species that can harm DNA, proteins and fats. In large amounts, patulin can injure major organs such as the kidneys, liver, digestive tract and immune system.

P. expansum’s blue and green cousins, Penicillium italicum and Penicillium digitatum, are frequent flyers on oranges, lemons and other citrus fruits. It’s not clear whether they produce dangerous toxins, but they taste awful.

Penicillium digitatum forms a pretty green growth on citrus fruits that makes them taste terrible.
James Scott via Wikimedia Commons, CC BY-SA

It is tempting to just cut off the moldy parts of a fruit and eat the rest. However, molds can send out microscopic, rootlike structures called hyphae that penetrate deeply into food, potentially releasing toxins even in seemingly unaffected bits. Especially for soft fruits, where hyphae can grow more easily, it’s safest to toss moldy specimens. Do it at your own risk, but for hard fruits I do sometimes just cut off the moldy bits.

Cheese

Cheese showcases the benefits of controlled microbial growth. In fact, mold is a crucial component in many of the cheeses you know and love. Blue cheeses such as Roquefort and Stilton get their distinctive, tangy flavor from chemicals produced by a fungus called Penicillium roqueforti. And the soft, white rind on cheeses such as Brie or Camembert contributes to their flavor and texture.

On the other hand, unwanted molds look fuzzy or powdery and may take on unusual colors. Greenish-black or reddish molds, sometimes caused by Aspergillus species, can be toxic and should be discarded. Also, species such as Penicillium commune produce cyclopiazonic acid, a mycotoxin that disrupts calcium flow across cell membranes, potentially impairing muscle and nerve function. At high enough levels, it may cause tremors or other nervous system symptoms. Fortunately, such cases are rare, and spoiled dairy products usually give themselves away by their sharp, sour, rank odor.

Mold is a crucial component of blue cheeses, adding a distinctive, tangy taste.
Peter Cade/Photodisc via Getty Images

As a general rule, discard soft cheeses such as ricotta, cream cheese and cottage cheese at the first sign of mold. Because these cheeses contain more moisture, the mold’s filaments can spread easily.

Hard cheeses, including cheddar, Parmesan and Swiss, are less porous. So cutting away at least one inch around the moldy spot is more of a safe bet – just take care not to touch the mold with your knife.

Meat

While molds are the primary concern for plant and dairy spoilage, bacteria are the main agents of meat decomposition. Telltale signs of meat spoilage include a slimy texture, discoloration that’s often greenish or brownish and a sour or putrid odor.

Some harmful bacteria do not produce noticeable changes in smell, appearance or texture, making it difficult to assess the safety of meat based on sensory cues alone. That stink, though, is caused by chemicals such as cadaverine and putrescine that are formed as meat decomposes, and they can cause nausea, vomiting and abdominal cramps, as well as headaches, flushing or drops in blood pressure.

Spoiled meats are rife with bacterial dangers. Escherichia coli, a common contaminant of beef, produces shiga toxin, which chokes off some cells’ ability to make proteins and can cause a dangerous kidney disease called hemolytic uremic syndrome. Poultry often carries the bacterium Campylobacter jejuni, which produces a toxin that invades gastrointestinal cells, often leading to diarrhea, abdominal cramps and fever. It can also provoke the body’s immune system to attack its own nerves, potentially sparking a rare condition called Guillain–Barré syndrome, which can lead to temporary paralysis.

Salmonella, found in eggs and undercooked chicken, is one of the most common types of food poisoning, causing diarrhea, nausea and abdominal cramps. It releases toxins into the lining of the small and large intestines that drive extensive inflammation. Clostridium perfringens also attacks the gut, but its toxins work by damaging cell membranes. And Clostridium botulinum, which can lurk in improperly stored or canned meats, produces botulinum toxin, one of the most potent biological poisons, lethal even in tiny amounts.

It is impossible for meat to be totally free of bacteria, but the longer it sits in your refrigerator – or worse, on your counter or in your grocery bag – the more those bacteria multiply. And you can’t cook the yuck away. Most bacteria die at meat-safe temperatures – between 145 and 165 degrees Fahrenheit (63-74 C) – but many bacterial toxins are heat stable and survive cooking.

The Conversation

Brad Reisfeld does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why you can salvage moldy cheese but never spoiled meat − a toxicologist advises on what to watch out for – https://theconversation.com/why-you-can-salvage-moldy-cheese-but-never-spoiled-meat-a-toxicologist-advises-on-what-to-watch-out-for-263908

Housing: municipal parties trapped in market logic

Source: The Conversation – in French – By Renaud Goyer, Professor of social policy and programs, École de travail social, Université du Québec à Montréal (UQAM)

With Quebec’s municipal elections just weeks away, scheduled for Nov. 3, housing has emerged as one of the central issues of the campaign. Amid an affordability crisis and rising evictions, municipal parties are outbidding one another with promises to increase the housing supply, but their proposals often remain captive to the same logic: relying on the market to solve a crisis it helped create.


This article is part of our series Nos villes d’hier à demain (Our cities, from yesterday to tomorrow). The urban fabric is undergoing many transformations, each with its cultural, economic, social and, especially in this election year, political implications. To shed light on these issues, The Conversation invites researchers to examine what is happening in our cities.


An ineffective bylaw

In 2005, well before Projet Montréal arrived at city hall, the municipal council (then led by Gérald Tremblay) adopted a policy of building social and affordable housing within private development projects, betting on negotiation with real estate developers. Once in office, Projet Montréal strengthened the policy by making the inclusion of affordable, family and/or social housing mandatory. Today, this policy is judged ineffective by every political party, including those that created it.

In fact, neither the inclusion policy nor the Règlement pour une métropole mixte (bylaw for a mixed metropolis) has produced affordable, family and/or social housing in sufficient numbers to meet needs. In 97% of cases, developers prefer to pay the meagre compensation provided for rather than build the units: barely 250 units are built annually, while housing starts represented at least 20 times as many units.

All the parties make the same criticism: the inclusion policy is said to be too constraining for real estate developers, who supposedly hesitate to launch new projects. This regulatory brake, in their view, has slowed development. Yet the numbers tell a very different story: since the bylaw was adopted, Montreal has seen record years for housing starts.

The grip of supply-side policy

In reality, this policy, like the current campaign proposals, rests on the same idea: to solve the housing crisis, it would suffice to build more, whatever the type of housing.

With that in mind, the parties are adopting the strategy of the Canada Mortgage and Housing Corporation (CMHC): make life easier for developers by reducing barriers to construction and “red tape.” Projet Montréal, for example, has announced the designation of “ready-to-build” zones, while Ensemble Montréal promises to build 50,000 housing units in five years by speeding up the permitting process.

When it comes to affordable or social housing, intentions remain vaguer. All parties want to increase their number (except Action Montréal), but few put forward concrete measures. The proposed solutions are mostly financial: municipal guarantees (Projet Montréal), a public-private fund for nonprofits to broaden the range of supply (Futur Montréal), or microcredit to protect vulnerable tenants (Ensemble Montréal). These approaches reveal a contradiction: they seek to mobilize market mechanisms to produce housing… outside the market.

This logic is not new, either. The earlier inclusion policies also relied on collaboration with the private sector to build social or affordable housing. Yet this model has helped marginalize such units within Montreal’s housing stock: the share of low-rent public housing (HLM), in particular, has declined over the past 10 years.




Read more:
Municipal elections: cities’ challenges are changing, but their powers are not


A crisis seen through the prism of homelessness

For most political parties, the current crisis is not first and foremost a housing crisis but a homelessness crisis. Homelessness is very real, of course, but it too often serves to divert attention from the broader problem: access to housing for the population as a whole. Few proposals aim to house the greatest number or to strengthen the stock of social and affordable housing.




This approach reveals a certain political unease. The parties struggle to defend social housing without invoking the figure of marginalized or unhoused people, implicitly presented as the city’s “undesirables” who must be kept out of sight. Housing policies thus become a tool for managing the visibility of poverty rather than a structural response to the crisis.

This shift probably explains the absence, in the current campaign, of any debate on urban cohabitation from the perspective of unhoused people. The silence is surprising, given that the Office de consultation publique de Montréal tabled a report on the question this summer. The commissioners first note that cohabitation issues stem from the housing crisis and feed the stigmatization, exclusion and criminalization of people experiencing homelessness. They then call on elected officials and candidates to show inclusive leadership on the issue.

Non-responsibility as modus operandi

Municipal elected officials are not taking up their responsibilities on cohabitation, housing and homelessness; they give the impression that these matters are not theirs to handle, and that responsibility lies instead with Quebec City or Ottawa.

Yet on housing as on homelessness, the municipal level can act. Indeed, the Transition Montréal party points out that the city now has new taxation powers to fund housing initiatives, even if the party remains vague on how it would use them.

Following the example of Vancouver, British Columbia, or even of Montreal in the 1980s, the city could become the prime contractor for housing projects through an organization that already exists: the Société d’habitation de Montréal. Created by the city, it owns 5,000 off-market housing units and could be mobilized, with indexed funding, to decommodify existing housing or build new units.

Such an approach would make it possible to diversify modes of intervention, which over the past 20 years have relied mainly on the private sector and the market, and to entrust the community sector with the task of managing the crisis and remedying it. At worst, it would open up the debate on cities’ powers over housing.

A few weeks from the vote, the choice taking shape is less one of political stripe than of the vision of the city’s role: mere facilitator of the market, or true prime contractor of housing?

La Conversation Canada

Renaud Goyer has received funding from the Social Sciences and Humanities Research Council of Canada.

Louis Gaudreau is a research associate at the Institut de recherche et d’informations socio-économiques (IRIS). He currently receives funding from SSHRC.

Léanne Tardif does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than her research organization.

ref. Logement : les partis municipaux prisonniers de la logique du marché – https://theconversation.com/logement-les-partis-municipaux-prisonniers-de-la-logique-du-marche-268069

Do you know the DBA, a degree that can help companies meet some of their challenges?

Source: The Conversation – France (in French) – By Michel Kalika, Professor Emeritus, iaelyon School of Management – Université Jean Moulin Lyon 3

Forget all your preconceptions about the doctorate. Different from a traditional doctorate, the Doctorate of Business Administration (DBA) can offer real help to companies. A closer look at this little-known degree, which builds bridges between the world of business and the world of research.


In a context marked by multiplying crises (geopolitical, environmental, economic, health-related, etc.), companies must rethink their decision-making processes and their business models. In doing so, they run into an obstacle: the past knowledge of employees and executives is becoming obsolete at an accelerating pace.

A number of managers who already hold MBAs or master’s degrees, programs that are highly professional but lack the step back essential to reflection and action, are therefore embarking on a doctoral path in search of new answers. Experience acquired in a sector or profession does indeed make it possible to take the measure of the changes under way.




Read more:
Research or innovation? With Cifre theses, no need to choose


Recently, a manager who sits on the executive committee of a large international electronics company contacted us: his leadership had given him the task of overhauling the business models of the company’s divisions to adapt them to inevitable climate change.

Two sub-questions then arise:

  • Is a doctoral path useful to a manager, and more generally to an organization?

  • Which doctoral path should one choose: the traditional doctorate (PhD) or the Doctorate of Business Administration (DBA)?

The value of a doctorate

On the first question, the experience of the authors, who together have supervised more than 60 doctoral theses, along with the recent white paper “La recherche en Management au bénéfice des entreprises” (“Management research for the benefit of companies”), which recounts the journeys and results of some 20 doctoral paths of working managers, provide an unambiguous answer. Managers with significant professional experience can benefit greatly from a door opened toward performance by undertaking a doctoral thesis on a subject linked to their practice.

The first DBA in France was created in 1993 by GEM, and the programs (about 20 today) have really taken off over the past decade or so. Candidates come from organizations both private and public. They hold very diverse positions; some are consultants. But they all share a common desire to step back and gain perspective.

This process allows them to take a useful step back from organizational routines. Moreover, the opportunity to work in close symbiosis for several years with professors accredited to supervise research is always fruitful. In this way, managerial experience and the richness of the field can be combined with the conceptual and methodological input of the supervisors.

Stepping back from experience

Answering the second question, that of the choice of format, first requires clarifying the differences between the traditional doctorate and the Doctorate of Business Administration, even if the two obviously share certain characteristics. Internationally, EQUAL (the international body that federates AACSB, AMBA, EFMD and some 20 academic associations) clearly recognizes the existence of two doctoral paths in its document “Guidelines for Doctoral Programmes in Business and Management.”

In the field of management, the traditional doctorate is aimed mainly at young researchers who are funded to complete a thesis and then join a higher education institution (university or school) full time. If, on the other hand, the doctoral candidate’s goal is to step back from their experience and capitalize on it by producing new knowledge useful to their organization or sector, all while remaining in work, the DBA appears better suited.

The educational structure also differs: the doctorate assumes full-time work, whereas the DBA is designed as a part-time program, compatible with professional activity. The two paths have in common that they rest on conceptual reflection and mobilize the existing literature, a research methodology and an analysis of field data. They differ, however, in the main objective of the thesis. The traditional doctorate pursues a primarily conceptual and academic purpose, embodied in publications in international scientific journals with a view to an academic career. The DBA, for its part, finds its importance in formulating managerial recommendations and creating the indispensable organizational and societal impact.

An amended role for the thesis supervisor

Moreover, in the traditional doctorate, the thesis supervisor plays a decisive role in the choice of subject, whereas in the DBA, it is the manager-candidate who arrives with their experience, their subject and their access to the field.

Fnege 2025.

Another distinguishing element is specific to the French context: the doctorate is a national degree prepared within university doctoral schools. The DBA, created by Harvard in 1953 and recognized internationally by the accreditation bodies (AACSB, AMBA, EFMD), remains in France an institution-specific degree, whether issued by a university or a school.

That said, trajectories are not always linear. Some holders of management doctorates join industry, while some DBA graduates move toward an academic career, either by extending their DBA with a doctorate or by rounding out their path with academic publications.

In light of international and national experience, the DBA can contribute very positively to meeting the current challenges of public and private organizations.

A recent survey of some 60 graduates from 2023-2024 indicates that the most frequently cited areas of impact are digital transformation (18%), change management (15%), strategic planning (30%) and organizational problem-solving (20%). This program indeed creates a bridge between two worlds that, regrettably, too often ignore each other: that of academic research and that of managerial practice.

A bridge already existed with Cifre theses, but those are aimed at recent graduates, who write their thesis in a company they are often just discovering, whereas the DBA thesis is aimed at a manager already working in the company that serves as their field of investigation.

The Conversation

Michel Kalika created the DBA at Université Paris Dauphine in 2008 and the Business Science Institute’s DBA in 2012. The two co-authors coordinated the Fnege white paper on the DBA.

Jean-Pierre Helfer does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research organization.

ref. Connaissez-vous le DBA, ce diplôme qui peut vous aider à relever certains défis de l’entreprise ? – https://theconversation.com/connaissez-vous-le-dba-ce-diplome-qui-peut-vous-aider-a-relever-certains-defis-de-lentreprise-265098

The Yuka case: when food information brings together trust, empowerment and algorithmic governance

Source: The Conversation – France in French (3) – By Jean-Loup Richet, Associate Professor of Information Systems, Université Paris 1 Panthéon-Sorbonne

Buoyed by the popularity of Yuka and Open Food Facts, food-scanning apps are enjoying real momentum. A study analyzes what drives the success of these digital tools, which provide nutrition information perceived as more independent than what appears on packaging, issued either by public authorities (for example, the Nutri-Score scale) or by the brands themselves.


Public trust in the authorities and in the major food manufacturers is eroding, and one phenomenon bears witness to it: the meteoric success of food-scanning apps. These digital tools, such as Yuka and Open Food Facts, offer an alternative to official nutrition labels by rating products on the basis of open, collaborative data; they are therefore perceived as more independent than the official systems.

As proof of their success, we learned in the autumn of 2025 that Yuka (created in France in 2017) is now hugely popular in the United States as well. Robert F. Kennedy Jr., health secretary in the Trump administration, is reportedly an avowed user.

A study of sources of nutrition information

The source of the information proves essential in an era of distrust. Our study, published in Psychology & Marketing, confirms it. In a first, exploratory phase, we interviewed 86 people about their use of food-scanning apps, which allowed us to confirm the enthusiasm for Yuka.

We then carried out a quantitative content analysis of more than 16,000 online reviews of Yuka specifically and, finally, measured the effect of two types of nutrition signals (labels displayed on the front of packaging, such as the Nutri-Score, versus ratings obtained through a food-scanning app such as Yuka).

The results of our study reveal that 77% of participants associate official nutrition labels (such as the Nutri-Score) with the big players of the agri-food industry, whereas only 27% perceive scanning apps as emanating from those dominant players.

Note that this perception can be far removed from reality. The Nutri-Score, for example, is not affiliated with the big food and retail brands. It was developed by the French Ministry of Health, drawing on the work of a public research team as well as on the expertise of the national agency for food, environmental and occupational health and safety (Anses) and the High Council for Public Health (HCSP).

What is the Nutri-Score?

  • The Nutri-Score is a logo displayed, on a voluntary basis, on food packaging to inform consumers about a product's nutritional quality.
  • The rating relies on a five-color scale running from dark green to dark orange. Each color is paired with a letter, from A to E.
  • The score is assigned according to the nutrients and foods to favor for their nutritional qualities (fiber, protein, fruit, vegetables, legumes) and those to limit (energy, saturated fatty acids, sugars, salt and, for beverages, sweeteners); a simplified sketch of this point system appears just below.
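
To make the point mechanism concrete, here is a minimal Python sketch of a Nutri-Score-style calculation. The thresholds are simplified approximations in the spirit of the originally published algorithm, not the official tables (which the March 2025 revision has, moreover, updated), so treat it as an illustration of the mechanism rather than the real formula.

    # Toy Nutri-Score-style calculator. Thresholds are simplified
    # approximations, NOT the official Sante publique France tables.
    def toy_nutri_score(energy_kj, sat_fat_g, sugars_g, salt_g,
                        fibre_g, protein_g, fruit_veg_pct):
        """Return a letter A-E for a product, per 100 g (simplified)."""
        # Points for components to limit.
        negative = (min(int(energy_kj / 335), 10)     # energy density
                    + min(int(sat_fat_g / 1.0), 10)   # saturated fat
                    + min(int(sugars_g / 4.5), 10)    # sugars
                    + min(int(salt_g / 0.225), 10))   # salt
        # Points for components to favor.
        positive = (min(int(fibre_g / 0.9), 5)        # fiber
                    + min(int(protein_g / 1.6), 5)    # protein
                    + (5 if fruit_veg_pct > 80        # fruit, vegetables, legumes
                       else 2 if fruit_veg_pct > 60 else 0))
        score = negative - positive
        # Map the raw score onto the five-letter, five-color scale.
        for letter, ceiling in (("A", -1), ("B", 2), ("C", 10), ("D", 18)):
            if score <= ceiling:
                return letter
        return "E"

    # Example: a sweetened breakfast cereal (values per 100 g).
    print(toy_nutri_score(energy_kj=1600, sat_fat_g=1.5, sugars_g=25,
                          salt_g=0.6, fibre_g=6, protein_g=8, fruit_veg_pct=0))

Among other things, the official algorithm also restricts protein points for products that score badly on the negative components, a rule this sketch omits.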

Open Food Facts (created in France in 2012), for its part, comes across as a collaborative project with a non-profit association at the helm. As for the Yuka app, it was created by a start-up.

Nutrition apps perceived as more independent

These apps are seen as linked to smaller entities, which therefore appear more independent. This difference in the perceived source creates a genuine trust gap between the two types of signals. The most distrustful consumers prove more inclined to rely on an independent app than on a label affixed by industry or by the government (the Nutri-Score), thereby granting the former a trust advantage.

This phenomenon, comparable to a "David versus Goliath" effect, illustrates how distrust of public authorities and big business alike fuels the success of solutions perceived as more neutral. More broadly, in a climate where rumors and disinformation thrive, many people prefer the perceived transparency of a citizen-driven app to official communications.

A participatory dimension and an "activist streak"

Beyond the question of trust, the appeal of scanning apps also lies in the empowerment they give their users. Consumer empowerment translates into a heightened sense of control, a better understanding of one's environment and more active participation in decisions. By scanning a product to obtain an instant rating, citizens take back control of their diet instead of passively receiving the information supplied by the manufacturer.

This participatory dimension even has what looks like an activist streak: Yuka, for example, is often presented as the weapon of the "little consumer" against the "agro-industrial giant." In doing so, scanning apps help empower consumers, who can then challenge marketing messages and demand accountability on product quality.

Questions of algorithmic governance

Nevertheless, this empowerment comes with new questions of algorithmic governance. The power to evaluate products is shifting from the traditional actors to these platforms and their algorithms. Who defines the criteria of the nutrition score? How transparent is the calculation method? These apps concentrate a growing informational power: with a single score, they can sway a brand's image, especially for brands of modest renown that cannot counter a poor nutrition rating.

Guaranteeing the security and integrity of the information they provide therefore becomes an essential issue. As the public places its trust in these new tools, it matters that their algorithms remain reliable, impartial and accountable. Failing that, the hope of better-informed consumption could be betrayed by an excess of unchecked technological power.

By way of example, the algorithm underlying the Nutri-Score is re-evaluated, in full transparency, as knowledge advances about the health effects of certain nutrients. A new version of the Nutri-Score algorithm thus came into force in March 2025.

The rise of food-scanning apps reflects a loss of trust in institutions, but also an aspiration to more transparent and participatory information. Far from being mere gadgets, these apps can serve as a useful complement to public health policies (not a substitute for them!) in rebuilding trust with consumers.

By giving power back to citizens while rigorously overseeing the reliability of the algorithms, it is possible to combine digital innovation with the public interest. Reconciling independent information with responsible governance will play a key role in ensuring that, tomorrow, trust and informed choices go hand in hand.

The Conversation

Marie-Eve Laporte has received funding from the French National Research Agency (ANR).

Béatrice Parguel, Camille Cornudet, Fabienne Berger-Remy and Jean-Loup Richet do not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and have disclosed no affiliations other than their academic positions.

ref. The Yuka case: when food information brings together trust, "empowerment" and algorithmic governance – https://theconversation.com/le-cas-yuka-quand-linformation-sur-les-aliments-convoque-confiance-empowerment-et-gouvernance-algorithmique-267489

Xi-Trump summit: Trade, Taiwan and Russia still top agenda for China and US presidents – 6 years after last meeting

Source: The Conversation – Global Perspectives – By Rana Mitter, Professor of U.S.-Asia Relations, Harvard Kennedy School

Six years have passed since presidents Xi Jinping and Donald Trump last met, but the substance of discussions remains largely the same. Back in 2019, trade and Taiwan also rode high on the agenda.

Ahead of the pair’s expected meeting on Oct. 30, 2025, Trump also indicated he wants to enlist China’s help in bringing Russia to the peace table – adding a third weighty issue for the two men to chat about.

But how has the needle moved on these three issues – trade, Taiwan and China-Russia relations – since the last meeting between Trump and Xi? Rana Mitter, professor of U.S.-Asia relations at Harvard Kennedy School, explains what has changed since 2019 and the geopolitical background to the upcoming bilateral talks.

Taiwan: US hawks in retreat

Compared with where the two countries stood in 2019, the biggest open question is whether the U.S. has softened its position on Taiwan.

In the first Trump administration, Taiwan policy was shaped by figures such as Secretary of State Mike Pompeo, who were decidedly hawkish on China and the issue of Taiwan. The U.S. seemed at the time to be pushing to bolster its assurance – while stopping short of a commitment – that it would help Taiwan pursue a path of autonomy, though not outright independence.

During the Biden administration, the U.S. position on Taiwan was shaped by other, wider China-U.S. events, such as the controversial visit to Taiwan by then-House Speaker Nancy Pelosi in 2022 and the Chinese spy balloon episode that followed in early 2023 – both of which damaged Washington-Beijing relations and produced an uptick in tensions across the Taiwan Strait.

A pro-China supporter steps on a defaced photo of U.S. House of Representatives Speaker Nancy Pelosi during a protest in Hong Kong against her visit to Taiwan on Aug. 3, 2022.
Anthony Kwan/Getty Images

Trump’s current secretary of state, Marco Rubio, has also traditionally been very hawkish on Taiwan – but there is a wider sense that this hawkish approach isn’t dominant in the second Trump administration.

Much of this centers on Trump himself and questions over whether he is looking to find a different compromise agreement with China that includes the U.S. stance on Taiwan.

Evidence of this could be seen earlier this year, when the Trump administration prevented Taiwan President William Lai Ching-te from stopping off in New York on his way to Central and South America – something that could be interpreted as a concession to Beijing. Similarly, over the summer the Trump administration nixed US$400 million of U.S. weapons earmarked for Taiwan.

The other main difference now, compared with when Xi and Trump last met, is that they are dealing with a politically different Taiwan. In 2019, the U.S. and China were dealing with Taiwanese President Tsai Ing-wen, who had a practical and flexible approach to the issue of Taiwanese independence – something that Beijing vehemently opposes.

The new Taiwanese president, Lai Ching-te, hasn't pushed for outright independence, but many analysts say he is keener to stress Taiwan's separateness from the mainland. That is a position the U.S. does not want to give any signal of supporting.

Meanwhile, Beijing has continued to push hard on Taiwan – days before the Trump-Xi meeting, Chinese state media announced that “confrontation drills” involving Chinese H-6K bombers had taken place near Taiwan.

But this is typical. The Chinese government has traditionally pushed a maximalist line on Taiwan before meetings and then scaled back rhetoric during negotiations.

So what does Beijing want? In recent weeks and months, the Chinese Communist Party has indicated that it would like Washington’s phrasing on Taiwan to change from “the U.S. does not support independence” to “the U.S. opposes independence.”

But I would not expect any move from Washington on this in the short term. The preferred settlement on Taiwan for the short to medium term is the status quo. However, that becomes harder and harder to maintain as China steps up its presence in the air and waters around Taiwan.

Trade: Trump tools are blunted

In 2019, the U.S. and China were in the process of working out a “phase one” economic and trade agreement, which was supposed to develop into a much bigger deal.

But the wider deal didn’t come about. Both sides were finding it hard to achieve the terms of the deal, and then the pandemic in 2020 threw global trade and supply chains out of kilter.

We are now in a very different tariff environment than during the first Trump administration – tariffs are now universal, and Trump wants everyone to pay them.

That creates, in the short term, a harder negotiating position for Trump – there is less incentive for U.S. allies to help pressure China with additional restrictions of their own. Take the U.K., for example. In the first Trump administration, a succession of phone calls from the White House pressured the Boris Johnson government into banning the Chinese giant Huawei from a slice of the U.K. telecommunications market. But at that point there was no U.S.-imposed 10% tariff on the U.K. And while 10% is low compared with the tariffs imposed elsewhere, it is still an obstacle when trying to rally allies and partners against China.

And compared with 2019, the vulnerability of supply chains has become even more apparent. We have seen evidence of that in China's moves to restrict exports of rare earth materials. In the intervening years, Beijing has embedded itself even more deeply in global supply chains – which also makes it harder for Trump to pressure American companies.

Take Apple. It has, under pressure from the Trump administration, moved more of its production of iPhones to India – a rival to China. But in practice, iPhone component production and assembly still take place in China – as no other place can do the job with such precision and volume.

Russia: China continues balancing act

China’s approach to its relationship with Russia hasn’t really changed since the first Trump term – Beijing still makes its decisions on Russia with little regard to what the U.S. thinks.

Of course, Russia did not launch its full-scale invasion of Ukraine until 2022 – three years after Xi and Trump last met. But by then there had already been the annexation of Crimea in 2014 and the invasion of Georgia in 2008.

China didn't condemn Russia for those actions, but it conspicuously abstained on them at the United Nations. And it has never recognized Russia's annexation of those areas.

Similarly today, Beijing has never acknowledged Russia’s claims over the parts of eastern Ukraine it occupies.

So China has continued its balanced, cautious position. Its priority is not to offend Russia, which it increasingly eyes as a key market for Chinese goods. It supplies Russia with technology that has dual-use capability useful to the military sector, and it buys Russian oil – but it drives a hard bargain. These are no "mates' rates."

China wants nothing to disturb that trade, so it was at first suspicious of, and then relieved by, the relative warmth of the Trump administration toward Russia.

As for the war itself, China evidently understands that Russia may not win it but is able to keep it going – and that suits Beijing just fine. An isolated Russia, dependent on Chinese goods, works to Beijing's benefit.

The Conversation

Rana Mitter does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Xi-Trump summit: Trade, Taiwan and Russia still top agenda for China and US presidents – 6 years after last meeting – https://theconversation.com/xi-trump-summit-trade-taiwan-and-russia-still-top-agenda-for-china-and-us-presidents-6-years-after-last-meeting-268471

How the explosion of prop betting threatens the integrity of pro sports

Source: The Conversation – USA (2) – By John Affleck, Knight Chair in Sports Journalism and Society, Penn State

Miami Heat guard Terry Rozier was one of 34 people arrested as part of a wide-ranging investigation into illegal gambling. Scott Taetsch/Getty Images

When I first heard about the arrests of Portland Trail Blazers coach Chauncey Billups, Miami Heat guard Terry Rozier and former NBA player Damon Jones in connection to federal investigations involving illegal gambling, I couldn’t help but think of a recent moment in my sports writing class.

I was showing my students a clip from an NFL game between the Jacksonville Jaguars and Kansas City Chiefs. Late in the game, Jaguars quarterback Trevor Lawrence threw a perfect pass to receiver Brian Thomas Jr. to secure a critical first down. Out of the blue, a student groaned and said that he'd lost US$50 on that throw.

I thought of that moment because it revealed how ubiquitous sports betting has become, how much the types of bets have changed over time, and – given these trends – how it’s naive to think players won’t continue to be tempted to game the system.

The prop bet hits it big

I’ve been following the evolution of sports gambling for about a decade in my position as chair of Penn State’s sports journalism program.

Back when legal American sports betting was mostly confined to Las Vegas, the standard bets tended to be tied to picking a winner or which team would cover a point spread.

But ahead of the 1986 Super Bowl between the Chicago Bears and the overmatched New England Patriots, casinos offered bets on whether Bears defensive lineman – and occasional running back – William “Refrigerator” Perry would score a touchdown. The excitement around that sideshow kept fan interest going during a 46-10 blowout.

Perry did end up scoring, and the prop bet took off from there.

Prop bets are wagers that depend on an outcome within a game but not its final result. They can often involve an athlete’s individual performance in some statistical category – for instance, how many yards a running back will rush for, how many rebounds a basketball center will secure, or how many strikeouts a pitcher will have. They’ve become routine offerings on sports betting menus.

For example: As I write this, I am looking at a FanDuel account I opened years ago, seeing that, for the Green Bay Packers-Pittsburgh Steelers game currently in progress, I can place a wager on which player will score a touchdown, how many yards each quarterback will throw for and much, much more. As the game progresses, the odds constantly shift – allowing for what are called “live bets.”

Returning to my student who lost the bet on Lawrence’s pass completion: It’s possible he’d placed a bet on Lawrence to throw fewer than a set number of yards. Or he could have been part of a fantasy league, which is also dependent on individual player performances.

Either way, a problem with prop bets, from an anti-corruption perspective, is that an individual can often control the outcome. You don’t need a group of players to be in on it – which is what happened during the infamous Black Sox Scandal, when eight players on the Chicago White Sox were accused of conspiring with gamblers to intentionally lose the 1919 World Series.

In the indictment against him, Rozier is accused of telling a co-defendant to pass along information to particular bettors that he planned to leave a March 2023 game early – a move everyone involved knew meant he would not reach his statistical benchmarks for the game. They could then place bets that he wouldn’t hit those marks.

In baseball, meanwhile, Luis Ortiz of the Cleveland Guardians was placed on leave during the 2025 season and is under investigation for possibly illegally wagering on the outcome of two pitches he threw. MLB authorities are essentially trying to determine if he deliberately threw balls as opposed to strikes in two instances. (Yes, prop bets have become so granular that you can even bet on whether a pitcher will throw a ball or a strike on an individual pitch.)

An exploding market with no end in sight

The popularity of prop bets feeds into a worldwide sports gambling industry that has experienced explosive growth and shows no sign of slowing.

Since the U.S. Supreme Court ruled in 2018 that states could decide whether to allow sports betting, 39 states plus the District of Columbia have done so.

The leagues and media are more than just bystanders. FanDuel and DraftKings are official sports betting partners of the NBA and the NFL.

In the days after the Supreme Court ruling, I wondered whether journalists would embrace sports betting. These days, ESPN not only has a betting show, but it also has a betting app.

According to the American Gaming Association, sportsbooks collected a record $13.71 billion in revenue in 2024 from about $150 billion in wagers. A study released in February 2025 by Siena and St. Bonaventure universities found that nearly half of American men have an online sports betting account.

But those figures don't begin to touch the worldwide sports betting market, especially the illegal one. The United Nations estimated in a 2021 report that up to $1.7 trillion is wagered annually in illegal betting markets.

The U.N. report warned that it had found a “staggering scale, manifestation, and complexity of corruption and organized crime in sport at the global, regional, and national levels.”

Who’s the boss?

In early October 2025, I attended a conference of Play the Game, a Denmark-based organization that promotes “democratic values in world sports.” Its occasional gatherings attract experts from around the world who are interested in keeping sports fair and safe for everyone.

One of the most sobering topics was illegal, online sportsbooks that feature wagering on all levels of sport, from the lowest levels of European soccer on up.

It sounded somewhat familiar. This summer at the Little League World Series, which my students covered for The Associated Press, managers complained about offshore sportsbooks offering lines on the tournament, which is played by 12-year-old amateurs.

And with so much illegal wagering in the world, the issue of match fixing was bound to come up.

One session screened a recent German documentary on match fixing. Meanwhile, Anca-Maria Gherghel, a Ph.D. candidate at Sheffield Hallam University and senior researcher for EPIC Global Solutions, both in northern England, told me how she had interviewed a professional female soccer player for a team in Cyprus. The player described how she and her teammates were routinely approached with lucrative offers to throw matches.

Put it all together – the vast sums of money at play and the relative ease of fixing a prop bet, let alone a match – and you cannot be surprised at the NBA scandal.

I used to think that gambling was just a segment of the larger sports industry. Now, I wonder whether I had it exactly backward.

Has sports just become a segment of the larger gambling industry?

The Conversation

John Affleck does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How the explosion of prop betting threatens the integrity of pro sports – https://theconversation.com/how-the-explosion-of-prop-betting-threatens-the-integrity-of-pro-sports-268340