AI chatbots are becoming everyday tools for mundane tasks, use data shows

Source: The Conversation – USA – By Jeanne Beatrix Law, Professor of English, Kennesaw State University

The average person is more likely to use AI to come up with a meal plan than program a new app. Oscar Wong/Moment via Getty Images

Artificial intelligence is fast becoming part of the furniture. More than a decade after IBM’s Watson triumphed on “Jeopardy!,” generative AI models are in kitchens and home offices. People often talk about AI in science fiction terms, yet the most consequential change in 2025 may be its banal ubiquity.

To appreciate how ordinary AI use has become, it helps to remember that this trend didn’t start with generative chatbots. A 2017 Knowledge at Wharton newsletter documented how deep learning algorithms were already powering chatbots on social media and photo apps’ facial recognition functions. Digital assistants such as Siri and Alexa were performing everyday tasks, and AI-powered image generators could create images that fooled 40% of viewers.

When ChatGPT became publicly available on Nov. 30, 2022, the shift felt sudden, but it was built on years of incremental integration. AI’s presence is now so mundane that people consult chatbots for recipes, use them as study partners and rely on them for administrative chores. As a writer and professor who studies ways that generative AI can be an everyday collaborator, I find that recent usage reports show how AI has been woven into everyday life. (Full disclosure: I am a member of OpenAI’s Educator Council, an uncompensated group of higher education faculty who provide feedback to OpenAI on educational use cases.)

Who’s using ChatGPT and why?

Economists at OpenAI and Harvard analyzed 1.5 million ChatGPT conversations from November 2022 through July 2025. Their findings show that adoption has broadened beyond early users: It’s being used all over the world, among all types of people. Adoption has grown fastest in low- and middle-income countries, and growth rates in the lowest-income countries are now more than four times those in the richest nations.

Most interactions revolve around mundane activities. Three-quarters of conversations involve practical guidance, information seeking and writing. These categories cover activities such as getting advice on how to cook an unusual type of food, finding the nearest pharmacy and getting feedback on email drafts. More than 70% of ChatGPT use is for nonwork tasks, demonstrating AI’s role in people’s personal lives. The economists found that 73% of messages were not related to work as of June 2025, up from 53% in June 2024.

Claude and the geography of adoption

Anthropic’s economic index paints a similar picture of uneven AI adoption. Researchers at the company tracked users’ conversations with the company’s Claude AI chatbot relative to working-age population. The data shows sharp contrasts between nations. Singapore’s per-capita use is 4.6 times higher than expected based on its population size, and Canada’s is 2.9 times higher. India and Nigeria, meanwhile, use Claude at only a quarter of predicted levels.

In the United States, use reflects local economies, with activity tied to regional strengths: tech in California, finance in Florida and documentation in D.C. In lower-use countries, more than half of Claude’s activity involves programming. In higher-use countries, people apply it across education, science and business. High-use countries favor humans working iteratively with AI, such as refining text, while low-use countries rely more on delegating full tasks, such as finding information.

For context, OpenAI reports between 400 million and 700 million weekly active users in 2025, while third-party analytics estimate Claude at roughly 30 million monthly active users during a similar period. For comparison, Gemini had approximately 350 million monthly active users, and Microsoft reported more than 100 million monthly active users for its Copilot apps in July 2025. Perplexity’s CEO reported in an interview that the company’s language AI has a “user base of over 30 million active users.”

While these metrics are from a similar time period, mid-2025, it’s important to note the differences in reporting and metrics, particularly weekly versus monthly active users. By any measure, though, ChatGPT’s user base is by far the largest, making it a commonly used generative AI tool for everyday tasks.

Everyday tool

So, what do mundane uses of AI look like at home? Consider these scenarios:

  • Meal planning and recipes: A parent asks ChatGPT for vegan meal ideas that use leftover kale and mushrooms, saving time and reducing waste.
  • Personal finance: ChatGPT drafts a budget, suggests savings strategies or explains the fine print of a credit card offer, translating legalese into plain language.
  • Writing support: Neurodivergent writers use ChatGPT to organize ideas and scaffold drafts. A writer with ADHD can upload notes and ask the model to group them into themes, then expand each into a paragraph while keeping the writer’s tone and reasoning. This helps reduce cognitive overload and supports focus, while the writer retains their own voice.

These scenarios illustrate that AI can help with mundane decisions, act as a sounding board and support creativity. The help with mundane tasks can be a big lift: By handling routine planning and information retrieval, AI frees people to focus on empathy, judgment and reflection.

From extraordinary to ordinary tool

AI has transitioned from a futuristic curiosity to an everyday co-pilot, with voice assistants and generative models helping people write, cook and plan.

Inviting AI to our kitchen tables not as a mysterious oracle but as a helpful assistant means cultivating AI literacy and learning prompting techniques. It means recognizing AI’s strengths, mitigating its risks and shaping a future where intelligence — human and artificial — works for everyone.

The Conversation

Jeanne Beatrix Law serves on the OpenAI Educator Council, an uncompensated group of higher education faculty who provide feedback to OpenAI on educational use cases and occasionally tests models for those use cases.

ref. AI chatbots are becoming everyday tools for mundane tasks, use data shows – https://theconversation.com/ai-chatbots-are-becoming-everyday-tools-for-mundane-tasks-use-data-shows-266670

Solar storms have influenced our history – an environmental historian explains how they could also threaten our future

Source: The Conversation – USA – By Dagomar Degroot, Associate Professor of Environmental History, Georgetown University

Coronal mass ejections from the Sun can cause geomagnetic storms that may damage technology on Earth. NASA/GSFC/SDO

In May 2024, part of the Sun exploded.

The Sun is an immense ball of superheated gas called plasma. Because the plasma is conductive, magnetic fields loop out of the solar surface. Since different parts of the surface rotate at different speeds, the fields get tangled. Eventually, like rubber bands pulled too tight, they can snap – and that is what they did last year.

These titanic plasma explosions, also known as solar flares, each unleashed the energy of a million hydrogen bombs. Parts of the Sun’s magnetic field also broke free as magnetic bubbles loaded with billions of tons of plasma.

These bubbles, called coronal mass ejections, or CMEs, crashed through space at around 6,000 times the speed of a commercial jetliner. After a few days, they smashed one after another into the magnetic field that envelops Earth. The plasma in each CME surged toward us, creating brilliant auroras and powerful electrical currents that rippled through Earth’s crust.

A coronal mass ejection erupting from the Sun.

You might not have noticed. Just like the opposite poles of fridge magnets have to align for them to snap together, the poles of the magnetic field of Earth and the incoming CMEs have to line up just right for the plasma in the CMEs to reach Earth. This time they didn’t, so most of the plasma sailed off into deep space.

Humans have not always been so lucky. I’m an environmental historian and author of the new book “Ripples on the Cosmic Ocean: An Environmental History of Our Place in the Solar System.”

While writing the book, I learned that a series of technological breakthroughs – from telegraphs to satellites – have left modern societies increasingly vulnerable to the influence of solar storms, meaning flares and CMEs.

Since the 19th century, these storms have repeatedly upended life on Earth. Today, there are hints that they threaten the very survival of civilization as we know it.

The telegraph: A first warning

On the morning of Sept. 1, 1859, two young astronomers, Richard Carrington and Richard Hodgson, became the first humans to see a solar flare. To their astonishment, it was so powerful that, for two minutes, it far outshone the rest of the Sun.

About 18 hours later, brilliant, blood-red auroras flickered across the night sky as far south as the equator, while newly built telegraph lines shorted out across Europe and the Americas.

The Carrington Event, as it was later called, revealed that the Sun’s environment could violently change. It also suggested that emerging technologies, such as the electrical telegraph, were beginning to link modern life to the extraordinary violence of the Sun’s most explosive changes.

For more than a century, these connections amounted to little more than inconveniences, like occasional telegraph outages, partly because no solar storm rivaled the power of the Carrington Event. But another part of the reason was that the world’s economies and militaries were only gradually coming to rely more and more on technologies that turned out to be profoundly vulnerable to the Sun’s changes.

A brush with Armageddon

Then came May 1967.

Soviet and American warships collided in the Sea of Japan, American troops crossed into North Vietnam and the Middle East teetered on the brink of the Six-Day War.

It was only a frightening combination of new technologies that kept the United States and Soviet Union from all-out war; nuclear missiles could now destroy a country within minutes, but radar could detect their approach in time for retaliation. A direct attack on either superpower would be suicidal.

Several buildings on an icy plain, with green lights in the sky above.
An aurora – an event created by a solar storm – over Pituffik Space Base, formerly Thule Air Base, in Greenland in 2017. In 1967, nuclear-armed bombers prepared to take off from this base.
Air Force Space Command

Suddenly, on May 23, a series of violent solar flares blasted the Earth with powerful radio waves, knocking out American radar stations in Alaska, Greenland and England.

Forecasters had warned officers at the North American Air Defense Command, or NORAD, to expect a solar storm. But the scale of the radar blackout convinced Air Force officers that the Soviets were responsible. It was exactly the sort of thing the USSR would do before launching a nuclear attack.

American bombers, loaded with nuclear weapons, prepared to retaliate. The solar storm had so scrambled their wireless communications that it might have been impossible to call them back once they took off. In the nick of time, forecasters used observations of the Sun to convince NORAD officers that a solar storm had jammed their radar. We may be alive today because they succeeded.

Blackouts, transformers and collapse

With that brush with nuclear war, solar storms had become a source of existential risk, meaning a potential threat to humanity’s existence. Yet the magnitude of that risk only came into focus in March 1989, when 11 powerful flares preceded the arrival of back-to-back coronal mass ejections.

For more than two decades, North American utility companies had constructed a sprawling transmission system that relayed electricity from power plants to consumers. In 1989, this system turned out to be vulnerable to the currents that coronal mass ejections channeled through Earth’s crust.

Several large pieces of metal machinery lined up in an underground facility.
An engineer performs tests on a substation transformer.
Ptrump16/Wikimedia Commons, CC BY-SA

In Quebec, the crystalline bedrock beneath the province does not easily conduct electricity. Rather than flowing through the rock, the currents surged into the world’s biggest hydroelectric transmission system. It collapsed, leaving millions without power in subzero weather.

Repairs revealed something disturbing: The currents had damaged multiple transformers, which are enormous customized devices that transfer electricity between circuits.

Transformers can take many months to replace. Had the 1989 storm been as powerful as the Carrington Event, hundreds of transformers might have been destroyed. It could have taken years to restore electricity across North America.

Solar storms: An existential risk

But was the Carrington Event really the worst storm that the Sun can unleash?

Scientists assumed that it was until, in 2012, a team of Japanese scientists found evidence of an extraordinary burst of high-energy particles in the growth rings of trees dated to the eighth century CE. The leading explanation for them: huge solar storms dwarfing the Carrington Event. Scientists now estimate that these “Miyake Events” happen once every few centuries.

Astronomers have also discovered that Sun-like stars can erupt, roughly once a century, in super flares up to 10,000 times more powerful than the strongest solar flares ever observed. Because the Sun is older and rotates more slowly than many of these stars, its super flares may be much rarer, occurring perhaps once every 3,000 years.

Nevertheless, the implications are alarming. Powerful solar storms once influenced humanity only by creating brilliant auroras. Today, civilization depends on electrical networks that allow commodities, information and people to move across our world, from sewer systems to satellite constellations.

What would happen if these systems suddenly collapsed on a continental scale for months, even years? Would millions die? And could a single solar storm bring that about?

Researchers are working on answering these questions. For now, one thing is certain: to protect these networks, scientists must monitor the Sun in real time. That way, operators can reduce or reroute the electricity flowing through grids when a CME approaches. A little preparation may prevent a collapse.

Fortunately, satellites and telescopes on Earth today keep the Sun under constant observation. Yet in the United States, recent efforts to reduce NASA’s science budget have cast doubt on plans to replace aging Sun-monitoring satellites. Even the Daniel K. Inouye Solar Telescope, the world’s premier solar observatory, may soon shut down.

These potential cuts are a reminder of our tendency to discount existential risks – until it’s too late.

The Conversation

Dagomar Degroot has received funding from NASA.

ref. Solar storms have influenced our history – an environmental historian explains how they could also threaten our future – https://theconversation.com/solar-storms-have-influenced-our-history-an-environmental-historian-explains-how-they-could-also-threaten-our-future-258668

The Glozel affair: A sensational archaeological hoax made science front-page news in 1920s France

Source: The Conversation – USA – By Daniel J. Sherman, Lineberger Distinguished Professor of Art History and History, University of North Carolina at Chapel Hill

All eyes were on a commission of professional archaeologists when they visited Glozel. Agence Meurisse/BnF Gallica

In early November 1927, the front pages of newspapers all over France featured photographs not of the usual politicians, aviators or sporting events, but of a group of archaeologists engaged in excavation. The slow, painstaking work of archaeology was rarely headline news. But this was no ordinary dig.

yellowed newspaper page with photos of archaeologists at dig site
A front-page spread in the Excelsior newspaper from Nov. 8, 1927, features archaeologists at work in the field with the headline ‘What the learned commission found at the Glozel excavations.’
Excelsior/BnF Gallica

The archaeologists pictured were members of an international team assembled to assess the authenticity of a remarkable site in France’s Auvergne region.

Three years before, farmers plowing their land at a place called Glozel had come across what seemed to be a prehistoric tomb. Excavations by Antonin Morlet, an amateur archaeologist from Vichy, the nearest town of any size, yielded all kinds of unexpected objects. Morlet began publishing the finds in late 1925, immediately producing lively debate and controversy.

Certain characteristics of the site placed it in the Neolithic era, approximately 10,000 B.C.E. But Morlet also unearthed artifact types thought to have been invented thousands of years later, notably pottery and, most surprisingly, tablets or bricks with what looked like alphabetic characters. Some scholars cried foul, including experts on the inscriptions of the Phoenicians, the people thought to have invented the Western alphabet no earlier than 2000 B.C.E.

Was Glozel a stunning find with the capacity to rewrite prehistory? Or was it an elaborate hoax? By late 1927, the dispute over Glozel’s authenticity had become so strident that an outside investigation seemed warranted.

The Glozel affair now amounts to little more than a footnote in the history of French archaeology. As a historian, I first came across descriptions of it in histories of the discipline. With a bit of investigating, it wasn’t hard to find first-person accounts of the affair.

sketch of seven lines of alphabet-like notations on two rectangles
Examples of the kinds of inscriptions found at the Glozel site, as recorded by scholar Salomon Reinach.
‘Éphémérides de Glozel’/Wikimedia Commons

But it was only when I began studying the private papers of one of the leading contemporary skeptics of Glozel, an archaeologist and expert on Phoenician writing named René Dussaud, that I realized the magnitude and intensity of this controversy. After publishing a short book showing that the so-called Glozel alphabet was a mishmash of previously known early alphabetic writing, in October 1927 Dussaud took out a subscription to a clipping service to track mentions of the Glozel affair; in four months he received over 1,500 clippings, in 10 languages.

The Dussaud clippings became the basis for the account of Glozel in my recent book, “Sensations.” That the contours of the affair first became clear to me in a pile of yellowed newspaper clippings is appropriate, because Glozel embodies a complex relationship between science and the media that persists today.

Front page of a newspaper with images of people digging and holding up finds
The newspaper Le Matin, which vigorously promoted Glozel’s authenticity, even sponsored its own dig near the site, led by a journalist.
Le Matin/BnF Gallica

Serious scientists in the trenches

The international commission’s front-page visit to Glozel marked a watershed in the controversy, even if it did not resolve it entirely.

In a painstaking report published in the scholarly Revue anthropologique just before Christmas 1927, the commission recounted the several days of digging it conducted, provided detailed plans of the site, described the objects it unearthed and carefully explained its conclusion that the site was “not ancient.”

shelves with various clay vessels and shards piled on them
Recovered objects displayed in the Fradins’ museum in 1927.
Agence de presse Meurisse/Wikimedia Commons

The report emphasized the importance of proper archaeological method. Early on, the commissioners noted that they were “experienced diggers, all with past fieldwork to their credit,” in different chronological subfields of archaeology. In contrast, they noted that the Glozel site showed clear signs of a lack of order and method.

In their initial meeting in Vichy, the assembled archaeologists agreed that they would give no interviews during their visit to Glozel and would not speak to the press afterward. But, aware of “certain tendentious articles published by a few newspapers,” the visitors issued a communiqué stating that they would neither confirm nor deny any press reports. Their scholarly publication would be their final word on the “non-ancientness” of the site.

The distinction between true science – what the archaeologists were practicing – and the media seemed absolute.

Sensationalist coverage, but careful details, too

And yet matters were not so simple.

Many newspapers devoted extensive and careful coverage to Glozel. They offered explanations of archaeological terminology. They explained the larger stakes of the controversy, which, beyond the invention of the alphabet, involved nothing less than the direction of the development of Western civilization itself, whether from Mesopotamia in the east to Europe in the west or the reverse.

Even articles about seemingly trivial matters, such as the work clothes the archaeologists donned to perform their test excavations at Glozel, served to reinforce the larger point the commissioners made in their report. In contrast to the proper suits and ties they wore for formal photographs marking their arrival, the visitors all put on blue overalls, which for one newspaper “gave them the air of apprentice locksmiths or freshly decked-out electricians.”

The risk, apparent in this jocular reference, of losing the social standing afforded them by their professional degrees and education was worth taking because it drove home these archaeologists’ devotion to their discipline, which their report described as “a daily moral obligation.”

seven people dressed formally standing against a building
Morlet, far left, and the international commission in front of the Fradins’ museum in November 1927. Garrod is third from the left.
Agence Meurisse

Skeptical scientists did rely on journalism

If archaeologists continued to mistrust the many newspapers that sensationalized Glozel, its stakes and their work in general, they could not escape the popular media entirely, so they confided in a few journalists at papers they considered responsible.

Shortly after the publication of the report, which was summarized and excerpted in the daily press, original excavator Morlet accused Dorothy Garrod, the only woman on the commission, of having tampered with the site. A group of archaeologists responded on her behalf, explaining what she had actually been doing and defending her professionalism – in the press.

At the most basic level, media coverage recorded the standard operating procedures of archaeology and its openness to outside scrutiny. This was in contrast to Morlet’s excavations, which limited access only to believers in the authenticity of Glozel.

Under the watchful eyes of reporters and photographers, the outside archaeologists investigating Glozel knew quite well that they were engaged in a kind of performance, one in which their discipline, as much as this particular discovery, was on trial.

Like the signs in my neighborhood proclaiming that “science is real,” the international commission depended on and sought to fortify the public’s confidence in the integrity of scientific inquiry. To do that, it needed the media even while expressing a healthy skepticism about it. It’s a balancing act that persists in today’s era of “trusting science.”

The Conversation

This article draws on research funded by the Institut d’Études Avancées (Paris), the Institute for Advanced Study (Princeton), and the National Endowment for the Humanities, as well as Daniel Sherman’s employer, the University of North Carolina at Chapel Hill.

ref. The Glozel affair: A sensational archaeological hoax made science front-page news in 1920s France – https://theconversation.com/the-glozel-affair-a-sensational-archaeological-hoax-made-science-front-page-news-in-1920s-france-260967

Why the Trump administration’s comparison of antifa to violent terrorist groups doesn’t track

Source: The Conversation – USA – By Art Jipson, Associate Professor of Sociology, University of Dayton

President Donald Trump speaks at the White House during a meeting on antifa, as Attorney General Pam Bondi, left, and Homeland Security Secretary Kristi Noem listen, on Oct. 8, 2025. AP Photo/Evan Vucci

When Homeland Security Secretary Kristi Noem compared antifa to the transnational criminal group MS-13, Hamas and the Islamic State group in October 2025, she equated a nonhierarchical, loosely organized movement of antifascist activists with some of the world’s most violent and organized militant groups.

“Antifa is just as dangerous,” she said.

It’s a sweeping claim that ignores crucial distinctions in ideology, organization and scope. Comparing these groups is like comparing apples and bricks: They may both be called groups, but that’s where the resemblance stops.

Noem’s statement echoed the logic of a September 2025 Trump administration executive order that designated antifa as a “domestic terrorist organization.” The order directs all relevant federal agencies to investigate and dismantle any operations, including the funding sources, linked to antifa.

But there is no credible evidence from the FBI or the Department of Homeland Security that supports such a comparison. Independent terrorism experts don’t see the similarities either.

Data shows that the movement can be confrontational and occasionally violent. But antifa is neither a terrorist network nor a major source of organized lethal violence.

Antifa, as understood by scholars and law enforcement, is not an organization in any formal sense. It lacks membership rolls and leadership hierarchies. It doesn’t have centralized funding.

As a scholar of social movements, I know that antifa is a decentralized movement animated by opposition to fascism and far-right extremism. It’s an assortment of small groups that mobilize around specific protests or local issues. And its tactics range from peaceful counterdemonstrations to mutual aid projects.

For example, in Portland, Oregon, local antifa activists organized counterdemonstrations against far-right rallies in 2019.

Antifa groups active in Houston during Hurricane Harvey in 2017 coordinated food, supplies and rescue support for affected residents.

No evidence of terrorism

The FBI and DHS have classified certain anarchist or anti-fascist groups under the broad category of “domestic violent extremists.” But neither agency nor the State Department has ever previously designated antifa as a terrorist organization.

The data on political violence reinforces this point.

A woman holds a yellow sign while walking with a group of people.
A woman holds a sign while protesting immigration raids in San Francisco on Oct. 23, 2025.
AP Photo/Noah Berger

A 2022 report by the Counter Extremism Project found that the overwhelming majority of deadly domestic terrorist incidents in the United States in recent years were linked to right-wing extremists. These groups include white supremacists and anti-government militias that promote racist or authoritarian ideologies. They reject democratic authority and often seek to provoke social chaos or civil conflict to achieve their goals.

Left-wing or anarchist-affiliated violence, including acts attributed to antifa-aligned people, accounts for only a small fraction of domestic extremist incidents and almost none of the fatalities. Similarly, in 2021, the George Washington University Program on Extremism found that anarchist or anti-fascist attacks are typically localized, spontaneous and lacking coordination.

By contrast, the organizations Noem invoked – Hamas, the Islamic State group and MS-13 – share structural and operational characteristics that antifa lacks.

They operate across borders and are hierarchically organized. They are also capable of sustained military or paramilitary operations. They possess training pipelines, funding networks, propaganda infrastructure and territorial control. And they have orchestrated mass-casualty attacks such as the 2015 Paris attacks and the 2016 Brussels bombings.

In short, they are military or criminal organizations with strategic intent. Noem’s claim that antifa is “just as dangerous” as these groups is not only empirically indefensible but rhetorically reckless.

Turning dissent into ‘terrorism’

So why make such a claim?

Noem’s statement fits squarely within the Trump administration’s broader political strategy that has sought to inflate the perceived threat of left-wing activism.

Casting antifa as a domestic terrorist equivalent of the Islamic State group or Hamas serves several functions.

It stokes fear among conservative audiences by linking street protests and progressive dissent to global terror networks. It also provides political cover for expanded domestic surveillance and harsher policing of protests.

Protesters, some holding signs, walk toward a building with a dome.
Demonstrators hold protest signs during a march from the Atlanta Civic Center to the Georgia State Capitol on Oct. 18, 2025, in Atlanta.
Julia Beverly/Getty Images

Additionally, it discredits protest movements critical of the right. In a polarized media environment, such rhetoric performs a symbolic purpose. It divides the moral universe into heroes and enemies, order and chaos, patriots and radicals.

Noem’s comparison reflects a broader pattern in populist politics, where complex social movements are reduced to simple, threatening caricatures. In recent years, some Republican leaders have used antifa as a shorthand for all forms of left-wing unrest or criticism of authority.

Antifa’s decentralized structure makes it a convenient target for blame. That’s because it lacks clear boundaries, leadership and accountability. So any act by someone identifying with antifa can be framed as representing the whole movement, whether or not it does. And by linking antifa to terrorist groups, Noem, the top anti-terror official in the country, turns a political talking point into a claim that appears to carry the weight of national security expertise.

The problem with this kind of rhetoric is not just that it’s inaccurate. Equating protest movements with terrorist organizations blurs important distinctions that allow democratic societies to tolerate dissent. It also risks misdirecting attention and resources away from more serious threats — including organized, ideologically driven groups that remain the primary source of domestic terrorism in the U.S.

As I see it, Noem’s claim reveals less about antifa and more about the political uses of fear.

By invoking the language of terrorism to describe an anti-fascist movement, she taps into a potent emotional current in American politics: the desire for clear enemies, simple explanations and moral certainty in times of division.

But effective homeland security depends on evidence, not ideology. To equate street-level confrontation with organized terror is not only wrong — it undermines the credibility of the very institutions charged with protecting the public.

The Conversation

Art Jipson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the Trump administration’s comparison of antifa to violent terrorist groups doesn’t track – https://theconversation.com/why-the-trump-administrations-comparison-of-antifa-to-violent-terrorist-groups-doesnt-track-267514

‘I went outside and cried’: what staff at aged care facilities say about their grief when residents die

Source: The Conversation – in French – By Jennifer Tieman, Matthew Flinders Professor and Director of the Research Centre for Palliative Care, Death and Dying, Flinders University

Repeated experiences of death can lead to cumulative grief. Maskot/Getty Images

As the population ages, we are living longer and dying older. End-of-life care is therefore taking on an ever larger role in aged care. In Canada, about 30% of people aged 85 and over live in a nursing home or seniors’ residence, a proportion that rises significantly at more advanced ages.

But what does this mean for those who work in aged care? Research suggests that care staff experience a particular kind of grief when residents die. Yet their grief often goes unnoticed, and they can be left without adequate support.


This article is part of our series La Révolution grise (The Grey Revolution). La Conversation examines, from every angle, how the aging of the huge boomer cohort is reshaping a society it has been transforming since birth. Housing, work, culture, food, travel, health care, ways of living… discover with us the upheavals under way, and those still to come.

Building relationships over time

Staff in aged care facilities do more than help residents shower or eat; they are actively involved with residents and build bonds with them.

In our own research, we spoke with care workers who look after older people in residential facilities and in their own homes.

Care staff know that many of the people they look after will die, and that they have a role to play in supporting them toward the end of their lives. Through their work, they often form rich and rewarding relationships with the older people in their care.

As a result, the death of an older person can be a source of deep sadness for care staff. As one of them told us:

I know I grieve for some of those who die […] You spend time with them and you love them.

Some of the care workers we interviewed described being present with older people, talking to them or holding their hand as they died. Others explained that they shed tears for the person who had died, but also for their own loss, because they had known the older person and been involved in their life.

I think what made it worse was when her breathing became very shallow and I knew she was nearing the end. I stepped out. I told her I was going out for a moment. I went outside and I cried, because I wished I could save her, but I knew I couldn’t.

Sometimes care staff do not get the chance to say goodbye, or to be acknowledged as someone who has suffered a loss, even though they may have cared for the person for months or years. One aged care worker noted:

If people die in hospital, it’s a different kind of grief. Because they can’t say goodbye. Often the hospital doesn’t tell you.

Care staff must also often help families and loved ones come to terms with the death of a parent, partner or friend. This can add to the emotional burden on staff who may themselves be grieving.






Cumulative grief

Repeated experiences of death can lead to cumulative grief and emotional strain. While the staff members we interviewed found meaning and value in their work, they also found it hard to face death on a regular basis.

One staff member told us that with time, and after being confronted with many deaths, you can “feel a bit robotic. Because that’s what you have to become to be able to cope.”

Organizational problems such as understaffing and heavy workloads can also exacerbate these feelings of exhaustion and dissatisfaction. Staff stressed the need for support to help them cope.

Sometimes all you want is to talk. You don’t need someone to solve anything for you. You just want to be heard.




Helping staff manage their grief

Aged care organizations need to take steps to support the well-being of their staff, including by acknowledging the grief that many feel when older people die.

After an older person dies, offering support to the staff who worked closely with that person, and acknowledging the emotional bonds between them, are effective ways to recognize and validate staff grief. It can be as simple as asking staff members how they are doing, or giving them the opportunity to take time to grieve for the person who has died.






Workplaces should also encourage self-care more broadly, promoting activities such as scheduled breaks, connecting with colleagues, and prioritizing downtime and physical activity. Staff appreciate workplaces that encourage, normalize and support their self-care practices.

We also need to think about how we can normalize talking about death and the dying process within our families and communities. A reluctance to accept death as a part of life can add to the emotional burden on staff, particularly if families see a death as a failure of care.

Conversely, aged care staff told us again and again how important it was to receive positive feedback and recognition from families. As one care worker recalled:

We had a death this weekend. It was a very long-term resident. And his daughter came in especially this morning to tell me how fantastic the care had been. That comforts me; it confirms that what we are doing is right.

As members of families and communities, we need to recognize that care workers are especially vulnerable to feelings of grief and loss, because they have often built relationships over months or years with the people they care for. By supporting the well-being of these essential workers, we help them keep caring for us and our loved ones as we age and approach the end of our lives.

La Conversation Canada

Jennifer Tieman receives funding from the Department of Health, Disability and Ageing, the Department of Health and Wellbeing (SA) and the Medical Research Future Fund. Specific research grants, as well as national grants for projects such as ELDAC, CareSearch and palliAGED, have supported the research and projects whose findings and resources are presented in this article. Jennifer is a member of various project committees and advisory groups, including the Advance Care Planning Australia steering committee, the IHACPA Aged Care Network and Palliative Care Australia’s national expert advisory group.

Dr Priyanka Vandersman receives funding from Department of Health, Disability and Ageing. She is affiliated with Flinders University, End of Life Directions for Aged Care project. She is a Digital Health adviser for the Australian Digital Health Agency, and serves as committee member for the Nursing and Midwifery in Digital Health group within the Australian Institute of Digital Health, as well as Standards Australia’s MB-027 Ageing Societies committee.

ref. ‘I went outside and cried’: what staff at aged care facilities say about their grief when residents die – https://theconversation.com/je-suis-sorti-et-jai-pleure-ce-que-le-personnel-des-etablissements-pour-personnes-agees-dit-de-son-chagrin-lorsque-des-residents-decedent-263502

Future of nation’s energy grid hurt by Trump’s funding cuts

Source: The Conversation – USA (2) – By Roshanak (Roshi) Nateghi, Associate Professor of Sustainability, Georgetown University

Large-capacity electrical wires carry power from one place to another around the nation. Stephanie Tacy/NurPhoto via Getty Images

The Trump administration’s widespread cancellation and freezing of clean energy funding is also hitting essential work to improve the nation’s power grid. That includes investments in grid modernization, energy storage and efforts to protect communities from outages during extreme weather and cyberattacks. Ending these projects leaves Americans vulnerable to more frequent and longer-lasting power outages.

The Department of Energy has defended the cancellations, saying that “the projects did not adequately advance the nation’s energy needs, were not economically viable and would not provide a positive return on investment of taxpayer dollars.” Yet before any funds were released through these programs, each grant had to pass evaluations based on the department’s standards. Those included rigorous assessments of technical merit, potential risks and cost-benefit analyses — all designed to ensure alignment with national energy priorities and responsible stewardship of public funds.

I am an associate professor studying sustainability, with over 15 years of experience in energy systems reliability and resilience. In the past, I also served as a Department of Energy program manager focused on grid resilience. I know that many of these canceled grants were foundational investments in the science and infrastructure necessary to keep the lights on, especially when the grid is under stress.

The dollar-value estimates vary, and some of the money has already been spent. A list of canceled projects maintained by energy analysis company Yardsale totals about US$5 billion. An Oct. 2, 2025, announcement from the department touts $7.5 billion in cuts to 321 awards across 223 projects. Additional documents leaked to Politico reportedly identified additional awards under review. Some media reports suggest the full value of at-risk commitments may reach $24 billion — a figure that has not been publicly confirmed or refuted by the Trump administration.

These were not speculative ventures. And some of them were competitively awarded projects that the department funded specifically to enhance grid efficiency, reliability and resilience.

Grid improvement funding

For years, the federal government has been criticized for investing too little in the nation’s electricity grid. The long-term planning — and spending — required to ensure the grid reliably serves the public often falls victim to short-term political cycles and shifting priorities across both parties.

But these recent cuts come amid increasingly frequent extreme weather, increased cybersecurity threats to the systems that keep the lights on, and aging grid equipment that is nearing the end of its life.

These projects sought to make the grid more reliable so it can withstand storms, hackers, accidents and other problems.

National laboratories

In addition to those project cancellations, President Donald Trump’s proposed budget for 2026 contains deep cuts to the Office of Energy Efficiency and Renewable Energy, a primary funding source for several national laboratories, including the National Renewable Energy Laboratory, which may face widespread layoffs.

Among other work, these labs conduct fundamental grid-related research like developing and testing ways to send more electricity over existing power lines, creating computational models to simulate how the U.S. grid responds to extreme weather or cyberattacks, and analyzing real-time operational data to identify vulnerabilities and enhance reliability.

These efforts are necessary to design, operate and manage the grid, and to figure out how best to integrate new technologies.

A group of solar panels sits next to several large metal containers, as a train rolls past in the background.
Solar panels and large-capacity battery storage can support microgrids that keep key services powered despite bad weather or high demand.
Sandy Huffaker/AFP via Getty Images

Grid resilience and modernization

Some of the projects that have lost funding sought to upgrade grid management – including improved sensing of real-time voltage and frequency changes in the electricity sent to homes and businesses.

That program, the Grid Resilience and Innovation Partnerships Program, also funded efforts to automate grid operations, allowing faster response to outages or changes in output from power plants. It also supported developing microgrids – localized systems that can operate independently during outages. The canceled projects in that program, estimated to total $724.6 million, were in 24 states.

For example, a $19.5 million project in the Upper Midwest would have installed smart sensors and software to detect overloaded power lines or equipment failures, helping people respond faster to outages and prevent blackouts.

A $50 million project in California would have boosted the capacity of existing subtransmission lines, improving power stability and grid flexibility by installing a smart substation, without needing new transmission corridors.

Microgrid projects in New York, New Mexico and Hawaii would have kept essential services running during disasters, cyberattacks and planned power outages.

Another canceled project included $11 million to help utilities in 12 states use electric school buses as backup batteries, delivering power during emergencies and peak demand, like on hot summer days.

Several transmission projects were also canceled, including a $464 million effort in the Midwest to coordinate multiple grid connections from new generation sites.

Long-duration energy storage

The grid must meet demand at all times, even when wind and solar generation is low or when extreme weather downs power lines. A key element of that stability involves storing massive amounts of electricity for when it’s needed.

One canceled project would have spent $70 million turning retired coal plants in Minnesota and Colorado into buildings holding iron-air batteries capable of powering several thousand homes for as many as four days.

Two large yellow buses are parked next to each other.
Electric school buses like these could provide meaningful amounts of power to the grid during an outage.
Chris Jackson for The Washington Post via Getty Images

Rural and remote energy systems

Another terminated program sought to help people who live in rural or remote places, who are often served by just one or two power lines rather than a grid that can reroute power around an interruption.

A $30 million small-scale bioenergy project would have helped three rural California communities convert forest and agricultural waste into electricity.

Not all of the terminated initiatives were explicitly designed for resilience. Some would have strengthened grid stability as a byproduct of their main goals. The rollback of $1.2 billion in hydrogen hub investments, for example, undermines projects that would have paired industrial decarbonization with large-scale energy storage to balance renewable power. Similarly, several canceled industrial modernization projects, such as hybrid electric furnaces and low-carbon cement plants, were structured to manage power demand and integrate clean energy, to improve grid stability and flexibility.

The reliability paradox

The administration has said that these cuts will save money. In practice, however, they shift spending from prevention of extended outages to recovery from them.

Without advances in technology and equipment, grid operators face more frequent outages, longer restoration times and rising maintenance costs. Without investment in systems that can withstand storms or hackers, taxpayers and ratepayers will ultimately bear the costs of repairing the damage.

Some of the projects now on hold were intended to allow hospitals, schools and emergency centers to reduce blackout risks and speed power restoration. These are essential reliability and public safety functions, not partisan initiatives.

Canceling programs to improve the grid leaves utilities and their customers dependent on emergency stopgaps — diesel generators, rolling blackouts and reactive maintenance — instead of forward-looking solutions.

The Conversation

Roshanak (Roshi) Nateghi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Future of nation’s energy grid hurt by Trump’s funding cuts – https://theconversation.com/future-of-nations-energy-grid-hurt-by-trumps-funding-cuts-267504

Children learn to read with books that are just right for them – but that might not be the best approach

Source: The Conversation – USA (2) – By Timothy E Shanahan, Distinguished Professor Emeritus of Literacy, University of Illinois Chicago

Children and an adult read books at the Altadena Main Library in Altadena, Calif., in March 2025. Hans Gutknecht/MediaNews Group/Los Angeles Daily News via Getty Images

After decades of stagnating reading performance, American literacy levels have begun to drop, according to the National Assessment of Educational Progress, a program of the Department of Education.

The average reading scores of 12th graders in 2024 were 3 points lower than they were in 2019. More kids are failing to even reach basic levels of reading that would allow them to successfully do their schoolwork, according to the assessment.

There is much blaming and finger-pointing as to why the U.S. isn’t doing better. Some experts say that parents are allowing kids to spend too much time on screens, while others argue that elementary teachers aren’t teaching enough phonics, or that schools closing during the COVID-19 pandemic has had lingering effects.

As a scholar of reading, I think the best explanation is that most American schools are teaching reading using an approach that new research shows severely limits students’ opportunities to learn.

A person's hands partially cover a stack of children's books.
Students often learn to read with books that are preselected so they can easily understand most of the words in them.
Jacqueline Nix/iStock/Getty Images Plus

A Goldilocks approach to books

In the 1940s, Emmett Betts, an education scholar and theorist, proposed the idea that if the books used to teach reading were either too easy or too hard, then students’ learning would be stifled.

The thinking went that kids should be taught to read with books that were just the right fit for them.

The theory was backed by research and included specific criteria for determining the best books for each child.

The idea was that kids should work with books they could already read with 95% word accuracy and 75% to 89% comprehension.

Most American schools continue to use this approach to teaching reading, nearly a century later.

A popular method

To implement this approach, schools usually test children multiple times each year to determine which books they should be allowed to read in school. Teachers and librarians will label and organize books into color-coded bins, based on their level of difficulty. This practice helps ensure that no child strays into a book judged too difficult for them to easily follow. Teachers then divide their class into reading groups based on the book levels the students are assigned.

Most elementary teachers and middle school teachers say they try to teach at their students’ reading levels, as do more than 40% of high school English teachers.

This approach might sound good, but it means that students work with books they can already read pretty well. And they might not have very much to learn from those books.

New research challenges these widely used instructional practices. My July 2025 book, “Leveled Reading, Leveled Lives,” explains that students learn more when taught with more difficult texts. In other words, this popular approach to teaching has been holding kids back rather than helping them succeed.

Many students will read at levels that match the grades they are in. But kids who cannot already read those grade-level texts with high comprehension are demoted to below-grade-level books in the hopes that this will help them make more progress.

Often, parents do not know that their children are reading at a level lower than the grade they are in.

Perhaps that is why, while more than one-third of American elementary students read below grade level, 90% of parents think their kids are at or above grade level.

What’s in a reading level?

The approach to “just right” reading has long roots in American history.

In the 1840s, U.S. schools were divided into grade levels based on children’s ages. In response, textbook publishing companies organized their reading textbooks the same way. There was a first grade book, a second grade book and so on.

These reading levels admittedly were somewhat arbitrary. The grade-level reading diet proposed by one company may have differed from its competitors’ offerings.

That changed in 2010 with the Common Core state standards, a multistate educational initiative that set K-12 learning goals in reading and math in more than 40 states.

At the time, too many students were leaving high school without the ability to read the kinds of books and papers used in college, the workplace or the military.

Accordingly, Common Core set ranges of text levels for each grade to ensure that by high school graduation, students would be able to easily handle reading they will encounter in college and other places after graduation. Many states have replaced or revised those standards over the past 15 years, but most continue to keep those text levels as a key learning goal.

That means most states have set reading levels that their students should be able to reach by each grade. Students who meet those levels should graduate from high school with sufficient literacy to participate fully in American society.

But this instructional level theory can stand in the way of getting kids to those goals. If students cannot already read those grade-level texts reasonably well, the teacher is to provide easier books rather than adjust the instruction to help them catch up.

But that raises a question: If children spend their time while they are in the fourth grade reading second grade books, will they ever catch up?

Two young children sit at a desk and read books.
New research suggests that children could benefit more from reading books that are slightly advanced for them, even if they cannot immediately grasp almost all of the words.
Jerry Holt/Star Tribune via Getty Images

What the research says

For more than 40 years, there was little research into the effectiveness of teaching reading with books that were easy for kids to follow. Still, the number of schools buying into the idea burgeoned.

Research into the effectiveness – or, actually, the ineffectiveness – of this method has finally begun to accumulate. These studies show that teaching students at their reading levels, rather than their grade levels, either offers no benefit or can slow how much children learn.

Since 2000, the federal government has spent tens of billions of dollars trying to increase children’s literacy rates. State expenditures toward this goal have been considerable, as well.

Despite these efforts, there have been no improvements in U.S. reading achievement for middle school or high school students since 1970.

I believe it is important to consider the emerging research that shows there will not be considerable reading gains until kids are taught to read with sufficiently challenging and meaty texts.

The Conversation

Timothy E Shanahan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Children learn to read with books that are just right for them – but that might not be the best approach – https://theconversation.com/children-learn-to-read-with-books-that-are-just-right-for-them-but-that-might-not-be-the-best-approach-267510

How the Philadelphia Art Museum is reinventing itself for the Instagram age

Source: The Conversation – USA (2) – By Sheri Lambert, Professor of Marketing, Temple University

Modernizing a century-old cultural brand in Philly can be risky. Rob Cusick/Philadelphia Art Museum

On Philadelphia’s famed Benjamin Franklin Parkway, where stone, symmetry and civic ambition meet, something subtle yet seismic has happened.

The city’s grandest temple to art has shed a preposition.

After nearly a century as the Philadelphia Museum of Art, or PMA, the institution now calls itself simply the Philadelphia Art Museum – or PhAM, as the new logo and public rollout invite us to say.

The change may seem cosmetic, but as a marketing scholar at Temple University whose research focuses on branding and digital marketing strategy, I know that in the tight geometry of naming and branding, every word matters.

The museum’s new identity signals not just a typographic update but a transformation in tone, purpose and reach. It’s as if the museum has taken a long, deep breath … and decided to loosen its collar.

Insta-friendly design

For decades, the museum’s granite facade has represented permanence. Its pediments crowned with griffins – mythological creatures that are part lion and part eagle – have looked out across the parkway like silent sentinels of culture. The rebrand dares to make those sentinels dance.

In its new form, PhAM is deliberately more flexible: less marble, more motion. The logo revives the griffin but sets it within a bold, circular emblem that is unmistakably digital. The new logo is chunkier, more assertive and designed to hold its own on a phone screen.

Like the Metropolitan Museum of Art’s 2015 digital overhaul in New York, the Philadelphia Art Museum is leaning into an era in which visitors first encounter culture through screens, not doors. As the Met’s former chief digital officer Sree Sreenivasan stated, “Our competition is Netflix and Candy Crush,” not other museums.

The PhAM’s visual language is redesigned for environments filled with scrolling, swiping and sharing. Through this marketer’s lens, the goal is clear: to ensure that the museum lives not only on the parkway but in the algorithm.

A wall with colorful white, pink, blue, teal and black signs with modern graphic designs
The museum’s new branding and signage aim to appeal to younger and more diverse audiences.
Rob Cusick/Philadelphia Art Museum

A little younger, more cheeky

There is something refreshing about a legacy institution willing to meet its existing or future audience where they already are. The museum’s leadership frames the change as a broader renewal – a commitment to accessibility, community and openness.

The rebrand showcases “Philadelphia,” which takes center stage in the new name and logo – a subtle but potent reminder that the museum’s roots are here. In the previous design, the word “Art” was much larger and bolder than “Philadelphia.”

And then there’s the nickname: PhAM. It’s playful – think, “Hey, fam!” or a Batman comic-style Pow! Bam! PhAM! – compact, easy to say and just cheeky enough to intrigue a new generation. It’s Instagrammable and hashtaggable. It’s got trending power. I asked a lecture hall full of marketing students in their 20s what they thought of it, and they generally loved it. They thought it was “fun,” “hip” and had enough “play” in the name to make them want to visit.

It’s also a nod to the way folks from Philly actually talk about the place. No one in Philadelphia ever says, “Let’s go to the Museum of Art.” They call it “the Art Museum.” The brand finally caught up with the vernacular.

A balancing act

Rebrands in the cultural sector are rarely simple makeovers. They are identity reckonings.

The Tate Modern in London mastered this dance in 2016 when it modernized its graphics and digital outreach while keeping the weight of its bones intact.

Others have stumbled.

When the Whitney Museum in New York debuted a minimalist “W” in 2013, reactions were mixed. To me, it felt more like a tech startup than a place of art.

PhAM now faces that same paradox. How does the cultural institution appear modern without erasing its majesty? Museums, after all, trade in authority as much as accessibility.

The new name carries subtle risks. Some longtime patrons may bristle at the casual tone. And the phrase “Museum of Art” carries an academic formality that “Art Museum” softens.

And the more flexible a brand’s logo or voice becomes, the more it risks dissolving into the noise of digital sameness. While brands must adapt their visuals and tone to fit different social media platforms and audiences, there is a fine line between flexibility and dilution. The more a brand’s logo, voice or visual identity bends to accommodate every digital trend or platform aesthetic, the greater the risk that it loses its edge.

For example, when too many brands adopt a minimalistic sans-serif logo – as we’ve seen with fashion brands such as Burberry and Saint Laurent – the result is a uniform aesthetic that makes it difficult for any single identity to stand out.

Flexibility should serve differentiation, not erode it.

In the end, I appreciate how PhAM’s revival of the griffin, steeped in the building’s history, keeps the brand tethered to its architectural DNA.

For now, the rebrand communicates both humility and confidence. It acknowledges that even icons must learn to speak new languages. The gesture isn’t just aesthetic; it’s generational. By softening its posture and modernizing its voice, the Philadelphia Art Museum appears intent on courting a new cohort of museumgoers used to stories unfolding on screens. This is a rebrand not merely for the faithful but for those who might never have thought of the museum as “for them” in the first place.

A billboard on an urban street reads 'Art for all, All for art'
Philadelphia Art Museum’s new branding on display on N. 5th Street in North Philadelphia.
Sheri Lambert, CC BY-NC-SA

Read more of our stories about Philadelphia and Pennsylvania, or sign up for our Philadelphia newsletter on Substack.

The Conversation

Sheri Lambert does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How the Philadelphia Art Museum is reinventing itself for the Instagram age – https://theconversation.com/how-the-philadelphia-art-museum-is-reinventing-itself-for-the-instagram-age-267945

Why you can salvage moldy cheese but never spoiled meat − a toxicologist advises on what to watch out for

Source: The Conversation – USA (3) – By Brad Reisfeld, Professor Emeritus of Chemical and Biological Engineering, Biomedical Engineering, and Public Health, Colorado State University

Molds on foods produce a range of microbial toxins and biochemical byproducts that can be harmful. JulieAlexK/iStock via Getty Images

When you open the refrigerator and find a wedge of cheese flecked with green mold, or a package of chicken that smells faintly sour, it can be tempting to gamble with your stomach rather than waste food.

But the line between harmless fermentation and dangerous spoilage is sharp. Consuming spoiled foods exposes the body to a range of microbial toxins and biochemical by-products, many of which can interfere with essential biological processes. The health effects can vary from mild gastrointestinal discomfort to severe conditions such as liver cancer.

I am a toxicologist and researcher specializing in how foreign chemicals such as those released during food spoilage affect the body. Many spoiled foods contain specific microorganisms that produce toxins. Because individual sensitivity to these chemicals varies, and the amount present in spoiled foods can also vary widely, there are no absolute guidelines on what is safe to eat. However, it’s always a good idea to know your enemies so you can take steps to avoid them.

Nuts and grains

In plant-based foods such as grains and nuts, fungi are the main culprits behind spoilage, forming fuzzy patches of mold in shades of green, yellow, black or white that usually give off a musty smell. Colorful though they may be, many of these molds produce toxic chemicals called mycotoxins.

Two common fungi found on grains and nuts such as corn, sorghum, rice and peanuts are Aspergillus flavus and A. parasiticus. They can produce mycotoxins known as aflatoxins, which form molecules called epoxides that can trigger mutations when they bind to DNA. Repeated exposure to aflatoxins can damage the liver and has been linked to liver cancer, especially for people who already have other risk factors for it, such as hepatitis B infection.

Mold on corn cobs
Fusarium molds can grow on corn and other grains.
Orest Lyzhechka/iStock via Getty Images Plus

Fusarium is another group of fungal pathogens that can grow as mold on grains such as wheat, barley and corn, especially at high humidity. Infected grains may appear discolored or have a pinkish or reddish hue, and they might emit a musty odor. Fusarium fungi produce mycotoxins called trichothecenes, which can damage cells and irritate the digestive tract. They also make another toxin, fumonisin B1, which disrupts how cells build and maintain their outer membranes. Over time, these effects can harm the liver and kidneys.

If grains or nuts look moldy, discolored or shriveled, or if they have an unusual smell, it’s best to err on the side of caution and throw them out. Aflatoxins, especially, are known to be potent cancer-causing agents, so there is no safe level of exposure.

Fruits

Fruits can also harbor mycotoxins. When they become bruised or overripe, or are stored in damp conditions, mold can easily take hold and begin producing these harmful substances.

One biggie is a blue mold called Penicillium expansum, which is best known for infecting apples but also attacks pears, cherries, peaches and other fruit. This fungus produces patulin, a toxin that interferes with key enzymes in cells to hobble normal cell functions and generate unstable molecules called reactive oxygen species that can harm DNA, proteins and fats. In large amounts, patulin can injure major organs such as the kidneys, liver, digestive tract and immune system.

P. expansum’s blue and green cousins, Penicillium italicum and Penicillium digitatum, are frequent flyers on oranges, lemons and other citrus fruits. It’s not clear whether they produce dangerous toxins, but they taste awful.

Green and white mold on an orange
Penicillium digitatum forms a pretty green growth on citrus fruits that makes them taste terrible.
James Scott via Wikimedia Commons, CC BY-SA

It is tempting to just cut off the moldy parts of a fruit and eat the rest. However, molds can send out microscopic, rootlike structures called hyphae that penetrate deeply into food, potentially releasing toxins even in seemingly unaffected bits. Especially for soft fruits, where hyphae can grow more easily, it’s safest to toss moldy specimens. Do it at your own risk, but for hard fruits I do sometimes just cut off the moldy bits.

Cheese

Cheese showcases the benefits of controlled microbial growth. In fact, mold is a crucial component in many of the cheeses you know and love. Blue cheeses such as Roquefort and Stilton get their distinctive, tangy flavor from chemicals produced by a fungus called Penicillium roqueforti. And the soft, white rind on cheeses such as Brie or Camembert is itself a mold, one that contributes to their flavor and texture.

On the other hand, unwanted molds look fuzzy or powdery and may take on unusual colors. Greenish-black or reddish molds, sometimes caused by Aspergillus species, can be toxic and should be discarded. Also, species such as Penicillium commune produce cyclopiazonic acid, a mycotoxin that disrupts calcium flow across cell membranes, potentially impairing muscle and nerve function. At high enough levels, it may cause tremors or other nervous system symptoms. Fortunately, such cases are rare, and spoiled dairy products usually give themselves away by their sharp, sour, rank odor.

Cheesemaker examining cheeses
Mold is a crucial component of blue cheeses, adding a distinctive, tangy taste.
Peter Cade/Photodisc via Getty Images

As a general rule, discard soft cheeses such as ricotta, cream cheese and cottage cheese at the first sign of mold. Because these cheeses contain more moisture, the mold’s filaments can spread easily.

Hard cheeses, including cheddar, Parmesan and Swiss, are less porous. So cutting away at least 1 inch (about 2.5 centimeters) around the moldy spot is a safer bet – just take care not to touch the mold with your knife.

Meat

While molds are the primary concern for plant and dairy spoilage, bacteria are the main agents of meat decomposition. Telltale signs of meat spoilage include a slimy texture, discoloration that’s often greenish or brownish, and a sour or putrid odor.

That stink is caused by chemicals such as cadaverine and putrescine that form as meat decomposes, and they can cause nausea, vomiting and abdominal cramps, as well as headaches, flushing or drops in blood pressure. Some harmful bacteria, however, do not produce noticeable changes in smell, appearance or texture, making it difficult to assess the safety of meat based on sensory cues alone.

Spoiled meats are rife with bacterial dangers. Escherichia coli, a common contaminant of beef, produces shiga toxin, which chokes off some cells’ ability to make proteins and can cause a dangerous kidney disease called hemolytic uremic syndrome. Poultry often carries the bacterium Campylobacter jejuni, which produces a toxin that invades gastrointestinal cells, often leading to diarrhea, abdominal cramps and fever. It can also provoke the body’s immune system to attack its own nerves, potentially sparking a rare condition called Guillain–Barré syndrome, which can lead to temporary paralysis.

Salmonella, found in eggs and undercooked chicken, causes one of the most common types of food poisoning, with diarrhea, nausea and abdominal cramps. It releases toxins into the lining of the small and large intestines that drive extensive inflammation. Clostridium perfringens also attacks the gut, but its toxins work by damaging cell membranes. And Clostridium botulinum, which can lurk in improperly stored or canned meats, produces botulinum toxin, one of the most potent biological poisons – lethal even in tiny amounts.

It is impossible for meat to be totally free of bacteria, but the longer it sits in your refrigerator – or worse, on your counter or in your grocery bag – the more those bacteria multiply. And you can’t cook the yuck away. Most bacteria die at meat-safe temperatures – between 145 and 165 degrees Fahrenheit (63-74 C) – but many bacterial toxins are heat stable and survive cooking.

The Conversation

Brad Reisfeld does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why you can salvage moldy cheese but never spoiled meat − a toxicologist advises on what to watch out for – https://theconversation.com/why-you-can-salvage-moldy-cheese-but-never-spoiled-meat-a-toxicologist-advises-on-what-to-watch-out-for-263908

Housing: municipal parties trapped in the logic of the market

Source: The Conversation – in French – By Renaud Goyer, Professor of Social Policy and Programs, School of Social Work, Université du Québec à Montréal (UQAM)

With municipal elections scheduled for Nov. 3 just weeks away, housing has emerged as one of the central issues of the campaign in Quebec. Amid an affordability crisis and rising evictions, municipal parties are competing with promises to expand the housing supply, but their proposals often remain captive to the same logic: relying on the market to solve a crisis the market helped create.

This article is part of our series Nos villes d’hier à demain (Our cities, from yesterday to tomorrow). The urban fabric is undergoing many transformations, each with its own cultural, economic, social and – especially in this election year – political implications. To shed light on these issues, La Conversation invites researchers to examine what is happening in our cities.

An ineffective bylaw

In 2005, well before Projet Montréal took office at city hall, the municipal council (then led by Gérald Tremblay) adopted a policy of building social and affordable housing within private development projects, relying on negotiation with real estate developers. Once in power, Projet Montréal strengthened the policy by requiring the inclusion of affordable, family-sized and/or social housing. Today, that policy is judged ineffective by all political parties, including those that created it.

In fact, neither the inclusion policy nor the Règlement pour une métropole mixte (bylaw for a diverse metropolis) has produced enough affordable, family-sized and/or social housing to meet needs. In 97 per cent of cases, developers prefer to pay the modest compensation the rules provide for rather than build the units: barely 250 units built per year, while housing starts represented at least 20 times that number.

All parties make the same criticism: the inclusion policy is said to be too restrictive for real estate developers, who would hesitate to launch new projects, and this regulatory brake is said to have slowed development. Yet the numbers tell a very different story: since the bylaw was adopted, Montreal has seen record years for housing starts.

The grip of supply-side policy

In reality, this policy, like the current campaign proposals, rests on a single idea: to solve the housing crisis, it would be enough to build more, regardless of the type of housing.

In this vein, the parties are adopting the strategy of the Canada Mortgage and Housing Corporation: make life easier for developers by reducing barriers to construction and "red tape." Projet Montréal, for example, has announced the designation of "ready-to-build" zones, while Ensemble Montréal promises to build 50,000 housing units in five years by speeding up permitting procedures.

When it comes to affordable or social housing, intentions remain vaguer. All parties – except Action Montréal – want to increase the number of such units, but few put forward concrete measures. The proposed solutions are mostly financial: municipal guarantees (Projet Montréal), a public-private fund for nonprofit organizations to broaden the range of supply (Futur Montréal), or microcredit to protect vulnerable tenants (Ensemble Montréal). These approaches reveal a contradiction: they seek to mobilize market mechanisms to produce housing… outside the market.

This logic is nothing new. Earlier inclusion policies also relied on collaboration with the private sector to build social or affordable housing. Yet that model has helped marginalize this type of unit in Montreal’s housing stock: the share of public housing (HLM), in particular, has declined over the past decade.

Read also: Municipal elections: cities’ issues are changing, but their powers are not

A crisis viewed through the prism of homelessness

For most political parties, the current crisis is not first and foremost a housing crisis but a homelessness crisis. Homelessness is very real, of course, but it too often serves to divert attention from the broader problem: access to housing for the population as a whole. Few proposals aim to house the greatest number of people or to strengthen the stock of social and affordable housing.


This approach reveals a certain political unease. The parties struggle to defend social housing without invoking the figure of marginalized or unhoused people, implicitly presented as the city’s "undesirables" to be kept out of sight. Housing policy thus becomes a tool for managing the visibility of poverty rather than a structural response to the crisis.

This shift no doubt explains the absence, in the current campaign, of any debate on sharing urban space from the perspective of unhoused people. The silence is surprising, given that the Office de consultation publique de Montréal tabled a report on the question this summer. The commissioners first point out that these cohabitation issues stem from the housing crisis and fuel the stigmatization, exclusion and criminalization of people experiencing homelessness. They then call on elected officials and candidates to exercise inclusive leadership on the issue.

Non-responsibility as a modus operandi

Municipal elected officials are not taking responsibility for cohabitation, housing and homelessness; they give the impression that these matters are not within their purview and that responsibility lies instead with Quebec City or Ottawa.

Yet on both housing and homelessness, the municipal level can act. Indeed, the Transition Montréal party points out that the city now has new taxation powers to fund housing initiatives – even if the party remains vague about how it would use them.

Like Vancouver in British Columbia – or Montreal itself in the 1980s – the city could take the lead on housing projects through an organization that already exists: the Société d’habitation de Montréal. Created by the city, it owns 5,000 off-market housing units and could be mobilized, with indexed funding, to decommodify existing housing or build new units.

Such an approach would make it possible to diversify modes of intervention, which over the past 20 years have relied mainly on the private sector and the market, and to entrust the community sector with the task of managing the crisis and remedying it. At worst, it would open up the debate on cities’ powers in housing.

With the vote just weeks away, the choice taking shape is less about political stripes than about competing visions of the city’s role: a mere facilitator of the market, or a city truly in charge of delivering housing?

The Conversation Canada

Renaud Goyer has received funding from the Social Sciences and Humanities Research Council of Canada.

Louis Gaudreau is a research associate at the Institut de recherche et d’informations socio-économiques (IRIS). He currently receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).

Léanne Tardif does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than her research organization.

ref. Housing: municipal parties trapped in the logic of the market – https://theconversation.com/logement-les-partis-municipaux-prisonniers-de-la-logique-du-marche-268069