Scary stories for kids: A Series of Unfortunate Events taught me that grief can’t be understood but can be managed

Source: The Conversation – UK – By Rebecca Wynne-Walsh, Lecturer in Film, English and Creative Arts, Edge Hill University

Brett Helquist/HarperCollins

Sourcing family-friendly frightening fiction can be a challenge. That said, while straightforward horror texts rarely serve family audiences, the gothic is a mode of storytelling with a long history of delighting and disgusting parents and children alike.

Naturally, there is intellectual and stylistic value to both classic horror and the gothic. However, while horror interacts more directly with fear, the gothic favours observing the tension surrounding the source of fear.




Read more:
Scary stories for kids: these tales of terror made me a hit at sleepovers as a pre-teen


The stereotypical gothic heroine is not only trapped in the haunted house, she desires to understand it. Children’s books which use the gothic mode of storytelling encourage a similar investigative impulse in children. This is the modus operandi of the Scooby Doo gang, for example: research, exploration and answer-seeking rather than simply succumbing to fear.

Some iconic examples of children’s gothic literature include Neil Gaiman’s Coraline (2002), Roald Dahl’s The Witches (1983), The Spiderwick Chronicles (Tony DiTerlizzi and Holly Black, 2003 to 2009) and The Saga of Darren Shan (Darren Shan, 2000 to 2004).


This article is part of a series of expert recommendations of spooky stories – on screen and in print – for brave young souls. From the surprisingly dark depths of Watership Down to Tim Burton’s delightfully eerie kid-friendly films, there’s a whole haunted world out there just waiting for kids to explore. Dare to dive in here.


While these are all excellent tales, the spooky story which impacted me most as a child, and still does as an adult, is Lemony Snicket’s A Series of Unfortunate Events (1999 to 2006). This 13-book series follows three orphaned siblings, Violet, Klaus and Sunny Baudelaire, as they are forced to navigate the homes of various (increasingly odd and occasionally villainous) guardians. All the while they try to evade capture at the hands of the evil Count Olaf, who seeks their family fortune, and to solve the mystery of what the VFD (Volunteer Fire Department) organisation is – the answer to which might hold the key to their parents’ mysterious past.

I was five years old when I received a copy of the first book in the series, aptly titled The Bad Beginning. That first foray into the dark world of the Baudelaires meant that, for the next few years, the days I got to go to the bookshop for the next instalment were among the most exciting I experienced.

Aside from being devilishly delightful tales full of mysteries, adventure, danger, songs and a surprising number of food recipes, these books never shied away from the harsher elements of real life. Among many important lessons, Snicket taught me that horseradish and wasabi are in the same family, that first impressions of new people aren’t always accurate and that grief may never be understood but can be managed.

As he writes in the second book, The Reptile Room:

[Grief] is like walking up the stairs to your bedroom in the dark, and thinking there is one more stair than there is. Your foot falls down, through the air, and there is a sickly moment of dark surprise as you try and readjust the way you thought of things.

During many of the most challenging parts of my childhood (and now my adulthood), these books offered me agency, riddles to solve, new words to learn, puzzles to put together and complex histories to understand. This is the core joy of these books – Snicket treats his intended readers (children) like people, instead of talking down to them.

The quirky and interactive elements of these books are a major factor in their enduring popularity. In an era of ever-decreasing attention spans, Snicket offers an interactive reading experience in which no two chapters – and even no two pages – are the same.

In one of the books, the Baudelaire children fall down a broken elevator shaft, a plot point illustrated literally by the three pitch-black pages which “narrate” their descent. Another book sees the children receive a coded message; that chapter must be read in front of a mirror to decipher the backwards text. And most, if not all, of the books incorporate poetry, songs, plays and paintings, which the Baudelaire orphans – and the readers – must use to decipher the mysteries surrounding the titular unfortunate events.

From the outset the reader is presented with total agency, invited to “shut the book” in a manner which directly encourages child autonomy. Nonetheless, children and adults alike have continued to engage with this franchise in all of its forms, whether that be the original books, the 2004 feature film, the Netflix series released in 2017, the audiobooks narrated by Tim Curry, the concept album based on the books by The Gothic Archies or the regularly updated Lemony Snicket website with its multiple extra materials.

In short, the spooky gothic fun never has to end. As someone who has read these books annually since their original release, I can confidently attest to this as I continue to try to solve the eternal mystery of the VFD and the reason why Snicket’s villains are so damn villainous.

If you have not yet had the chance to enter the wild, woeful and wondrous world of the Baudelaire children and the mysteries surrounding their series of unfortunate events, I encourage readers of all ages to ignore Snicket’s suggestion to shut the books. Indeed, look to these tales, in the words of Snicket, to find a “small, safe place in a troubling world”.

A Series of Unfortunate Events is suitable for children aged 8 to 14.

This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.


Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


The Conversation

Rebecca Wynne-Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Scary stories for kids: A Series of Unfortunate Events taught me that grief can’t be understood but can be managed – https://theconversation.com/scary-stories-for-kids-a-series-of-unfortunate-events-taught-me-that-grief-cant-be-understood-but-can-be-managed-267786

How banks affect the environment and the role your money plays in it

Source: The Conversation – UK – By Styliani Panetsidou, Assistant Professor of Finance, Coventry University

Inside Creative House/Shutterstock

When you think about your environmental footprint, what comes to mind first? Maybe the flights you take, the car you drive or whether you choose the train instead. Perhaps it is the plastic you try to avoid, the clothes you buy or the food on your plate. But what about your money – how often do you think about where it is kept and what it supports?

Banks are a part of our everyday lives. We use them to receive salaries, make transactions, pay bills or take out loans and mortgages. Yet behind every transaction lies a financial system that quietly shapes not only our economy but also – less visibly – our planet. The way banks operate can influence which industries thrive, which decline and how businesses affect the environment.

Banks worldwide function on what is called “fractional reserve banking”. Under this system, when we make a deposit the money is not simply stored in a vault. Banks use most deposits to issue loans – for housing, businesses or infrastructure – keeping only a small portion as reserves.

Some central banks require a fraction of deposits to be held as minimum reserves, but many countries, including the UK and the US, no longer impose such a requirement. As a result, banks decide how much of their deposits to hold as reserves, while the remainder finances lending to borrowers.
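As a simple illustration of that arithmetic (with made-up numbers, not any real bank’s reserve policy), the fractional-reserve mechanism can be sketched in a few lines of Python:

```python
# Illustrative sketch of fractional-reserve lending.
# The deposit amount and reserve ratio below are hypothetical.

def lendable(deposit: float, reserve_ratio: float) -> float:
    """Amount of a deposit available for lending after reserves are held back."""
    return deposit * (1 - reserve_ratio)

# A £1,000 deposit at a bank that chooses to hold 10% in reserve
# leaves £900 to finance loans for housing, businesses or energy projects.
print(lendable(1_000, 0.10))
```

The point is not the exact ratio, which each bank now largely sets for itself, but that most of what is deposited goes straight back out as credit, and the bank chooses where.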

But decisions about lending are powerful. Since banks can decide where credit goes, they can also influence where new money enters the economy. To put it simply, lending for housing can expand the property market, financing renewable energy can support low-carbon infrastructure, while funding coal mines or oil and gas extraction may risk locking in future carbon emissions over decades.

These choices affect which sectors see lower borrowing costs and greater capital flows. Banks serve as stewards of economic growth and, as such, as stewards of environmental impact.

The world’s biggest banks still pump more money into fossil fuels than renewables.
Frode Koppang/Shutterstock

Yet a large share of bank lending goes to carbon-intensive sectors. For example, between 2021 and 2024, the 65 largest banks worldwide allocated around US$3.29 trillion (£2.45 trillion) to fossil fuels, compared with about US$1.37 trillion to sustainable power including solar, wind and related infrastructure.

Similarly, BloombergNEF’s recent Energy Supply Banking Ratio shows that for every dollar that the world’s leading banks invest in oil, natural gas or coal, only 89 cents are invested in low-carbon energy companies. Even in the face of the climate crisis, green financing still lags behind.

Does it matter where we bank?

Banks have traditionally favoured fossil fuel projects due to the sector’s strong profitability and reliable credit ratings. However, as more capital flows into renewable projects, financing costs fall and perceived risks drop, which could accelerate the low-carbon transition.

With this in mind, perhaps it is time to consider whether the bank we select could subtly influence environmental outcomes.

Individuals might feel small compared with the might of the banking sector, but they really could influence these dynamics through their choices. Most people would assume that their deposits play only a minor role, but collectively they represent vast sums of money.

To illustrate this, in August 2025 alone, UK households’ deposits with banks and building societies increased by £5.4 billion, following a net increase of £7.1 billion in July 2025. These deposits would include funds in current accounts, savings accounts and ISAs.

The sums involved are huge, yet our banking decisions are rarely framed as environmental ones – even though they are part of the broader system that directs capital flows. Each depositor’s choice contributes, however modestly, to the overall pattern of where credit flows.

An individual account may not shift global outcomes on its own. But many small choices, made by millions of people over time, can shape incentives and expectations. Understanding how banks operate, what they finance and how transparent they are, is another way our financial decisions intersect with climate realities.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. How banks affect the environment and the role your money plays in it – https://theconversation.com/how-banks-affect-the-environment-and-the-role-your-money-plays-in-it-267661

Why we used to sleep in two segments – and how the modern shift changed our sense of time

Source: The Conversation – UK – By Darren Rhodes, Lecturer in Cognitive Psychology and Environmental Temporal Cognition Lab Director, Keele University, Keele University

Albert Joseph Moore/Shutterstock

Continuous sleep is a modern habit, not an evolutionary constant, which helps explain why many of us still wake at 3am and wonder if something’s wrong. It might help to know that this is a deeply human experience.

For most of human history, a continuous eight-hour snooze was not the norm. Instead, people commonly slept in two shifts each night, often called a “first sleep” and “second sleep.” Each of these sleeps lasted several hours, separated by a gap of wakefulness for an hour or more in the middle of the night. Historical records from Europe, Africa, Asia and beyond describe how, after nightfall, families would go to bed early, then wake around midnight for a while before returning to sleep until dawn.

Breaking the night into two parts probably changed how time felt. The quiet interval gave nights a clear middle, which can make long winter evenings feel less continuous and easier to manage.

The midnight interval was not dead time; it was noticed time, which shapes how long nights are experienced. Some people would get up to tend to chores like stirring the fire or checking on animals. Others stayed in bed to pray or contemplate dreams they’d just had. Letters and diaries from pre-industrial times mention people using the quiet hours to read, write or even socialise quietly with family or neighbours. Many couples took advantage of this midnight wakefulness for intimacy.

Literature from as far back as the ancient Greek poet Homer and the Roman poet Virgil contains references to an “hour which terminates the first sleep”, indicating how commonplace the two-shift night was.

How we lost the ‘second sleep’

The disappearance of the second sleep happened over the past two centuries due to profound societal changes. Artificial lighting is one of them. In the 1700s and 1800s, first oil lamps, then gas lighting, and eventually electric light, began turning night into more usable waking time. Instead of going to bed shortly after sunset, people started staying up later into the evening under lamplight.

Biologically, bright light at night also shifted our internal clocks (our circadian rhythm) and made our bodies less inclined to wake after a few hours of sleep. Light timing matters. Ordinary “room” light before bedtime suppresses and delays melatonin, which pushes the onset of sleep later.

The Industrial Revolution transformed not just how people worked but how they slept. Factory schedules encouraged a single block of rest. By the early 20th century, the idea of eight uninterrupted hours had replaced the centuries-old rhythm of two sleeps.

In multi-week sleep studies that simulate long winter nights in darkness and remove clocks and evening light, people often end up adopting two sleeps with a calm waking interval. A 2017 study of a Madagascan agricultural community without electricity found people still mostly slept in two segments, rising at about midnight.

Dreaming of a second sleep?
John Singer Sargent/Shutterstock

Long, dark winters

Light sets our internal clock and influences how fast we feel time passing. When those cues fade, as in winter or under artificial lighting, we drift.

In winter, later and weaker morning light makes circadian alignment harder. Morning light is particularly important for regulating circadian rhythms because it contains a higher amount of blue light, which is the most effective wavelength for stimulating the body’s production of cortisol and suppressing melatonin.

In time-isolation labs and cave studies, people have lived for weeks without natural light or clocks, or even lived in constant darkness. Many people in these studies miscounted the passing of days, showing how easily time slips without light cues.

Similar distortions occur in the polar winter, where the absence of sunrise and sunset can make time feel suspended. People native to high latitudes, and long-term residents with stable routines, often cope better with polar light cycles than short-term visitors, but this varies by population and context. Residents adapt better when their community shares a regular daily schedule, for instance. And a 1993 study of Icelandic populations and their descendants who emigrated to Canada found these people showed unusually low winter seasonal affective disorder (SAD) rates. The study suggested genetics may help this population cope with the long Arctic winter.

Research from the Environmental Temporal Cognition Lab at Keele University, where I am the director, shows how strong this link between light, mood and time perception is. In 360-degree virtual reality, we matched scenes from the UK and Sweden for setting, light-level cues and time of day. Participants viewed six clips of about two minutes each. They judged the two-minute intervals as lasting longer in evening or low-light scenes than in daytime or brighter scenes. The effect was strongest in participants who reported low mood.

A new perspective on insomnia

Sleep clinicians note that brief awakenings are normal, often appearing at stage transitions, including near REM sleep, which is associated with vivid dreaming. What matters is how we respond.

The brain’s sense of duration is elastic: anxiety, boredom, or low light tend to make time stretch, while engagement and calm can compress it. Without that interval where you got up and did something or chatted with your partner, waking at 3am often makes time feel slow. In this context, attention focuses on time and the minutes that pass may seem longer.

Cognitive behavioural therapy for insomnia (CBT-I) advises people to leave bed after about 20 minutes awake, do a quiet activity in dim light such as reading, then return when sleepy.

Sleep experts also suggest covering the clock and letting go of time measurement when you’re struggling to sleep. A calm acceptance of wakefulness, paired with an understanding of how our minds perceive time, may be the surest way to rest again.

The Conversation

Darren Rhodes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why we used to sleep in two segments – and how the modern shift changed our sense of time – https://theconversation.com/why-we-used-to-sleep-in-two-segments-and-how-the-modern-shift-changed-our-sense-of-time-267909

Climate change is making cities hotter. Here’s how planting trees can help

Source: The Conversation – Canada – By Lingshan Li, PhD candidate, Department of Geography, Planning and Environment, Concordia University

Canada’s climate is warming twice as fast as the global average, and many cities will experience at least four times as many extreme heat events (days above 30 C) per year in the coming decades.

In Québec alone, elevated summer temperatures are associated with about 470 deaths, 225 hospitalizations, 36,000 emergency room visits, 7,200 ambulance transports and 15,000 calls to a health hotline every year.

To tackle the climate change crisis, the government of Canada launched the 2 Billion Trees program, which aims to plant two billion trees over a period of 10 years, by 2031.

But such ambitions come with important questions:

  • Where and how should these trees be planted?
  • How should the trees be managed to provide more cooling for people?
  • How can the cooling be directed to the most underserved communities?

Colleagues and I recently published a study of Montréal that explores how urban green spaces can reduce surface temperatures and help promote environmental justice. We found that even small increases in green space can make a notable difference to city temperatures.




Read more:
Urban trees vs. cool roofs: What’s the best way for cities to beat the heat?


Why the placement of trees is important

If you’ve ever passed under the shade of a tree on a hot summer day and felt the temperature drop, you know how valuable they are in cities. Both the amount and layout of urban green spaces affect how much they can cool a city.

The way trees, parks and other green areas are arranged can change how they provide shade and release moisture into the air, which together determine how much they can lower the surrounding temperature.

Where urban green spaces are located is also related to an important social issue: environmental justice. Unequally distributed green spaces can restrict residents’ access to cooling in certain neighbourhoods, contributing to social inequalities within a city.

Those living in low-income neighbourhoods, who feel the harshest impacts of urban heat, can struggle to find green spaces where they can cool off. Young children and the elderly are also more susceptible to the dangers of prolonged heat exposure.

Municipal governments need a better view of how well cooling from urban green infrastructure reaches these vulnerable groups, and of the factors driving its unequal distribution.

What we found

Using satellite imagery and laser imaging, we found that having more trees, grass and shrubs in an area can notably reduce temperatures. We developed a model to estimate the cooling effect provided by urban green infrastructure based on several indicators that reflect the quantity and quality of the urban greenery.

Our model showed that a 10 per cent increase in tree coverage can lower land surface temperature by approximately 1.4 C. A similar increase in shrubs and grass lowers temperatures by about 0.8 C.

The results also indicated that large, continuous groups of trees cool their surroundings better than small, scattered patches. A 10 per cent increase in the aggregation level of tree clusters (the area of the largest patch of trees divided by the total area of trees within a landscape unit) can lower land surface temperature by about 0.2 C.
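Taken together, those averages imply a simple linear rule of thumb. The sketch below restates the headline coefficients in Python; it is a back-of-the-envelope reading of the figures above, not the study’s actual fitted model:

```python
# Rough linear rule of thumb from the reported averages.
# Not the study's fitted model -- just the headline coefficients restated.

def estimated_cooling_c(tree_cover_pts: float,
                        shrub_grass_pts: float,
                        aggregation_pts: float) -> float:
    """Estimated drop in land surface temperature (C) for given
    percentage-point increases in each greenery indicator."""
    return (0.14 * tree_cover_pts      # ~1.4 C per 10-point rise in tree cover
            + 0.08 * shrub_grass_pts   # ~0.8 C per 10 points of shrubs/grass
            + 0.02 * aggregation_pts)  # ~0.2 C per 10 points of aggregation

# A 10-point rise in tree cover alone gives roughly 1.4 C of cooling.
print(round(estimated_cooling_c(10, 0, 0), 2))
```

Even on this crude reading, tree cover does most of the work, with shrubs, grass and patch aggregation contributing smaller effects.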

We also found that the cooling provided by green spaces in many parts of Montréal does not meet the needs of local residents. This mismatch varies a lot between census tracts.

Areas in the city abundant with green spaces include boroughs like Le Plateau-Mont-Royal, Outremont, L’Île-Bizard–Sainte-Geneviève and the village of Senneville. Meanwhile, areas such as Montréal-Est, Saint-Leonard and Saint-Laurent have the least amount of green space.

In addition, areas like Pointe-Claire and Montréal-Nord have good green space, but their mismatch index is still low because many vulnerable people live there. The mismatch index is calculated as the supply index minus the demand index, so a higher demand index leads to a lower mismatch index.
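To make that arithmetic concrete, here is a hypothetical sketch of the index; the 0-to-1 scaling and the example values are assumptions for illustration, and the study’s real supply and demand indices are built from several indicators:

```python
# Hypothetical illustration of the supply-demand mismatch index.
# The 0-to-1 scaling and the example values are assumed for illustration only.

def mismatch_index(supply: float, demand: float) -> float:
    """Positive: cooling supply exceeds demand. Negative: demand outstrips supply."""
    return supply - demand

# Two areas with identical green-space supply but different vulnerability:
print(round(mismatch_index(supply=0.7, demand=0.3), 2))  # well served
print(round(mismatch_index(supply=0.7, demand=0.8), 2))  # underserved despite greenery
```

This is why an area can be fairly green yet still score poorly: a large vulnerable population drives its demand index up and pulls the mismatch index down.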

Neighbourhoods with higher median incomes and more highly educated people were mostly associated with positive supply-demand values. That indicates their supply of cooling services as provided by urban green spaces was higher than their demand.

In contrast, census tracts with higher proportions of racialized people and people with lower levels of education tended to lack enough green spaces where residents can cool off.

Vulnerable people (young and elderly individuals) with a higher socio-economic status received more cooling services provided by the urban green spaces. In contrast, those on the other end of the socio-economic spectrum were more likely to struggle to easily find a place to cool off.

What we can do in the future

For cities with humid summers like Montréal’s, urban planners who want to reduce daytime heat should consolidate tree patches into large, continuous areas where possible.

It is also helpful to design smaller-scale green spaces with more irregularly shaped tree patches and create enhanced connectivity, especially for grass, to support small-scale cooling.

In Montréal and other cities where green spaces are unequally distributed, municipal officials should develop ranked action plans for greening efforts that consider environmental justice and prioritize areas where the need for cooling is greatest.

The Conversation

Lingshan Li receives funding from The Trottier Family Foundation and the Natural Sciences and Engineering Research Council of Canada.

ref. Climate change is making cities hotter. Here’s how planting trees can help – https://theconversation.com/climate-change-is-making-cities-hotter-heres-how-planting-trees-can-help-267827

AI chatbots are becoming everyday tools for mundane tasks, use data shows

Source: The Conversation – USA – By Jeanne Beatrix Law, Professor of English, Kennesaw State University

The average person is more likely to use AI to come up with a meal plan than program a new app. Oscar Wong/Moment via Getty Images

Artificial intelligence is fast becoming part of the furniture. A decade after IBM’s Watson triumphed on “Jeopardy!,” generative AI models are in kitchens and home offices. People often talk about AI in science fiction terms, yet the most consequential change in 2025 may be its banal ubiquity.

To appreciate how ordinary AI use has become, it helps to remember that this trend didn’t start with generative chatbots. A 2017 Knowledge at Wharton newsletter documented how deep learning algorithms were already powering chatbots on social media and photo apps’ facial recognition functions. Digital assistants such as Siri and Alexa were performing everyday tasks, and AI-powered image generators could create images that fooled 40% of viewers.

When ChatGPT became publicly available on Nov. 30, 2022, the shift felt sudden, but it was built on years of incremental integration. AI’s presence is now so mundane that people consult chatbots for recipes, use them as study partners and rely on them for administrative chores. As a writer and professor who studies ways that generative AI can be an everyday collaborator, I find that recent usage reports show how AI has been woven into everyday life. (Full disclosure: I am a member of OpenAI’s Educator Council, an uncompensated group of higher education faculty who provide feedback to OpenAI on educational use cases.)

Who’s using ChatGPT and why?

Economists at OpenAI and Harvard analyzed 1.5 million ChatGPT conversations from November 2022 through July 2025. Their findings show that adoption has broadened beyond early users: It’s being used all over the world, among all types of people. Adoption has grown fastest in low- and middle-income countries, and growth rates in the lowest-income countries are now more than four times those in the richest nations.

Most interactions revolve around mundane activities. Three-quarters of conversations involve practical guidance, information seeking and writing. These categories cover activities such as getting advice on cooking an unusual type of food, finding the nearest pharmacy and getting feedback on email drafts. More than 70% of ChatGPT use is for nonwork tasks, demonstrating AI’s role in people’s personal lives. The economists found that 73% of messages were not related to work as of June 2025, up from 53% in June 2024.

Claude and the geography of adoption

Anthropic’s economic index paints a similar picture of uneven AI adoption. Researchers at the company tracked users’ conversations with the company’s Claude AI chatbot relative to working-age population. The data shows sharp contrasts between nations. Singapore’s per-capita use is 4.6 times higher than expected based on its population size, and Canada’s is 2.9 times higher. India and Nigeria, meanwhile, use Claude at only a quarter of predicted levels.

In the United States, use reflects local economies, with activity tied to regional strengths: tech in California, finance in Florida and documentation in D.C. In lower-use countries, more than half of Claude’s activity involves programming. In higher-use countries, people apply it across education, science and business. High-use countries favor humans working iteratively with AI, such as refining text, while low-use countries rely more on delegating full tasks, such as finding information.

It’s important to note that OpenAI reports between 400 million and 700 million weekly active users in 2025, while third-party analytics estimate Claude at roughly 30 million monthly active users during a similar time period. For comparison, Gemini had approximately 350 million monthly active users and Microsoft reported in July 2025 more than 100 million monthly active users for its Copilot apps. Perplexity’s CEO reported in an interview that the company’s language AI has a “user base of over 30 million active users.”

While these metrics all come from a similar period, mid-2025, the differences in reporting matter, particularly weekly versus monthly active users. By any measure, though, ChatGPT’s user base is by far the largest, making it a commonly used generative AI tool for everyday tasks.

Everyday tool

So, what do mundane uses of AI look like at home? Consider these scenarios:

  • Meal planning and recipes: A parent asks ChatGPT for vegan meal ideas that use leftover kale and mushrooms, saving time and reducing waste.
  • Personal finance: ChatGPT drafts a budget, suggests savings strategies or explains the fine print of a credit card offer, translating legalese into plain language.
  • Writing support: Neurodivergent writers use ChatGPT to organize ideas and scaffold drafts. A writer with ADHD can upload notes and ask the model to group them into themes, then expand each into a paragraph while keeping the writer’s tone and reasoning. This helps reduce cognitive overload and supports focus, while the writer retains their own voice.

These scenarios illustrate that AI can help with mundane decisions, act as a sounding board and support creativity. The help with mundane tasks can be a big lift: By handling routine planning and information retrieval, AI frees people to focus on empathy, judgment and reflection.

From extraordinary to ordinary tool

AI has transitioned from a futuristic curiosity to an everyday co-pilot, with voice assistants and generative models helping people write, cook and plan.

Inviting AI to our kitchen tables not as a mysterious oracle but as a helpful assistant means cultivating AI literacy and learning prompting techniques. It means recognizing AI’s strengths, mitigating its risks and shaping a future where intelligence — human and artificial — works for everyone.

The Conversation

Jeanne Beatrix Law serves on the OpenAI Educator Council, an uncompensated group of higher education faculty who provide feedback to OpenAI on educational use cases and occasionally tests models for those use cases.

ref. AI chatbots are becoming everyday tools for mundane tasks, use data shows – https://theconversation.com/ai-chatbots-are-becoming-everyday-tools-for-mundane-tasks-use-data-shows-266670

Solar storms have influenced our history – an environmental historian explains how they could also threaten our future

Source: The Conversation – USA – By Dagomar Degroot, Associate Professor of Environmental History, Georgetown University

Coronal mass ejections from the Sun can cause geomagnetic storms that may damage technology on Earth. NASA/GSFC/SDO

In May 2024, part of the Sun exploded.

The Sun is an immense ball of superheated gas called plasma. Because the plasma is conductive, magnetic fields loop out of the solar surface. Since different parts of the surface rotate at different speeds, the fields get tangled. Eventually, like rubber bands pulled too tight, they can snap – and that is what they did last year.

These titanic plasma explosions, also known as solar flares, each unleashed the energy of a million hydrogen bombs. Parts of the Sun’s magnetic field also broke free as magnetic bubbles loaded with billions of tons of plasma.

These bubbles, called coronal mass ejections, or CMEs, crashed through space at around 6,000 times the speed of a commercial jetliner. After a few days, they smashed one after another into the magnetic field that envelops Earth. The plasma in each CME surged toward us, creating brilliant auroras and powerful electrical currents that rippled through Earth’s crust.

A coronal mass ejection erupting from the Sun.

You might not have noticed. Just like the opposite poles of fridge magnets have to align for them to snap together, the poles of the magnetic field of Earth and the incoming CMEs have to line up just right for the plasma in the CMEs to reach Earth. This time they didn’t, so most of the plasma sailed off into deep space.

Humans have not always been so lucky. I’m an environmental historian and author of the new book “Ripples on the Cosmic Ocean: An Environmental History of Our Place in the Solar System.”

While writing the book, I learned that a series of technological breakthroughs – from telegraphs to satellites – have left modern societies increasingly vulnerable to the influence of solar storms, meaning flares and CMEs.

Since the 19th century, these storms have repeatedly upended life on Earth. Today, there are hints that they threaten the very survival of civilization as we know it.

The telegraph: A first warning

On the morning of Sept. 1, 1859, two young astronomers, Richard Carrington and Richard Hodgson, became the first humans to see a solar flare. To their astonishment, it was so powerful that, for two minutes, it far outshone the rest of the Sun.

About 18 hours later, brilliant, blood-red auroras flickered across the night sky as far south as the equator, while newly built telegraph lines shorted out across Europe and the Americas.

The Carrington Event, as it was later called, revealed that the Sun’s environment could violently change. It also suggested that emerging technologies, such as the electrical telegraph, were beginning to link modern life to the extraordinary violence of the Sun’s most explosive changes.

For more than a century, these connections amounted to little more than inconveniences, like occasional telegraph outages, partly because no solar storm rivaled the power of the Carrington Event. But another part of the reason was that the world’s economies and militaries were only gradually coming to rely more and more on technologies that turned out to be profoundly vulnerable to the Sun’s changes.

A brush with Armageddon

Then came May 1967.

Soviet and American warships collided in the Sea of Japan, American troops crossed into North Vietnam and the Middle East teetered on the brink of the Six-Day War.

It was only a frightening combination of new technologies that kept the United States and Soviet Union from all-out war; nuclear missiles could now destroy a country within minutes, but radar could detect their approach in time for retaliation. A direct attack on either superpower would be suicidal.

Several buildings on an icy plain, with green lights in the sky above.
An aurora – an event created by a solar storm – over Pituffik Space Base, formerly Thule Air Base, in Greenland in 2017. In 1967, nuclear-armed bombers prepared to take off from this base.
Air Force Space Command

Suddenly, on May 23, a series of violent solar flares blasted the Earth with powerful radio waves, knocking out American radar stations in Alaska, Greenland and England.

Forecasters had warned officers at the North American Air Defense Command, or NORAD, to expect a solar storm. But the scale of the radar blackout convinced Air Force officers that the Soviets were responsible. It was exactly the sort of thing the USSR would do before launching a nuclear attack.

American bombers, loaded with nuclear weapons, prepared to retaliate. The solar storm had so scrambled their wireless communications that it might have been impossible to call them back once they took off. In the nick of time, forecasters used observations of the Sun to convince NORAD officers that a solar storm had jammed their radar. We may be alive today because they succeeded.

Blackouts, transformers and collapse

With that brush with nuclear war, solar storms had become a source of existential risk, meaning a potential threat to humanity’s existence. Yet the magnitude of that risk only came into focus in March 1989, when 11 powerful flares preceded the arrival of back-to-back coronal mass ejections.

For more than two decades, North American utility companies had constructed a sprawling transmission system that relayed electricity from power plants to consumers. In 1989, this system turned out to be vulnerable to the currents that coronal mass ejections channeled through Earth’s crust.

Several large pieces of metal machinery lined up in an underground facility.
An engineer performs tests on a substation transformer.
Ptrump16/Wikimedia Commons, CC BY-SA

In Quebec, the crystalline bedrock under the province does not easily conduct electricity. Rather than flow through the rock, currents instead surged into the world’s biggest hydroelectric transmission system. It collapsed, leaving millions without power in subzero weather.

Repairs revealed something disturbing: The currents had damaged multiple transformers, which are enormous customized devices that transfer electricity between circuits.

Transformers can take many months to replace. Had the 1989 storm been as powerful as the Carrington Event, hundreds of transformers might have been destroyed. It could have taken years to restore electricity across North America.

Solar storms: An existential risk

But was the Carrington Event really the worst storm that the Sun can unleash?

Scientists assumed that it was until, in 2012, a team of Japanese scientists found evidence of an extraordinary burst of high-energy particles in the growth rings of trees dated to the eighth century CE. The leading explanation for them: huge solar storms dwarfing the Carrington Event. Scientists now estimate that these “Miyake Events” happen once every few centuries.

Astronomers have also discovered that, about once a century, Sun-like stars can erupt in super flares up to 10,000 times more powerful than the strongest solar flares ever observed. Because the Sun is older and rotates more slowly than many of these stars, its super flares may be much rarer, occurring perhaps once every 3,000 years.

Nevertheless, the implications are alarming. Powerful solar storms once influenced humanity only by creating brilliant auroras. Today, civilization depends on electrical networks that allow commodities, information and people to move across our world, from sewer systems to satellite constellations.

What would happen if these systems suddenly collapsed on a continental scale for months, even years? Would millions die? And could a single solar storm bring that about?

Researchers are working on answering these questions. For now, one thing is certain: to protect these networks, scientists must monitor the Sun in real time. That way, operators can reduce or reroute the electricity flowing through grids when a CME approaches. A little preparation may prevent a collapse.

Fortunately, satellites and telescopes on Earth today keep the Sun under constant observation. Yet in the United States, recent efforts to reduce NASA’s science budget have cast doubt on plans to replace aging Sun-monitoring satellites. Even the Daniel K. Inouye Solar Telescope, the world’s premier solar observatory, may soon shut down.

These potential cuts are a reminder of our tendency to discount existential risks – until it’s too late.

The Conversation

Dagomar Degroot has received funding from NASA.

ref. Solar storms have influenced our history – an environmental historian explains how they could also threaten our future – https://theconversation.com/solar-storms-have-influenced-our-history-an-environmental-historian-explains-how-they-could-also-threaten-our-future-258668

The Glozel affair: A sensational archaeological hoax made science front-page news in 1920s France

Source: The Conversation – USA – By Daniel J. Sherman, Lineberger Distinguished Professor of Art History and History, University of North Carolina at Chapel Hill

All eyes were on a commission of professional archaeologists when they visited Glozel. Agence Meurisse/BnF Gallica

In early November 1927, the front pages of newspapers all over France featured photographs not of the usual politicians, aviators or sporting events, but of a group of archaeologists engaged in excavation. The slow, painstaking work of archaeology was rarely headline news. But this was no ordinary dig.

yellowed newspaper page with photos of archaeologists at dig site
A front-page spread in the Excelsior newspaper from Nov. 8, 1927, features archaeologists at work in the field with the headline ‘What the learned commission found at the Glozel excavations.’
Excelsior/BnF Gallica

The archaeologists pictured were members of an international team assembled to assess the authenticity of a remarkable site in France’s Auvergne region.

Three years before, farmers plowing their land at a place called Glozel had come across what seemed to be a prehistoric tomb. Excavations by Antonin Morlet, an amateur archaeologist from Vichy, the nearest town of any size, yielded all kinds of unexpected objects. Morlet began publishing the finds in late 1925, immediately producing lively debate and controversy.

Certain characteristics of the site placed it in the Neolithic era, approximately 10,000 B.C.E. But Morlet also unearthed artifact types thought to have been invented thousands of years later, notably pottery and, most surprisingly, tablets or bricks with what looked like alphabetic characters. Some scholars cried foul, including experts on the inscriptions of the Phoenicians, the people thought to have invented the Western alphabet no earlier than 2000 B.C.E.

Was Glozel a stunning find with the capacity to rewrite prehistory? Or was it an elaborate hoax? By late 1927, the dispute over Glozel’s authenticity had become so strident that an outside investigation seemed warranted.

The Glozel affair now amounts to little more than a footnote in the history of French archaeology. As a historian, I first came across brief descriptions of it in surveys of the discipline. With a bit of investigating, it wasn’t hard to find first-person accounts of the affair.

sketch of seven lines of alphabet-like notations on two rectangles
Examples of the kinds of inscriptions found at the Glozel site, as recorded by scholar Salomon Reinach.
‘Éphémérides de Glozel’/Wikimedia Commons

But it was only when I began studying the private papers of one of the leading contemporary skeptics of Glozel, an archaeologist and expert on Phoenician writing named René Dussaud, that I realized the magnitude and intensity of this controversy. After publishing a short book showing that the so-called Glozel alphabet was a mishmash of previously known early alphabetic writing, in October 1927 Dussaud took out a subscription to a clipping service to track mentions of the Glozel affair; in four months he received over 1,500 clippings, in 10 languages.

The Dussaud clippings became the basis for the account of Glozel in my recent book, “Sensations.” That the contours of the affair first became clear to me in a pile of yellowed newspaper clippings is appropriate, because Glozel embodies a complex relationship between science and the media that persists today.

Front page of a newspaper with images of people digging and holding up finds
The newspaper Le Matin, which vigorously promoted Glozel’s authenticity, even sponsored its own dig near the site, led by a journalist.
Le Matin/BnF Gallica

Serious scientists in the trenches

The international commission’s front-page visit to Glozel marked a watershed in the controversy, even if it did not resolve it entirely.

In a painstaking report published in the scholarly Revue anthropologique just before Christmas 1927, the commission recounted the several days of digging it conducted, provided detailed plans of the site, described the objects it unearthed and carefully explained its conclusion that the site was “not ancient.”

shelves with various clay vessels and shards piled on them
Recovered objects displayed in the Fradins’ museum in 1927.
Agence de presse Meurisse/Wikimedia Commons

The report emphasized the importance of proper archaeological method. Early on, the commissioners noted that they were “experienced diggers, all with past fieldwork to their credit,” in different chronological subfields of archaeology. In contrast, they noted that the Glozel site showed clear signs of a lack of order and method.

In their initial meeting in Vichy, the assembled archaeologists agreed that they would give no interviews during their visit to Glozel and would not speak to the press afterward. But, aware of “certain tendentious articles published by a few newspapers,” the visitors issued a communiqué stating that they would neither confirm nor deny any press reports. Their scholarly publication would be their final word on the “non-ancientness” of the site.

The distinction between true science – what the archaeologists were practicing – and the media seemed absolute.

Sensationalist coverage, but careful details, too

And yet matters were not so simple.

Many newspapers devoted extensive and careful coverage to Glozel. They offered explanations of archaeological terminology. They explained the larger stakes of the controversy, which, beyond the invention of the alphabet, involved nothing less than the direction of the development of Western civilization itself, whether from Mesopotamia in the east to Europe in the west or the reverse.

Even articles about seemingly trivial matters, such as the work clothes the archaeologists donned to perform their test excavations at Glozel, served to reinforce the larger point the commissioners made in their report. In contrast to the proper suits and ties they wore for formal photographs marking their arrival, the visitors all put on blue overalls, which for one newspaper “gave them the air of apprentice locksmiths or freshly decked-out electricians.”

The risk, apparent in this jocular reference, of losing the social standing afforded them by their professional degrees and education was worth taking because it drove home these archaeologists’ devotion to their discipline, which their report described as “a daily moral obligation.”

seven people dressed formally standing against a building
Morlet, far left, and the international commission in front of the Fradins’ museum in November 1927. Garrod is third from the left.
Agence Meurisse

Skeptical scientists did rely on journalism

If archaeologists continued to mistrust the many newspapers that sensationalized Glozel, its stakes and their work in general, they could not escape the popular media entirely, so they confided in a few journalists at papers they considered responsible.

Shortly after the publication of the report, which was summarized and excerpted in the daily press, original excavator Morlet accused Dorothy Garrod, the only woman on the commission, of having tampered with the site. A group of archaeologists responded on her behalf, explaining what she had actually been doing and defending her professionalism – in the press.

At the most basic level, media coverage recorded the standard operating procedures of archaeology and its openness to outside scrutiny. This was in contrast to Morlet’s excavations, which limited access only to believers in the authenticity of Glozel.

Under the watchful eyes of reporters and photographers, the outside archaeologists investigating Glozel knew quite well that they were engaged in a kind of performance, one in which their discipline, as much as this particular discovery, was on trial.

Like the signs in my neighborhood proclaiming that “science is real,” the international commission depended on and sought to fortify the public’s confidence in the integrity of scientific inquiry. To do that, it needed the media even while expressing a healthy skepticism about it. It’s a balancing act that persists in today’s era of “trusting science.”

The Conversation

This article draws on research funded by the Institut d’Études Avancées (Paris), the Institute for Advanced Study (Princeton), and the National Endowment for the Humanities, as well as Daniel Sherman’s employer, the University of North Carolina at Chapel Hill.

ref. The Glozel affair: A sensational archaeological hoax made science front-page news in 1920s France – https://theconversation.com/the-glozel-affair-a-sensational-archaeological-hoax-made-science-front-page-news-in-1920s-france-260967

AI reveals which predators chewed ancient humans’ bones – challenging ideas on which ‘Homo’ species was the first tool-using hunter

Source: The Conversation – USA – By Manuel Domínguez-Rodrigo, Professor of Anthropology, Rice University

If Homo habilis was often chomped by leopards, it probably wasn’t the top predator. Made with AI (DALL-E 4)

Almost 2 million years ago, a young ancient human died beside a spring near a lake in what is now Tanzania, in eastern Africa. After archaeologists uncovered his fossilized bones in 1960, they used them to define Homo habilis – the earliest known member of our own genus.

Paleoanthropologists define the first examples of the genus Homo based largely on their bigger brains – and, sometimes, smaller teeth – compared with other, earlier ancestors such as the australopithecines – the most famous of these being Lucy. There were at least three types of early humans: Homo habilis, Homo rudolfensis and the best documented species, Homo erectus. At least one of them created sites now in the archaeological record, where they brought and shared food, and made and used some of the earliest stone tools.

These archaeological sites date to between 2.6 and 1.8 million years ago. The artifacts within them suggest greater cognitive complexity in early Homo than documented among any nonhuman primate. For example, at Nyayanga, a site in Kenya, anthropologists recently found that early humans were using tools they transported over distances of up to 8 miles (13 kilometers). This action indicates forethought and planning.

Traditionally, paleoanthropologists believed that Homo habilis, as the earliest big-brained humans, was responsible for the earliest sites with tools. The idea has been that Homo habilis was the ancestor of later and even bigger-brained Homo erectus, whose descendants eventually led to us.

This narrative made sense when the oldest known Homo erectus remains were younger than 1.6 million years old. But given recent discoveries, this seems like a shaky foundation.

In 2015, my team discovered a 1.85 million-year-old hand bone at Olduvai Gorge, the same place the original Homo habilis had been found. But unlike the hand of that Homo habilis juvenile, this fossil looked like it belonged to a larger, more modern, fully land-based rather than tree-based human species: Homo erectus.

Over the past decade, new finds have continued to push back the earliest dates for Homo erectus: about 2 million years ago in South Africa, Kenya and Ethiopia. Taken together, these discoveries reveal that H. erectus is slightly older than the known H. habilis fossils. We cannot simply assume that H. habilis gave rise to H. erectus. Instead, the human family tree looks far bushier than we once thought.

What do all these finds suggest? Only one Homo species is our likely ancestor, and probably only one can be responsible for the complex behaviors revealed at the Olduvai Gorge sites. My colleagues and I hit on a way to test whether Homo habilis was top dog at Olduvai Gorge, so to speak, based on whether they were the hunters or the hunted.

Who was hunting whom?

At Olduvai Gorge, there is overwhelming evidence that early humans were consuming animals as big as a gazelle or even a zebra. Not only did they hunt, but they repeatedly brought these animals back to the same location for communal consumption. This is the concept of a “central provisioning place,” much like a campsite or home today. Dating to 1.85 million years ago, this is the oldest evidence of frequent meat-eating – and of early humans regularly acting as predators rather than prey.

All animals occupy a position on a food web, from top to lower ranks. Top-ranking predators, such as lions, are usually not preyed upon by lower ranking carnivores, such as hyenas.

If Homo habilis was acquiring large animal carcasses, either by hunting or by chasing lions away from their own kills, it seems logical that these hominids could effectively cope with predation risks. That is, a hunter usually isn’t hunted.

In African savannas, apex predators like lions do not usually die from other predator attacks. Humans today also occupy a top predatory niche: For example, Hadza hunter-gatherers in Tanzania not only hunt game, but also fend off lions from their kills, and successfully defend themselves from attacks by other predators, such as leopards.

But, if Homo habilis was not yet a top predator, then you would expect them to have occasionally been prey to lower-on-the-food-chain carnivorous cats – such as leopards – who often hunt primates.

Most known human fossils at this stage of evolution do bear traces of carnivore damage, including the two best preserved H. habilis fossils from Olduvai Gorge. Was it caused after death, by a scavenging carnivore? Or did a big cat at the top of the food chain kill these early humans?

My colleagues and I set out to determine which predators were getting their teeth on H. habilis, and whether that happened before or after the ancient humans died.

AI suggests H. habilis wasn’t an apex predator

Here’s where artificial intelligence comes in. Using computer vision, we trained AI on hundreds of microscopic images showing tooth marks left by the main carnivores in Africa today: lions, leopards, hyenas and crocodiles. The AI learned to recognize the subtle differences between the marks made by the different predators and was able to classify the marks with high accuracy.

four different magnified craters on brownish backgrounds
Tooth marks left by the four types of carnivores recorded. A: crocodile tooth pit; B: hyena tooth pit; C: lion tooth pit; and D: leopard tooth pit.
Domínguez-Rodrigo, M., et al. Sci Rep 14, 6881 (2024)

When we combined different AI approaches, they all pointed to the same result: The tooth marks on the Homo habilis bones matched those made by leopards. The size and shape of the marks on the fossils from those two early Homo habilis individuals line up with what leopards leave today when feeding on prey.
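The classification step described above can be illustrated with a deliberately simplified sketch. The actual study used deep computer-vision models trained on microscope images; the toy example below instead uses a nearest-centroid rule on two hypothetical tooth-pit measurements (depth and width), with made-up "signature" values for each predator, just to show the idea of assigning an unknown mark to the closest known pattern.

```python
import math

# Hypothetical average tooth-pit measurements (depth mm, width mm)
# per predator. These numbers are illustrative, not real data.
signatures = {
    "lion":      (1.8, 3.5),
    "leopard":   (1.1, 2.2),
    "hyena":     (2.6, 4.8),
    "crocodile": (3.0, 2.0),
}

def classify(mark):
    """Return the predator whose signature is nearest (Euclidean) to the mark."""
    return min(signatures, key=lambda p: math.dist(signatures[p], mark))

# A mark close to the leopard signature is classified as a leopard bite.
print(classify((1.0, 2.3)))  # leopard
```

Real computer-vision classifiers learn far richer features directly from image pixels, but the underlying logic is the same: measure an unknown mark, compare it against learned patterns for each predator, and pick the best match.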

Our discovery challenges the long-standing view of Homo habilis as the first skilled toolmaker, hunter and meat-eater.

But maybe it shouldn’t be too surprising. The only complete skeleton of this species found at Olduvai Gorge belonged to a very small individual – just about 3 feet tall (less than 1 meter) – with a body that still showed features suited for climbing trees. That hardly matches the image of a hunter able to bring down large animals or steal carcasses from lions.

If it wasn’t Homo habilis performing these feats, maybe it was Homo erectus, a species with a larger body and more modern anatomy. But that opens up other mysteries for future researchers: What was Homo habilis doing at the archaeological sites of Olduvai Gorge if it was not responsible for the tools and signs of hunting we find there? Where exactly did Homo erectus come from, and how did it evolve?

My team and others will be returning to places like Olduvai Gorge to ask these questions in the years to come.

The Conversation

Manuel Domínguez-Rodrigo receives funding from the Spanish Ministry of Science and Universities

ref. AI reveals which predators chewed ancient humans’ bones – challenging ideas on which ‘Homo’ species was the first tool-using hunter – https://theconversation.com/ai-reveals-which-predators-chewed-ancient-humans-bones-challenging-ideas-on-which-homo-species-was-the-first-tool-using-hunter-266561

Why the Trump administration’s comparison of antifa to violent terrorist groups doesn’t track

Source: The Conversation – USA – By Art Jipson, Associate Professor of Sociology, University of Dayton

President Donald Trump speaks at the White House during a meeting on antifa, as Attorney General Pam Bondi, left, and Homeland Security Secretary Kristi Noem listen, on Oct. 8, 2025. AP Photo/Evan Vucci

When Homeland Security Secretary Kristi Noem compared antifa to the transnational criminal group MS-13, Hamas and the Islamic State group in October 2025, she equated a nonhierarchical, loosely organized movement of antifascist activists with some of the world’s most violent and organized militant groups.

“Antifa is just as dangerous,” she said.

It’s a sweeping claim that ignores crucial distinctions in ideology, organization and scope. Comparing these groups is like comparing apples and bricks: They may both be organizations, but that’s where the resemblance stops.

Noem’s statement echoed the logic of a September 2025 Trump administration executive order that designated antifa as a “domestic terrorist organization.” The order directs all relevant federal agencies to investigate and dismantle any operations, including the funding sources, linked to antifa.

But there is no credible evidence from the FBI or the Department of Homeland Security that supports such a comparison. Independent terrorism experts don’t see the similarities either.

Data shows that the movement can be confrontational and occasionally violent. But antifa is neither a terrorist network nor a major source of organized lethal violence.

Antifa, as understood by scholars and law enforcement, is not an organization in any formal sense. It lacks membership rolls and leadership hierarchies. It doesn’t have centralized funding.

As a scholar of social movements, I know that antifa is a decentralized movement animated by opposition to fascism and far-right extremism. It’s an assortment of small groups that mobilize around specific protests or local issues. And its tactics range from peaceful counterdemonstrations to mutual aid projects.

For example, in Portland, Oregon, local antifa activists organized counterdemonstrations against far-right rallies in 2019.

Antifa groups active in Houston during Hurricane Harvey in 2017 coordinated food, supplies and rescue support for affected residents.

No evidence of terrorism

The FBI and DHS have classified certain anarchist or anti-fascist groups under the broad category of “domestic violent extremists.” But neither agency nor the State Department has ever previously designated antifa as a terrorist organization.

The data on political violence reinforces this point.

A woman holds a yellow sign while walking with a group of people.
A woman holds a sign while protesting immigration raids in San Francisco on Oct. 23, 2025.
AP Photo/Noah Berger

A 2022 report by the Counter Extremism Project found that the overwhelming majority of deadly domestic terrorist incidents in the United States in recent years were linked to right-wing extremists. These groups include white supremacists and anti-government militias that promote racist or authoritarian ideologies. They reject democratic authority and often seek to provoke social chaos or civil conflict to achieve their goals.

Left-wing or anarchist-affiliated violence, including acts attributed to antifa-aligned people, accounts for only a small fraction of domestic extremist incidents and almost none of the fatalities. Similarly, in 2021, the George Washington University Program on Extremism found that anarchist or anti-fascist attacks are typically localized, spontaneous and lacking coordination.

By contrast, the organizations Noem invoked – Hamas, the Islamic State group and MS-13 – share structural and operational characteristics that antifa lacks.

They operate across borders and are hierarchically organized. They are also capable of sustained military or paramilitary operations. They possess training pipelines, funding networks, propaganda infrastructure and territorial control. And they have orchestrated mass-casualty attacks such as the 2015 Paris attacks and the 2016 Brussels bombings.

In short, they are military or criminal organizations with strategic intent. Noem’s claim that antifa is “just as dangerous” as these groups is not only empirically indefensible but rhetorically reckless.

Turning dissent into ‘terrorism’

So why make such a claim?

Noem’s statement fits squarely within the Trump administration’s broader political strategy that has sought to inflate the perceived threat of left-wing activism.

Casting antifa as a domestic terrorist equivalent of the Islamic State group or Hamas serves several functions.

It stokes fear among conservative audiences by linking street protests and progressive dissent to global terror networks. It also provides political cover for expanded domestic surveillance and harsher policing of protests.

Protesters, some holding signs, walk toward a building with a dome.
Demonstrators hold protest signs during a march from the Atlanta Civic Center to the Georgia State Capitol on Oct. 18, 2025, in Atlanta.
Julia Beverly/Getty Images

Additionally, it discredits protest movements critical of the right. In a polarized media environment, such rhetoric performs a symbolic purpose. It divides the moral universe into heroes and enemies, order and chaos, patriots and radicals.

Noem’s comparison reflects a broader pattern in populist politics, where complex social movements are reduced to simple, threatening caricatures. In recent years, some Republican leaders have used antifa as a shorthand for all forms of left-wing unrest or criticism of authority.

Antifa’s decentralized structure makes it a convenient target for blame. That’s because it lacks clear boundaries, leadership and accountability. So any act by someone identifying with antifa can be framed as representing the whole movement, whether or not it does. And by linking antifa to terrorist groups, Noem, the top anti-terror official in the country, turns a political talking point into a claim that appears to carry the weight of national security expertise.

The problem with this kind of rhetoric is not just that it’s inaccurate. Equating protest movements with terrorist organizations blurs important distinctions that allow democratic societies to tolerate dissent. It also risks misdirecting attention and resources away from more serious threats — including organized, ideologically driven groups that remain the primary source of domestic terrorism in the U.S.

As I see it, Noem’s claim reveals less about antifa and more about the political uses of fear.

By invoking the language of terrorism to describe an anti-fascist movement, she taps into a potent emotional current in American politics: the desire for clear enemies, simple explanations and moral certainty in times of division.

But effective homeland security depends on evidence, not ideology. To equate street-level confrontation with organized terror is not only wrong — it undermines the credibility of the very institutions charged with protecting the public.

The Conversation

Art Jipson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the Trump administration’s comparison of antifa to violent terrorist groups doesn’t track – https://theconversation.com/why-the-trump-administrations-comparison-of-antifa-to-violent-terrorist-groups-doesnt-track-267514

Future of nation’s energy grid hurt by Trump’s funding cuts

Source: The Conversation – USA (2) – By Roshanak (Roshi) Nateghi, Associate Professor of Sustainability, Georgetown University

Large-capacity electrical wires carry power from one place to another around the nation. Stephanie Tacy/NurPhoto via Getty Images

The Trump administration’s widespread cancellation and freezing of clean energy funding is also hitting essential work to improve the nation’s power grid. That includes investments in grid modernization, energy storage and efforts to protect communities from outages during extreme weather and cyberattacks. Ending these projects leaves Americans vulnerable to more frequent and longer-lasting power outages.

The Department of Energy has defended the cancellations, saying that “the projects did not adequately advance the nation’s energy needs, were not economically viable and would not provide a positive return on investment of taxpayer dollars.” Yet before any funds were released through these programs, each grant had to pass evaluations based on the department’s standards: rigorous assessments of technical merits, potential risks and cost-benefit analyses, all designed to ensure alignment with national energy priorities and responsible stewardship of public funds.

I am an associate professor studying sustainability, with over 15 years of experience in energy systems reliability and resilience. In the past, I also served as a Department of Energy program manager focused on grid resilience. I know that many of these canceled grants were foundational investments in the science and infrastructure necessary to keep the lights on, especially when the grid is under stress.

The dollar-value estimates vary, and some of the money has already been spent. A list of canceled projects maintained by energy analysis company Yardsale totals about US$5 billion. An Oct. 2, 2025, announcement from the department touts $7.5 billion in cuts to 321 awards across 223 projects. Documents leaked to Politico reportedly identified further awards under review. Some media reports suggest the full value of at-risk commitments may reach $24 billion — a figure that has not been publicly confirmed or refuted by the Trump administration.

These were not speculative ventures. Some were competitively awarded projects that the department funded specifically to enhance grid efficiency, reliability and resilience.

Grid improvement funding

For years, the federal government has been criticized for investing too little in the nation’s electricity grid. The long-term planning — and spending — required to ensure the grid reliably serves the public often falls victim to short-term political cycles and shifting priorities across both parties.

But these recent cuts come amid increasingly frequent extreme weather, increased cybersecurity threats to the systems that keep the lights on, and aging grid equipment that is nearing the end of its life.

These projects sought to make the grid more reliable so it can withstand storms, hackers, accidents and other problems.

National laboratories

In addition to those project cancellations, President Donald Trump’s proposed budget for 2026 contains deep cuts to the Office of Energy Efficiency and Renewable Energy, a primary funding source for several national laboratories, including the National Renewable Energy Laboratory, which may face widespread layoffs.

Among other work, these labs conduct fundamental grid-related research like developing and testing ways to send more electricity over existing power lines, creating computational models to simulate how the U.S. grid responds to extreme weather or cyberattacks, and analyzing real-time operational data to identify vulnerabilities and enhance reliability.

These efforts are necessary to design, operate and manage the grid, and to figure out how best to integrate new technologies.

Solar panels and large-capacity battery storage can support microgrids that keep key services powered despite bad weather or high demand.
Sandy Huffaker/AFP via Getty Images

Grid resilience and modernization

Some of the projects that have lost funding sought to upgrade grid management – including improved sensing of real-time voltage and frequency changes in the electricity sent to homes and businesses.

That program, the Grid Resilience and Innovation Partnerships Program, also funded efforts to automate grid operations, allowing faster response to outages or changes in output from power plants. It also supported developing microgrids – localized systems that can operate independently during outages. The canceled projects in that program, estimated to total $724.6 million, were in 24 states.

For example, a $19.5 million project in the Upper Midwest would have installed smart sensors and software to detect overloaded power lines or equipment failures, helping grid operators respond faster to outages and prevent blackouts.

A $50 million project in California would have boosted the capacity of existing subtransmission lines, improving power stability and grid flexibility by installing a smart substation, without needing new transmission corridors.

Microgrid projects in New York, New Mexico and Hawaii would have kept essential services running during disasters, cyberattacks and planned power outages.

Another canceled project included $11 million to help utilities in 12 states use electric school buses as backup batteries, delivering power during emergencies and peak demand, like on hot summer days.

Several transmission projects were also canceled, including a $464 million effort in the Midwest to coordinate multiple grid connections from new generation sites.

Long-duration energy storage

The grid must meet demand at all times, even when wind and solar generation is low or when extreme weather downs power lines. A key element of that stability involves storing massive amounts of electricity for when it’s needed.

One canceled project would have spent $70 million turning retired coal plants in Minnesota and Colorado into buildings holding iron-air batteries capable of powering several thousand homes for as many as four days.

Electric school buses like these could provide meaningful amounts of power to the grid during an outage.
Chris Jackson for The Washington Post via Getty Images

Rural and remote energy systems

Another terminated program sought to help people who live in rural or remote places, who are often served by just one or two power lines rather than a grid that can reroute power around an interruption.

A $30 million small-scale bioenergy project would have helped three rural California communities convert forest and agricultural waste into electricity.

Not all of the terminated initiatives were explicitly designed for resilience. Some would have strengthened grid stability as a byproduct of their main goals. The rollback of $1.2 billion in hydrogen hub investments, for example, undermines projects that would have paired industrial decarbonization with large-scale energy storage to balance renewable power. Similarly, several canceled industrial modernization projects, such as hybrid electric furnaces and low-carbon cement plants, were structured to manage power demand and integrate clean energy, to improve grid stability and flexibility.

The reliability paradox

The administration has said that these cuts will save money. In practice, however, they shift spending from prevention of extended outages to recovery from them.

Without advances in technology and equipment, grid operators face more frequent outages, longer restoration times and rising maintenance costs. Without investment in systems that can withstand storms or hackers, taxpayers and ratepayers will ultimately bear the costs of repairing the damage.

Some of the projects now on hold were intended to allow hospitals, schools and emergency centers to reduce blackout risks and speed power restoration. These are essential reliability and public safety functions, not partisan initiatives.

Canceling programs to improve the grid leaves utilities and their customers dependent on emergency stopgaps — diesel generators, rolling blackouts and reactive maintenance — instead of forward-looking solutions.

The Conversation

Roshanak (Roshi) Nateghi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Future of nation’s energy grid hurt by Trump’s funding cuts – https://theconversation.com/future-of-nations-energy-grid-hurt-by-trumps-funding-cuts-267504