Ecoball: how to turn picking up litter into a game for kids

Source: The Conversation – Africa – By Solaja Mayowa Oludele, Lecturer, Olabisi Onabanjo University

Wikimedia Commons, CC BY

Every year humanity produces nearly 300 million tonnes of plastic. Only a fraction ever gets recycled. Most ends up in rivers, oceans and soil, slowly breaking down into tiny, invisible microplastics that get into what we eat and drink.

Decades of recycling drives and policy bans have not altered the deep-rooted behaviours behind this crisis.
But what if the next big environmental solution isn’t a new law or technology – but a game?

I am an environmental sociologist and behaviour change researcher from Nigeria. I developed a game called EcoBall in 2023 as a social innovation that makes sport a tool for sustainability.

The concept is discussed in my peer-reviewed paper.

EcoBall reimagines football as training for environmental stewardship. Instead of chasing goals alone, teams compete to collect, sort and creatively reuse plastic waste. Each match becomes a live demonstration of the circular economy – the idea that materials should be reused, not discarded.

Here I describe how the game works, why it influences people’s behaviour, and what we found when we tested it in Nigerian schools and youth clubs.

Three zones, one planet

An EcoBall match uses a real ball made from tightly woven recycled plastic bags – the “EcoBall” itself. Two or more teams compete across three timed “learning zones”, combining physical play with environmental tasks.

• Collection zone (10-15 minutes): To start play, the ball is placed at the centre of the field. Players pass and dribble it like they would in football or handball. The pitch or play area is scattered with lightweight, clean plastic litter. Teams race to gather the litter from the designated area and place it in a team bag or collection net along the sidelines before rejoining the game. Points are awarded for the amount and diversity of plastics collected.

• Sorting zone: Back on the pitch, players classify the plastics correctly (PET bottles, sachets, nylon wrappers and so on). Accurate sorting earns additional points and builds practical recycling knowledge. Across the zones, teams earn points for goals as well as for the quantity or weight of litter collected.

• Creative zone: After each game, the collected plastics are sorted and delivered to recycling or upcycling partners. Using selected materials, teams craft new items – from art pieces to flower planters or even another EcoBall. Judges score on creativity, teamwork and utility.

Participants also engage in short reflective or educational sessions to discuss plastic pollution, sustainability habits, and collective responsibility.

The champion is not simply the fastest team but the one with the greatest environmental impact.
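As a rough illustration, the results from the three zones could be tallied into a single team score. The point weights and the function `team_score` below are hypothetical – the article does not prescribe exact values – but they show how goals, collection, sorting and creativity might combine:

```python
# Hypothetical EcoBall score tally. The weights are illustrative only;
# the game as described awards points for goals, the amount and diversity
# of plastics collected, accurate sorting, and judged creativity.

def team_score(goals, items_collected, plastic_types,
               correct_sorts, creativity_marks):
    collection = 2 * items_collected + 5 * plastic_types  # amount + diversity
    sorting = 3 * correct_sorts                           # accurate classification
    creative = sum(creativity_marks)                      # judges' creative-zone marks
    return 10 * goals + collection + sorting + creative

# Example: 2 goals, 14 items across 3 plastic types, 10 correct sorts,
# and creativity marks from three judges.
print(team_score(2, 14, 3, 10, [8, 7, 9]))  # 117
```

Weighting environmental tasks alongside goals is what makes the "most environmental impact" part of the score concrete rather than an afterthought.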

What seems to be a game is really learning through doing. Participants learn sustainability not by being preached at but by doing it, competing and relishing their achievements together.

The psychology behind the game

EcoBall draws on two social-science ideas: the theory of planned behaviour and social capital theory. The first explains why people adopt sustainable habits. By making recycling fun, social and rewarding, EcoBall reshapes attitudes and perceived norms – the key drivers of behaviour.

The second highlights the power of trust and networks. EcoBall builds these bonds as teams collaborate and share victories, creating social momentum that keeps environmental action alive long after the game ends.

In designing and evaluating EcoBall, I combined these theories with research on sport-for-development and environmental education. Acting as both participant-observer and referee, I compared data from questionnaires, focus groups and observation diaries. This design allowed for transparency, credibility and contextual validity in interpreting EcoBall’s impact on environmental attitudes and behaviours.

Tested on the field

Pilot sessions were conducted at several schools and youth clubs across Ogun State to assess how far EcoBall enhances environmental awareness, cooperation and proactive participation in plastic litter removal.

The pilots were community-led and research-driven, supported by small donations from local NGOs and schools, and by recycling businesses that provided gloves, collection bags and bins.




Read more:
Plastic pollution in Nigeria: whose job is it to clean up the mess?


Instructors reported increased cooperation and leadership. Players described feeling more responsible for their surroundings, and some formed neighbourhood clean-up clubs that continued for weeks after the games ended. While the long-term effects are yet to be studied, these early findings suggest that EcoBall can spark real behavioural change.

From waste to wealth

EcoBall also shows that environmental action can create livelihoods.
In one pilot, students built benches and flower planters from bottles gathered during matches. Others began selling upcycled crafts, while the organisation of events – coaching, logistics and recycling partnerships – generated new work opportunities.

Such experiences echo the circular-economy principle of turning waste into worth.

Uniting generations and communities

Because EcoBall requires little equipment – just gloves, bags and open space – it thrives in low-resource communities.

The design was intentionally simple, ensuring accessibility and inclusion where conventional sports infrastructure is absent.

Although EcoBall is inexpensive to initiate, its long-term delivery as a structured sport-for-development and environmental education programme requires sustained funding. Investment is needed for facilitator training, community engagement, and monitoring activities. This is typical of community interventions: low-cost to launch but funding-dependent to sustain and scale.

Children, parents, and grandparents can play together, bridging generations and backgrounds. This shared passion generates a feeling of ownership of public spaces and renewed pride in keeping them clean.




Read more:
Not sure how to keep your kids busy and happy these holidays? Here are five tips.


Schools can incorporate EcoBall into extracurricular activities, municipalities can organise tournaments tied to clean-up initiatives, and corporations can adopt it as part of their social responsibility programmes.

Following early successes, two NGOs that work in youth development have begun using EcoBall in their environmental clubs, and discussions are underway with the National Youth Service Corps to introduce it into community service.

Challenges and opportunities

No innovation is challenge-free. EcoBall needs consistent funding, materials and cultural adaptation. Keeping players engaged may require creative incentives – such as mobile apps to track points or online leaderboards connecting communities globally.

Yet these hurdles create opportunities. A “World EcoBall Cup” could one day unite cities or nations, rewarding those who divert the most plastic from the environment.

Instead of medals, winners would boast cleaner beaches and thriving circular economies.

Play for the planet

The global plastic crisis demands solutions that move people, not just policies.

EcoBall does exactly that – bringing sport together with green purpose and demonstrating that climate action has the power to be human, inclusive and fun.




Read more:
Informal waste collection shouldn’t let plastic polluters off the hook: here’s why


It is not the sole responsibility of scientists or policymakers to fight pollution. It belongs to everyone willing to pick up a ball – or a bottle – and make a difference.

The Conversation

Solaja Mayowa Oludele does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Ecoball: how to turn picking up litter into a game for kids – https://theconversation.com/ecoball-how-to-turn-picking-up-litter-into-a-game-for-kids-267888

Luxury tourism is a risky strategy for African economies – new study of Botswana, Mauritius, Rwanda

Source: The Conversation – Africa – By Pritish Behuria, Reader in Politics, Governance and Development, Global Development Institute, University of Manchester

Mauritius led the luxury tourism trend in Africa with all-inclusive resorts. Heritage Awali/yourgolftravel.com, CC BY-NC-ND

How successful is luxury tourism in Africa? What happens if it fails to produce higher tourism revenues: can it be reversed? And does it depend on what kind of government is in place?

Pritish Behuria is a scholar of the political economy of development who has conducted a study in Botswana, Mauritius and Rwanda to find answers to questions like these. We asked him about his findings.


What is luxury tourism and how prevalent is it in Africa?

Luxury tourism aims to attract high-spending tourists to stay at premium resorts and lodges or visit exclusive attractions. It’s a strategy that’s being adopted widely by governments around the world and also in African countries.

It’s been promoted by multilateral agencies like the World Bank and the United Nations, as well as environmental and conservation organisations.

The logic underlying luxury tourism is that if fewer, high-spending tourists visit, this will result in less environmental impact. It’s often labelled as a “high-value, low-impact” approach.




Read more:
Why your holiday flight is still not being powered by sustainable aviation fuel


However, studies have shown that luxury tourism does not lead to reduced environmental impact. Luxury tourists are more likely to use private jets, which are far more carbon-intensive than economy class travel. Supporters of luxury tourism also ignore that it reinforces economic inequalities, commercialises nature and restricts land access for indigenous populations.

In some ways, of course, the motives of African countries seem understandable. They remain starved of much-needed foreign exchange in the face of rising trade deficits. The allure of luxury tourism seems almost impossible to resist.

How did you go about your study?

I have been studying the political economy of Rwanda for nearly 15 years. The government there made tourism a central part of its national vision.

Over the years, many government officials and tourism stakeholders highlighted the challenges of luxury tourism strategies. Even so, there remains a single-mindedness to prioritise luxury tourism.

I found that, in Rwanda, luxury tourism resulted in a reliance on foreign-owned hotels and foreign travel agents, exposing potential leakages in tourism revenues. Crucially, tourism was not creating enough employment. There was also a skills lag in the sector. Employees were not being trained quickly enough to meet the surge of investments in hotels.




Read more:
What cost-of-living crisis? Luxury travel is booming – and set to grow further


So I decided to investigate the effects of luxury tourism in other African countries. I wanted to know who benefits and how it is being reversed in countries that are turning away from it.

I interviewed government officials, hotel owners and other private sector representatives, aviation officials, consultants and journalists in all three countries. Added to this was a thorough review of economic data, industry reports and grey literature (including newspaper articles).

What are your take-aways from Mauritius?

Mauritius was the first of the three countries to explicitly adopt a luxury tourism strategy. In the late 1970s and early 1980s the government began to encourage European visitors to the island’s “sun-sand-sea” attractions. Large domestic business houses became lead investors, building luxury hotels and buying coastal land.

Over the years, tourism has provided significant revenues for the Mauritian economy. By 2019, the economy was earning over US$2 billion from the sector (before dropping during the COVID pandemic).

However, tourism has also been symbolic of the inequality that has characterised Mauritius’ growth. The all-inclusive resort model – where luxury hotels take care of all of a visitor’s food and travel needs themselves – has meant that the money being spent by tourists doesn’t always enter the local economy. A large share of profits remains outside the country or with large hotels.

After the pandemic, the Mauritian government took steps to loosen its focus on luxury tourism. It opened its air space to attract a broader range of tourists and restarted direct flights to Asia. There’s growing agreement within government that opening up tourism will go some way towards sustaining revenues and employment in the sector, especially as some other key sectors (like offshore finance) may face an uncertain future.

And from Botswana?

Botswana followed Mauritius by formally adopting a luxury tourism strategy in 1990. Its focus was on its wilderness areas (the Okavango Delta) and wildlife safari lodges. For decades, there were criticisms from scholars about the inequalities in the sector.

Most lodges and hotels were foreign owned. Most travel agencies that booked all-inclusive trips operated outside Botswana. There were very few domestic linkages. Very little domestic agricultural or industrial production was used within the sector.

An aerial photo of a vast expanse of water and rocky terrain. Small boats cross the water.
Guides take tourists across Botswana’s Okavango delta in boats.
Diego Delso/Wikimedia Commons, CC BY-SA

However, I found that the direction of tourism policies had also become increasingly political. Certain politicians were aligned with conservation organisations and foreign investors in prioritising luxury tourism. Former president Ian Khama, for example, banned trophy hunting on ethical grounds in 2014. He pushed photographic tourism, where travellers visit destinations mainly to take photos. But critics allege he and his allies benefited from the push for photographic tourism.

Photographic tourism is closely linked with the problematic promotion of “unspoilt” wilderness areas that conform to foreign ideas about the “myth of wild Africa”.

President Mokgweetsi Masisi reversed the hunting ban once he took power. He argued it had adverse effects on rural communities and increased human-wildlife conflict. He believed that regulated hunting could be a tool for better wildlife management and could produce more benefits for communities.

Since the latter 2010s, Botswana’s government has loosened the emphasis on luxury tourism and tried to diversify tourism offerings. It has relaxed visa regulations for Asian countries, for example, to allow a wider range of tourists to visit more easily.

What about Rwanda?

Of the three cases, Rwanda was the most recent to adopt a luxury tourism strategy. However, it has remained the most committed to this strategy. Rwanda’s model is centred on mountain gorilla trekking and premium wildlife experiences. It’s augmented by Rwanda’s attempt to become a hub for business and sports tourism through high-profile conferences and events.

A statue, in a green-leafed area, of a male, female and baby gorilla.
Gorillas are a key attraction for luxury tourists in Rwanda.
Gatete Pacifique/Wikimedia Commons, CC BY-SA

Rwanda invited global hotel brands (like the Hyatt and Marriott) to build hotels and invested heavily in the country’s “nation brand” through sponsoring sports teams. The “luxury” element is managed through maintaining a high price to visit the country’s main tourist attraction: mountain gorillas. Rwanda is one of the few countries where mountain gorillas live.

After the pandemic, the government lowered prices to visit mountain gorillas but has also regularly stated its commitment to luxury tourism.

What did you learn by comparing the three?

I wanted to know why some countries reverse luxury tourism strategies once they fail while others don’t.

It is quite clear that luxury tourism strategies will always have disadvantages. As this study shows, luxury tourism repeatedly benefits only very few actors (often foreign investors or foreign-owned entities) and does not create sufficient employment or provide wider benefits for domestic populations. My research shows that the political pressure faced by democratic governments (like Botswana and Mauritius) forced them to loosen their luxury tourism strategies. This was not the case in more authoritarian Rwanda.




Read more:
Travelling in 2025? Here’s how to become a ‘regenerative’ tourist


Rwanda’s position goes against a lot of recent literature on African political economy, which argues that parties with a stronger hold on power would be able to deliver better development outcomes.

While that may be the case in some sectors, the findings of this study suggest that countries with weaker political parties may actually be more responsive in changing policies that create inequality than countries with stronger political parties in power.

The Conversation

Pritish Behuria is a recipient of the British Academy Mid-Career Fellowship 2024-2025 (MFSS24/240043).

ref. Luxury tourism is a risky strategy for African economies – new study of Botswana, Mauritius, Rwanda – https://theconversation.com/luxury-tourism-is-a-risky-strategy-for-african-economies-new-study-of-botswana-mauritius-rwanda-267877

Nigeria’s government is using digital technology to repress citizens. A researcher explains how

Source: The Conversation – Africa (2) – By Chibuzo Achinivu, Visiting Assistant Professor of Political Science, Vassar College

Digital authoritarianism is a new way governments are trying to control citizens using digital and information technology. It is a growing concern for advocacy groups and those interested in freedom and democracy. It is especially worrying for those who initially heralded digital and information technologies as liberating tools that would make information more accessible to citizens.

I have studied the rise of digital authoritarianism in Africa over the last two decades. My most recent study focused on Nigeria, and its turn to digital tools for control after the 2020 #EndSARS Movement protests.

I found that local conflict and development needs drive the Nigerian government’s demand for digital authoritarianism technologies. Foreign suppliers of these technologies are motivated by both economic gain and influence in the region.

The findings are important. First, they signal that the trend of using digital spaces to control populations has reached the African continent. They also show that the trend is facilitated by foreign actors that provide governments with the technology and expertise.

What is digital authoritarianism?

Digital authoritarianism can be understood as a form of governance, or set of actions, that uses digital technologies to undermine accountability.

Technology is used to repress voices, keep people under surveillance, and manipulate populations for regime goals and survival.

It includes, but is not limited to, internet and social media shutdowns; spyware used to hack and monitor people through their devices; mass surveillance using artificial intelligence for facial recognition; and misinformation and disinformation propaganda campaigns.

What drives it in Africa

In Africa these actions are popping up in democracies like Nigeria and in autocracies alike. Perhaps the most noticeable difference between these two types of government is the subtlety of their digital authoritarianism and the legal recourse available when such actions are unearthed.

Both governance types make claims of national security and public safety to justify these tactics. For instance, former Nigerian information minister Lai Mohammed claimed the 2020 Twitter ban was due to “the persistent use of the platform for activities that are capable of undermining Nigeria’s corporate existence”.

Autocracies are often cruder, employing blatant tactics such as internet and social media shutdowns – often because their digital authoritarianism apparatuses are less sophisticated. Democracies rely on more subtle surveillance and misinformation campaigns to reach their goals.

This all raises the question: what are the drivers of this trend? There are four clear ones:

  • regime survival/political control

  • security and counterterrorism

  • electoral competition and information manipulation

  • modernisation agendas (development).

On the rise

In the African context digital authoritarianism is on the rise. There is a symbiotic relationship between foreign suppliers of hardware and expertise and domestic demand. This demand stems from authoritarian regimes as well as regimes acquiring digital systems to consolidate and modernise. There are also hybrid regimes – countries with a mixture of democratic and authoritarian institutions.

States like China, Russia, Israel, France and the US supply both the technology and instruction or best practices to African regimes. Reasons for supply include economic gain and regional influence.

On the demand side, African regimes seek out digital authoritarianism tools mainly for development needs and for conflict resolution. Some of the largest consumers are Kenya, Rwanda, Uganda, Nigeria and Ghana.

The study

I found evidence that Nigeria’s development goals and efforts to quell conflicts drive its use of technology to repress its people. Using the example of the #EndSARS movement, I show how social media platform shutdowns and efforts to build a firewall akin to China’s Great Firewall serve as evidence for this.

In the days following Twitter’s removal of a post by President Muhammadu Buhari, Twitter was banned in Nigeria. The administration cited its use to further unrest, instability, and secessionist movements. There were claims that this step was taken to maintain internet sovereignty.

However, the ban also undermined social movements that were successfully holding the government accountable. Following domestic and international outcry over the ban, there were reports that the Nigerian government had approached China to replicate its “Great Firewall” in Nigeria’s internet control apparatus. (China’s project monitors and censors what can and cannot be seen through an online network in China.) This would allow the state to manage access to certain sites and block unwanted content from reaching Nigerians.

On the supply side, China’s economic commitments to the country and concerted efforts to cultivate certain norms in the country and region offer insights into the motivations for supply in this case and the broader continent.

Again, regime type dictates just how these technologies will be used. Interviews conducted with permanent secretaries and ministers of Nigerian ministries were particularly revealing. They confirmed that repressive government practices in the real world are informing their activity in digital spaces.

For instance, they intimated that the repression that occurs during protests in the streets in order to manage “lawlessness” is being replicated online. Its purpose is to ensure peace and stability.

For development needs, countries like Nigeria initially seek out foreign suppliers to furnish them with state-of-the-art technology systems. The objective is to establish or refurbish their information and communications technology apparatuses.

These include but are not limited to national broadband networks such as fiber optic networks, mobile telecommunications networks and smart city governance systems. Though these are often not repressive in nature, they are capable of dual use. Thus, these development needs provide technologies that are then utilized in an authoritarian fashion for state-building goals.

There is also evidence that some suppliers provide instruction on how to use these technologies for repression. In some instances, under the guise of development needs, regimes seek out more repressive tools such as spyware alongside these infrastructural development programs. At this stage, the boundary between development and security blurs, as modernization becomes a vehicle for national security, cyber defense, regime protection, and information control.

What can be done?

I propose a three-pronged approach to address these drivers. First, more has to be done on the international front to curb the sale of repressive tools to states. There must be a conversation about the norms governing these technologies and their use for repression in both democracies and autocracies.

On the demand side, it appears those practices that have plagued the hopes of freedom and democracy in the real world have to be addressed. Naturally, no movement on the digital front is complete without a real world manifestation. It seems logical that eradicating digital repression necessitates addressing repression in general.

Finally, regulatory, legal and institutional oversight, alongside human rights benchmarks, must be established to protect digital and privacy rights in cyberspace.

The Conversation

Chibuzo Achinivu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Nigeria’s government is using digital technology to repress citizens. A researcher explains how – https://theconversation.com/nigerias-government-is-using-digital-technology-to-repress-citizens-a-researcher-explains-how-267032

Climate change is making cities hotter. Here’s how planting trees can help

Source: The Conversation – Canada – By Lingshan Li, PhD candidate, Department of Geography, Planning and Environment, Concordia University

Canada’s climate is warming twice as fast as the global average, and many cities will experience at least four times as many extreme heat events (days above 30 C) per year in the coming decades.

In Québec alone, elevated summer temperatures are associated with about 470 deaths, 225 hospitalizations, 36,000 emergency room visits, 7,200 ambulance transports and 15,000 calls to a health hotline every year.

To tackle the climate change crisis, the government of Canada launched the 2 Billion Trees program, which aims to plant two billion trees over 10 years, by 2031.

But such ambitions come with important questions:

  • Where and how should these trees be planted?
  • How can the trees be managed to provide more cooling for people?
  • How can that cooling be directed to the most underserved communities?

Colleagues and I recently published a study of Montréal that explores how urban green spaces can reduce surface temperatures and help promote environmental justice. We found that even small increases in green space can make a notable difference in city temperatures.




Read more:
Urban trees vs. cool roofs: What’s the best way for cities to beat the heat?


Why the placement of trees is important

If you’ve ever passed under the shade of a tree on a hot summer day and felt the temperature drop, you know how valuable they are in cities. Both the amount and layout of urban green spaces affect how much they can cool a city.

The way trees, parks and other green areas are arranged can change how they provide shade and release moisture into the air, which together determine how much they can lower the surrounding temperature.

Where urban green spaces are located is also related to an important social issue: environmental justice. Unequally distributed green spaces can restrict residents’ access to cooling in certain neighbourhoods, contributing to social inequalities within a city.

Those living in low-income neighbourhoods feeling the harshest impacts of urban heat can struggle to find green spaces where they can cool off. Young children and the elderly are also more susceptible to the dangers of prolonged heat exposure.

There is a need for municipal governments to get a better view of how well these vulnerable groups receive the cooling provided by urban green infrastructure and what factors have driven the unbalanced distribution.

What we found

Using satellite imagery and laser imaging, we found that having more trees, grass and shrubs in an area can notably reduce temperatures. We developed a model to estimate the cooling effect provided by urban green infrastructure based on several indicators that reflect both the quantity and the quality of urban greenery.

Our model showed that a 10 per cent increase in tree coverage can lower land surface temperature by approximately 1.4 C. A similar increase in shrubs and grass lowers temperatures by about 0.8 C.

The results also indicated that large, continuous groups of trees cool their surroundings better than small, scattered patches. A 10 per cent increase in the aggregation level of tree clusters (the area of the largest patch of trees divided by the total area of trees within a landscape unit) can lower land surface temperature by about 0.2 C.
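Taken together, the reported effect sizes describe a simple linear relationship that can be sketched in code. The function `estimated_lst_change` below is an illustration built from those three coefficients only, not the study's actual model, which uses more indicators:

```python
# Sketch of the reported linear effect sizes: about -1.4 C per 10% tree
# cover, -0.8 C per 10% shrub/grass cover, and -0.2 C per 10% increase in
# tree-cluster aggregation. Illustration only, not the study's full model.

def estimated_lst_change(tree_pct=0.0, shrub_grass_pct=0.0, aggregation_pct=0.0):
    """Estimated change in land surface temperature (degrees C) for
    percentage-point increases in each greenery indicator."""
    return (-1.4 * tree_pct
            - 0.8 * shrub_grass_pct
            - 0.2 * aggregation_pct) / 10.0

print(estimated_lst_change(tree_pct=10))                      # roughly -1.4
print(estimated_lst_change(tree_pct=10, shrub_grass_pct=10))  # roughly -2.2
```

The sketch makes the relative magnitudes easy to compare: tree cover matters almost twice as much as shrubs and grass, and aggregation adds a smaller but non-trivial bonus.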

We also found that the cooling provided by green spaces in many parts of Montréal does not meet the needs of local residents. This mismatch varies a lot between census tracts.

Areas in the city abundant with green spaces include boroughs like Le Plateau-Mont-Royal, Outremont, L’Île-Bizard–Sainte-Geneviève and the village of Senneville. Meanwhile, areas such as Montréal-Est, Saint-Leonard and Saint-Laurent have the least amount of green space.

In addition, areas like Pointe-Claire and Montréal-Nord have good green space, but their mismatch index is still low because many vulnerable people live there. The mismatch index is calculated as the supply index minus the demand index, so a higher demand index leads to a lower mismatch index.
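The mismatch calculation itself is simple subtraction. A minimal sketch, with made-up index values, shows why a well-greened tract can still come out behind:

```python
# Mismatch index = supply index - demand index. The index values below are
# made up for illustration; the study derives the actual indices from
# multiple greenery and demographic indicators.

def mismatch_index(supply_index, demand_index):
    return supply_index - demand_index

# A tract with good green space but many vulnerable residents (high demand):
print(mismatch_index(0.7, 0.9))  # negative: cooling supply falls short

# A tract with moderate green space but few vulnerable residents:
print(mismatch_index(0.5, 0.2))  # positive: supply exceeds demand
```

This is why the sign of the index, not the raw amount of green space, is what flags a neighbourhood as underserved.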

Neighbourhoods with higher median incomes and more highly educated people were mostly associated with positive supply-demand values. That indicates their supply of cooling services as provided by urban green spaces was higher than their demand.

In contrast, census tracts with a higher proportion of racialized people and people with lower levels of education tend to lack enough green spaces where residents can cool off.

Vulnerable people (young and elderly individuals) with a higher socio-economic status received more cooling services provided by the urban green spaces. In contrast, those on the other end of the socio-economic spectrum were more likely to struggle to easily find a place to cool off.

What we can do in the future

For cities with similar humid summer months like Montréal, urban planners who want to reduce daytime heat should consolidate tree patches into large, continuous areas where possible.

It is also helpful to design smaller-scale green spaces with more irregularly shaped tree patches and create enhanced connectivity, especially for grass, to support small-scale cooling.

In Montréal and other cities where green spaces are unequally distributed, municipal officials should develop ranked action plans for greening efforts that consider environmental justice and prioritize areas where the need for cooling is greatest.

The Conversation

Lingshan Li receives funding from The Trottier Family Foundation and the Natural Sciences and Engineering Research Council of Canada.

ref. Climate change is making cities hotter. Here’s how planting trees can help – https://theconversation.com/climate-change-is-making-cities-hotter-heres-how-planting-trees-can-help-267827

Solar storms have influenced our history – an environmental historian explains how they could also threaten our future

Source: The Conversation – USA – By Dagomar Degroot, Associate Professor of Environmental History, Georgetown University

Coronal mass ejections from the Sun can cause geomagnetic storms that may damage technology on Earth. NASA/GSFC/SDO

In May 2024, part of the Sun exploded.

The Sun is an immense ball of superheated gas called plasma. Because the plasma is conductive, magnetic fields loop out of the solar surface. Since different parts of the surface rotate at different speeds, the fields get tangled. Eventually, like rubber bands pulled too tight, they can snap – and that is what they did last year.

These titanic plasma explosions, also known as solar flares, each unleashed the energy of a million hydrogen bombs. Parts of the Sun’s magnetic field also broke free as magnetic bubbles loaded with billions of tons of plasma.

These bubbles, called coronal mass ejections, or CMEs, crashed through space at around 6,000 times the speed of a commercial jetliner. After a few days, they smashed one after another into the magnetic field that envelops Earth. The plasma in each CME surged toward us, creating brilliant auroras and powerful electrical currents that rippled through Earth’s crust.

A coronal mass ejection erupting from the Sun.

You might not have noticed. Just like the opposite poles of fridge magnets have to align for them to snap together, the poles of the magnetic field of Earth and the incoming CMEs have to line up just right for the plasma in the CMEs to reach Earth. This time they didn’t, so most of the plasma sailed off into deep space.

Humans have not always been so lucky. I’m an environmental historian and author of the new book “Ripples on the Cosmic Ocean: An Environmental History of Our Place in the Solar System.”

While writing the book, I learned that a series of technological breakthroughs – from telegraphs to satellites – have left modern societies increasingly vulnerable to the influence of solar storms, meaning flares and CMEs.

Since the 19th century, these storms have repeatedly upended life on Earth. Today, there are hints that they threaten the very survival of civilization as we know it.

The telegraph: A first warning

On the morning of Sept. 1, 1859, two young astronomers, Richard Carrington and Richard Hodgson, became the first humans to see a solar flare. To their astonishment, it was so powerful that, for two minutes, it far outshone the rest of the Sun.

About 18 hours later, brilliant, blood-red auroras flickered across the night sky as far south as the equator, while newly built telegraph lines shorted out across Europe and the Americas.

The Carrington Event, as it was later called, revealed that the Sun’s environment could violently change. It also suggested that emerging technologies, such as the electrical telegraph, were beginning to link modern life to the extraordinary violence of the Sun’s most explosive changes.

For more than a century, these connections amounted to little more than inconveniences, like occasional telegraph outages, partly because no solar storm rivaled the power of the Carrington Event. But another part of the reason was that the world’s economies and militaries were only gradually coming to rely more and more on technologies that turned out to be profoundly vulnerable to the Sun’s changes.

A brush with Armageddon

Then came May 1967.

Soviet and American warships collided in the Sea of Japan, American troops crossed into North Vietnam and the Middle East teetered on the brink of the Six-Day War.

It was only a frightening combination of new technologies that kept the United States and Soviet Union from all-out war; nuclear missiles could now destroy a country within minutes, but radar could detect their approach in time for retaliation. A direct attack on either superpower would be suicidal.

Several buildings on an icy plain, with green lights in the sky above.
An aurora – an event created by a solar storm – over Pituffik Space Base, formerly Thule Air Base, in Greenland in 2017. In 1967, nuclear-armed bombers prepared to take off from this base.
Air Force Space Command

Suddenly, on May 23, a series of violent solar flares blasted the Earth with powerful radio waves, knocking out American radar stations in Alaska, Greenland and England.

Forecasters had warned officers at the North American Air Defense Command, or NORAD, to expect a solar storm. But the scale of the radar blackout convinced Air Force officers that the Soviets were responsible. It was exactly the sort of thing the USSR would do before launching a nuclear attack.

American bombers, loaded with nuclear weapons, prepared to retaliate. The solar storm had so scrambled their wireless communications that it might have been impossible to call them back once they took off. In the nick of time, forecasters used observations of the Sun to convince NORAD officers that a solar storm had jammed their radar. We may be alive today because they succeeded.

Blackouts, transformers and collapse

With that brush with nuclear war, solar storms had become a source of existential risk, meaning a potential threat to humanity’s existence. Yet the magnitude of that risk only came into focus in March 1989, when 11 powerful flares preceded the arrival of back-to-back coronal mass ejections.

For more than two decades, North American utility companies had constructed a sprawling transmission system that relayed electricity from power plants to consumers. In 1989, this system turned out to be vulnerable to the currents that coronal mass ejections channeled through Earth’s crust.

Several large pieces of metal machinery lined up in an underground facility.
An engineer performs tests on a substation transformer.
Ptrump16/Wikimedia Commons, CC BY-SA

In Quebec, crystalline bedrock under the province does not easily conduct electricity. Rather than flow through the rock, currents instead surged into the world’s biggest hydroelectric transmission system. It collapsed, leaving millions without power in subzero weather.

Repairs revealed something disturbing: The currents had damaged multiple transformers, which are enormous customized devices that transfer electricity between circuits.

Transformers can take many months to replace. Had the 1989 storm been as powerful as the Carrington Event, hundreds of transformers might have been destroyed. It could have taken years to restore electricity across North America.

Solar storms: An existential risk

But was the Carrington Event really the worst storm that the Sun can unleash?

Scientists assumed that it was until, in 2012, a team of Japanese scientists found evidence of an extraordinary burst of high-energy particles in the growth rings of trees dated to the eighth century CE. The leading explanation for them: huge solar storms dwarfing the Carrington Event. Scientists now estimate that these “Miyake Events” happen once every few centuries.

Astronomers have also discovered that Sun-like stars can erupt, roughly once a century, in superflares up to 10,000 times more powerful than the strongest solar flares ever observed. Because the Sun is older and rotates more slowly than many of these stars, its superflares may be much rarer, occurring perhaps once every 3,000 years.

Nevertheless, the implications are alarming. Powerful solar storms once influenced humanity only by creating brilliant auroras. Today, civilization depends on electrical networks that allow commodities, information and people to move across our world, from sewer systems to satellite constellations.

What would happen if these systems suddenly collapsed on a continental scale for months, even years? Would millions die? And could a single solar storm bring that about?

Researchers are working on answering these questions. For now, one thing is certain: to protect these networks, scientists must monitor the Sun in real time. That way, operators can reduce or reroute the electricity flowing through grids when a CME approaches. A little preparation may prevent a collapse.

Fortunately, satellites and telescopes on Earth today keep the Sun under constant observation. Yet in the United States, recent efforts to reduce NASA’s science budget have cast doubt on plans to replace aging Sun-monitoring satellites. Even the Daniel K. Inouye Solar Telescope, the world’s premier solar observatory, may soon shut down.

These potential cuts are a reminder of our tendency to discount existential risks – until it’s too late.

The Conversation

Dagomar Degroot has received funding from NASA.

ref. Solar storms have influenced our history – an environmental historian explains how they could also threaten our future – https://theconversation.com/solar-storms-have-influenced-our-history-an-environmental-historian-explains-how-they-could-also-threaten-our-future-258668

The Glozel affair: A sensational archaeological hoax made science front-page news in 1920s France

Source: The Conversation – USA – By Daniel J. Sherman, Lineberger Distinguished Professor of Art History and History, University of North Carolina at Chapel Hill

All eyes were on a commission of professional archaeologists when they visited Glozel. Agence Meurisse/BnF Gallica

In early November 1927, the front pages of newspapers all over France featured photographs not of the usual politicians, aviators or sporting events, but of a group of archaeologists engaged in excavation. The slow, painstaking work of archaeology was rarely headline news. But this was no ordinary dig.

yellowed newspaper page with photos of archaeologists at dig site
A front-page spread in the Excelsior newspaper from Nov. 8, 1927, features archaeologists at work in the field with the headline ‘What the learned commission found at the Glozel excavations.’
Excelsior/BnF Gallica

The archaeologists pictured were members of an international team assembled to assess the authenticity of a remarkable site in France’s Auvergne region.

Three years before, farmers plowing their land at a place called Glozel had come across what seemed to be a prehistoric tomb. Excavations by Antonin Morlet, an amateur archaeologist from Vichy, the nearest town of any size, yielded all kinds of unexpected objects. Morlet began publishing the finds in late 1925, immediately producing lively debate and controversy.

Certain characteristics of the site placed it in the Neolithic era, approximately 10,000 B.C.E. But Morlet also unearthed artifact types thought to have been invented thousands of years later, notably pottery and, most surprisingly, tablets or bricks with what looked like alphabetic characters. Some scholars cried foul, including experts on the inscriptions of the Phoenicians, the people thought to have invented the Western alphabet no earlier than 2000 B.C.E.

Was Glozel a stunning find with the capacity to rewrite prehistory? Or was it an elaborate hoax? By late 1927, the dispute over Glozel’s authenticity had become so strident that an outside investigation seemed warranted.

The Glozel affair now amounts to little more than a footnote in the history of French archaeology. As a historian, I first came across descriptions of it in surveys of the field, and with a bit of investigating it wasn’t hard to find first-person accounts of the affair.

sketch of seven lines of alphabet-like notations on two rectangles
Examples of the kinds of inscriptions found at the Glozel site, as recorded by scholar Salomon Reinach.
‘Éphémérides de Glozel’/Wikimedia Commons

But it was only when I began studying the private papers of one of the leading contemporary skeptics of Glozel, an archaeologist and expert on Phoenician writing named René Dussaud, that I realized the magnitude and intensity of this controversy. After publishing a short book showing that the so-called Glozel alphabet was a mishmash of previously known early alphabetic writing, in October 1927 Dussaud took out a subscription to a clipping service to track mentions of the Glozel affair; in four months he received over 1,500 clippings, in 10 languages.

The Dussaud clippings became the basis for the account of Glozel in my recent book, “Sensations.” That the contours of the affair first became clear to me in a pile of yellowed newspaper clippings is appropriate, because Glozel embodies a complex relationship between science and the media that persists today.

Front page of a newspaper with images of people digging and holding up finds
The newspaper Le Matin, which vigorously promoted Glozel’s authenticity, even sponsored its own dig near the site, led by a journalist.
Le Matin/BnF Gallica

Serious scientists in the trenches

The international commission’s front-page visit to Glozel marked a watershed in the controversy, even if it did not resolve it entirely.

In a painstaking report published in the scholarly Revue anthropologique just before Christmas 1927, the commission recounted the several days of digging it conducted, provided detailed plans of the site, described the objects it unearthed and carefully explained its conclusion that the site was “not ancient.”

shelves with various clay vessels and shards piled on them
Recovered objects displayed in the Fradins’ museum in 1927.
Agence de presse Meurisse/Wikimedia Commons

The report emphasized the importance of proper archaeological method. Early on, the commissioners noted that they were “experienced diggers, all with past fieldwork to their credit,” in different chronological subfields of archaeology. In contrast, they noted that the Glozel site showed clear signs of a lack of order and method.

In their initial meeting in Vichy, the assembled archaeologists agreed that they would give no interviews during their visit to Glozel and would not speak to the press afterward. But, aware of “certain tendentious articles published by a few newspapers,” the visitors issued a communiqué stating that they would neither confirm nor deny any press reports. Their scholarly publication would be their final word on the “non-ancientness” of the site.

The distinction between true science – what the archaeologists were practicing – and the media seemed absolute.

Sensationalist coverage, but careful details, too

And yet matters were not so simple.

Many newspapers devoted extensive and careful coverage to Glozel. They offered explanations of archaeological terminology. They explained the larger stakes of the controversy, which, beyond the invention of the alphabet, involved nothing less than the direction of the development of Western civilization itself, whether from Mesopotamia in the east to Europe in the west or the reverse.

Even articles about seemingly trivial matters, such as the work clothes the archaeologists donned to perform their test excavations at Glozel, served to reinforce the larger point the commissioners made in their report. In contrast to the proper suits and ties they wore for formal photographs marking their arrival, the visitors all put on blue overalls, which for one newspaper “gave them the air of apprentice locksmiths or freshly decked-out electricians.”

The risk, apparent in this jocular reference, of losing the social standing afforded them by their professional degrees and education was worth taking because it drove home these archaeologists’ devotion to their discipline, which their report described as “a daily moral obligation.”

seven people dressed formally standing against a building
Morlet, far left, and the international commission in front of the Fradins’ museum in November 1927. Garrod is third from the left.
Agence Meurisse

Skeptical scientists did rely on journalism

If archaeologists continued to mistrust the many newspapers that sensationalized Glozel, its stakes and their work in general, they could not escape the popular media entirely, so they confided in a few journalists at papers they considered responsible.

Shortly after the publication of the report, which was summarized and excerpted in the daily press, original excavator Morlet accused Dorothy Garrod, the only woman on the commission, of having tampered with the site. A group of archaeologists responded on her behalf, explaining what she had actually been doing and defending her professionalism – in the press.

At the most basic level, media coverage recorded the standard operating procedures of archaeology and its openness to outside scrutiny. This was in contrast to Morlet’s excavations, which limited access only to believers in the authenticity of Glozel.

Under the watchful eyes of reporters and photographers, the outside archaeologists investigating Glozel knew quite well that they were engaged in a kind of performance, one in which their discipline, as much as this particular discovery, was on trial.

Like the signs in my neighborhood proclaiming that “science is real,” the international commission depended on and sought to fortify the public’s confidence in the integrity of scientific inquiry. To do that, it needed the media even while expressing a healthy skepticism about it. It’s a balancing act that persists in today’s era of “trusting science.”

The Conversation

This article draws on research funded by the Institut d’Études Avancées (Paris), the Institute for Advanced Study (Princeton), and the National Endowment for the Humanities, as well as Daniel Sherman’s employer, the University of North Carolina at Chapel Hill.

ref. The Glozel affair: A sensational archaeological hoax made science front-page news in 1920s France – https://theconversation.com/the-glozel-affair-a-sensational-archaeological-hoax-made-science-front-page-news-in-1920s-france-260967

AI reveals which predators chewed ancient humans’ bones – challenging ideas on which ‘Homo’ species was the first tool-using hunter

Source: The Conversation – USA – By Manuel Domínguez-Rodrigo, Professor of Anthropology, Rice University

If *Homo habilis* was often chomped by leopards, it probably wasn’t the top predator. Made with AI (DALL-E 4)

Almost 2 million years ago, a young ancient human died beside a spring near a lake in what is now Tanzania, in eastern Africa. After archaeologists uncovered his fossilized bones in 1960, they used them to define Homo habilis – the earliest known member of our own genus.

Paleoanthropologists define the first examples of the genus Homo based largely on their bigger brains – and, sometimes, smaller teeth – compared with other, earlier ancestors such as the australopithecines – the most famous of these being Lucy. There were at least three types of early humans: Homo habilis, Homo rudolfensis and the best documented species, Homo erectus. At least one of them created sites now in the archaeological record, where they brought and shared food, and made and used some of the earliest stone tools.

These archaeological sites date to between 2.6 and 1.8 million years ago. The artifacts within them suggest greater cognitive complexity in early Homo than documented among any nonhuman primate. For example, at Nyayanga, a site in Kenya, anthropologists recently found that early humans were using tools they had transported over distances of up to 8 miles (13 kilometers). This action indicates forethought and planning.

Traditionally, paleoanthropologists believed that Homo habilis, as the earliest big-brained humans, was responsible for the earliest sites with tools. The idea has been that Homo habilis was the ancestor of later and even bigger-brained Homo erectus, whose descendants eventually led to us.

This narrative made sense when the oldest known Homo erectus remains were younger than 1.6 million years old. But given recent discoveries, this seems like a shaky foundation.

In 2015, my team discovered a 1.85 million-year-old hand bone at Olduvai Gorge, the same place the original Homo habilis had been found. But unlike the hand of that Homo habilis juvenile, this fossil looked like it belonged to a larger, more modern, fully land-based rather than tree-based human species: Homo erectus.

Over the past decade, new finds have continued to push back the earliest dates for Homo erectus: about 2 million years ago in South Africa, Kenya and Ethiopia. Taken together, these discoveries reveal that H. erectus is slightly older than the known H. habilis fossils. We cannot simply assume that H. habilis gave rise to H. erectus. Instead, the human family tree looks far bushier than we once thought.

What do all these finds suggest? Only one Homo species is our likely ancestor, and probably only one can be responsible for the complex behaviors revealed at the Olduvai Gorge sites. My colleagues and I hit on a way to test whether Homo habilis was top dog at Olduvai Gorge, so to speak, based on whether they were the hunters or the hunted.

Who was hunting whom?

At Olduvai Gorge, there is overwhelming evidence that early humans were consuming animals as big as a gazelle or even a zebra. Not only did they hunt, but they repeatedly brought these animals back to the same location for communal consumption. This is the concept of a “central provisioning place,” much like a campsite or home today. Dating to 1.85 million years ago, this is the oldest evidence of frequent meat-eating – and of early humans regularly acting as predators rather than prey.

All animals occupy a position on a food web, from top to lower ranks. Top-ranking predators, such as lions, are usually not preyed upon by lower ranking carnivores, such as hyenas.

If Homo habilis was acquiring large animal carcasses, either by hunting or by chasing lions away from their own kills, it seems logical that these hominids could effectively cope with predation risks. That is, a hunter usually isn’t hunted.

In African savannas, apex predators like lions do not usually die from other predator attacks. Humans today also occupy a top predatory niche: For example, Hadza hunter-gatherers in Tanzania not only hunt game, but also fend off lions from their kills, and successfully defend themselves from attacks by other predators, such as leopards.

But, if Homo habilis was not yet a top predator, then you would expect them to have occasionally been prey to lower-on-the-food-chain carnivorous cats – such as leopards – who often hunt primates.

Most known human fossils at this stage of evolution do bear traces of carnivore damage, including the two best preserved H. habilis fossils from Olduvai Gorge. Was it caused after death, by a scavenging carnivore? Or did a big cat at the top of the food chain kill these early humans?

My colleagues and I set out to determine which predators were getting their teeth on H. habilis, and whether they did so before or after the ancient humans died.

AI suggests H. habilis wasn’t an apex predator

Here’s where artificial intelligence comes in. Using computer vision, we trained AI on hundreds of microscopic images showing tooth marks left by the main carnivores in Africa today: lions, leopards, hyenas and crocodiles. The AI learned to recognize the subtle differences between the marks made by the different predators and was able to classify the marks with high accuracy.

four different magnified craters on brownish backgrounds
Tooth marks left by the four types of carnivores recorded. A: crocodile tooth pit; B: hyena tooth pit; C: lion tooth pit; and D: leopard tooth pit.
Domínguez-Rodrigo, M., et al. Sci Rep 14, 6881 (2024)

When we combined different AI approaches, they all pointed to the same result: The tooth marks on the Homo habilis bones matched those made by leopards. The size and shape of the marks on the fossils from those two early Homo habilis individuals line up with what leopards leave today when feeding on prey.
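The "combined AI approaches" step can be illustrated with a simple agreement check across independently trained classifiers. The model names and predictions below are hypothetical stand-ins, not the study's actual pipeline, which used computer-vision models trained on microscope images of tooth pits:

```python
from collections import Counter

# Hypothetical per-model predictions for a single tooth mark.
# In the real study, several independently trained vision models
# each classified the mark; agreement across models strengthens
# confidence in the final attribution.
predictions = {
    "model_a": "leopard",
    "model_b": "leopard",
    "model_c": "lion",
}

def consensus(preds: dict) -> tuple:
    """Return the majority label and the fraction of models agreeing."""
    label, count = Counter(preds.values()).most_common(1)[0]
    return label, count / len(preds)

label, agreement = consensus(predictions)
print(label, agreement)  # majority label with its agreement fraction
```

When every approach points to the same label, as the researchers report for the leopard attribution, the agreement fraction is 1.0 and the result is far harder to dismiss as an artifact of any single model.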

Our discovery challenges the long-standing view of Homo habilis as the first skilled toolmaker, hunter and meat-eater.

But maybe it shouldn’t be too surprising. The only complete skeleton of this species found at Olduvai Gorge belonged to a very small individual – just about 3 feet tall (less than 1 meter) – with a body that still showed features suited for climbing trees. That hardly matches the image of a hunter able to bring down large animals or steal carcasses from lions.

If it wasn’t Homo habilis performing these feats, maybe it was Homo erectus, a species with a larger body and more modern anatomy. But that opens up other mysteries for future researchers: What was Homo habilis doing at the archaeological sites of Olduvai Gorge if it was not responsible for the tools and signs of hunting we find there? Where exactly did Homo erectus come from, and how did it evolve?

My team and others will be returning to places like Olduvai Gorge to ask these questions in the years to come.

The Conversation

Manuel Domínguez-Rodrigo receives funding from the Spanish Ministry of Science and Universities

ref. AI reveals which predators chewed ancient humans’ bones – challenging ideas on which ‘Homo’ species was the first tool-using hunter – https://theconversation.com/ai-reveals-which-predators-chewed-ancient-humans-bones-challenging-ideas-on-which-homo-species-was-the-first-tool-using-hunter-266561

AI chatbots are becoming everyday tools for mundane tasks, use data shows

Source: The Conversation – USA – By Jeanne Beatrix Law, Professor of English, Kennesaw State University

The average person is more likely to use AI to come up with a meal plan than program a new app. Oscar Wong/Moment via Getty Images

Artificial intelligence is fast becoming part of the furniture. A decade after IBM’s Watson triumphed on “Jeopardy!,” generative AI models are in kitchens and home offices. People often talk about AI in science fiction terms, yet the most consequential change in 2025 may be its banal ubiquity.

To appreciate how ordinary AI use has become, it helps to remember that this trend didn’t start with generative chatbots. A 2017 Knowledge at Wharton newsletter documented how deep learning algorithms were already powering chatbots on social media and photo apps’ facial recognition functions. Digital assistants such as Siri and Alexa were performing everyday tasks, and AI-powered image generators could create images that fooled 40% of viewers.

When ChatGPT became publicly available on Nov. 30, 2022, the shift felt sudden, but it was built on years of incremental integration. AI’s presence is now so mundane that people consult chatbots for recipes, use them as study partners and rely on them for administrative chores. As a writer and professor who studies ways that generative AI can be an everyday collaborator, I find that recent usage reports show how AI has been woven into everyday life. (Full disclosure: I am a member of OpenAI’s Educator Council, an uncompensated group of higher education faculty who provide feedback to OpenAI on educational use cases.)

Who’s using ChatGPT and why?

Economists at OpenAI and Harvard analyzed 1.5 million ChatGPT conversations from November 2022 through July 2025. Their findings show that adoption has broadened beyond early users: It’s being used all over the world, among all types of people. Adoption has grown fastest in low- and middle-income countries, and growth rates in the lowest-income countries are now more than four times those in the richest nations.

Most interactions revolve around mundane activities. Three-quarters of conversations involve practical guidance, information seeking and writing: activities such as getting advice on cooking an unusual dish, finding the nearest pharmacy and getting feedback on email drafts. More than 70% of ChatGPT use is for nonwork tasks, demonstrating AI’s role in people’s personal lives. The economists found that 73% of messages were not related to work as of June 2025, up from 53% in June 2024.

Claude and the geography of adoption

Anthropic’s economic index paints a similar picture of uneven AI adoption. Researchers at the company tracked users’ conversations with the company’s Claude AI chatbot relative to working-age population. The data shows sharp contrasts between nations. Singapore’s per-capita use is 4.6 times higher than expected based on its population size, and Canada’s is 2.9 times higher. India and Nigeria, meanwhile, use Claude at only a quarter of predicted levels.

In the United States, use reflects local economies, with activity tied to regional strengths: tech in California, finance in Florida and documentation in D.C. In lower-use countries, more than half of Claude’s activity involves programming. In higher-use countries, people apply it across education, science and business. High-use countries favor humans working iteratively with AI, such as refining text, while low-use countries rely more on delegating full tasks, such as finding information.

It’s important to note that OpenAI reports between 400 million and 700 million weekly active users in 2025, while third-party analytics estimate Claude at roughly 30 million monthly active users during a similar time period. For comparison, Gemini had approximately 350 million monthly active users and Microsoft reported in July 2025 more than 100 million monthly active users for its Copilot apps. Perplexity’s CEO reported in an interview that the company’s language AI has a “user base of over 30 million active users.”

While these metrics all come from a similar time period, mid-2025, they differ in how they are reported, particularly weekly versus monthly active users. By any measure, though, ChatGPT’s user base is by far the largest, making it the most widely used generative AI tool for everyday tasks.

Everyday tool

So, what do mundane uses of AI look like at home? Consider these scenarios:

  • Meal planning and recipes: A parent asks ChatGPT for vegan meal ideas that use leftover kale and mushrooms, saving time and reducing waste.
  • Personal finance: ChatGPT drafts a budget, suggests savings strategies or explains the fine print of a credit card offer, translating legalese into plain language.
  • Writing support: Neurodivergent writers use ChatGPT to organize ideas and scaffold drafts. A writer with ADHD can upload notes and ask the model to group them into themes, then expand each into a paragraph while keeping the writer’s tone and reasoning. This helps reduce cognitive overload and supports focus, while the writer retains their own voice.

These scenarios illustrate that AI can help with mundane decisions, act as a sounding board and support creativity. The help with mundane tasks can be a big lift: By handling routine planning and information retrieval, AI frees people to focus on empathy, judgment and reflection.

From extraordinary to ordinary tool

AI has transitioned from a futuristic curiosity to an everyday co-pilot, with voice assistants and generative models helping people write, cook and plan.

Inviting AI to our kitchen tables not as a mysterious oracle but as a helpful assistant means cultivating AI literacy and learning prompting techniques. It means recognizing AI’s strengths, mitigating its risks and shaping a future where intelligence — human and artificial — works for everyone.

The Conversation

Jeanne Beatrix Law serves on the OpenAI Educator Council, an uncompensated group of higher education faculty who provide feedback to OpenAI on educational use cases and occasionally tests models for those use cases.

ref. AI chatbots are becoming everyday tools for mundane tasks, use data shows – https://theconversation.com/ai-chatbots-are-becoming-everyday-tools-for-mundane-tasks-use-data-shows-266670

Why the Trump administration’s comparison of antifa to violent terrorist groups doesn’t track

Source: The Conversation – USA – By Art Jipson, Associate Professor of Sociology, University of Dayton

President Donald Trump speaks at the White House during a meeting on antifa, as Attorney General Pam Bondi, left, and Homeland Security Secretary Kristi Noem listen, on Oct. 8, 2025. AP Photo/Evan Vucci

When Homeland Security Secretary Kristi Noem compared antifa to the transnational criminal group MS-13, Hamas and the Islamic State group in October 2025, she equated a nonhierarchical, loosely organized movement of antifascist activists with some of the world’s most violent and organized militant groups.

“Antifa is just as dangerous,” she said.

It’s a sweeping claim that ignores crucial distinctions in ideology, organization and scope. Comparing these groups is like comparing apples and bricks: They may both be organizations, but that’s where the resemblance stops.

Noem’s statement echoed the logic of a September 2025 Trump administration executive order that designated antifa as a “domestic terrorist organization.” The order directs all relevant federal agencies to investigate and dismantle any operations, including the funding sources, linked to antifa.

But there is no credible evidence from the FBI or the Department of Homeland Security that supports such a comparison. Independent terrorism experts don’t see the similarities either.

Data shows that the movement can be confrontational and occasionally violent. But antifa is neither a terrorist network nor a major source of organized lethal violence.

Antifa, as understood by scholars and law enforcement, is not an organization in any formal sense. It lacks membership rolls and leadership hierarchies. It doesn’t have centralized funding.

As a scholar of social movements, I know that antifa is a decentralized movement animated by opposition to fascism and far-right extremism. It’s an assortment of small groups that mobilize around specific protests or local issues. And its tactics range from peaceful counterdemonstrations to mutual aid projects.

For example, in Portland, Oregon, local antifa activists organized counterdemonstrations against far-right rallies in 2019.

Antifa groups active in Houston during Hurricane Harvey in 2017 coordinated food, supplies and rescue support for affected residents.

No evidence of terrorism

The FBI and DHS have classified certain anarchist or anti-fascist groups under the broad category of “domestic violent extremists.” But neither agency nor the State Department has ever previously designated antifa as a terrorist organization.

The data on political violence reinforces this point.

A woman holds a sign while protesting immigration raids in San Francisco on Oct. 23, 2025.
AP Photo/Noah Berger

A 2022 report by the Counter Extremism Project found that the overwhelming majority of deadly domestic terrorist incidents in the United States in recent years were linked to right-wing extremists. These groups include white supremacists and anti-government militias that promote racist or authoritarian ideologies. They reject democratic authority and often seek to provoke social chaos or civil conflict to achieve their goals.

Left-wing or anarchist-affiliated violence, including acts attributed to antifa-aligned people, accounts for only a small fraction of domestic extremist incidents and almost none of the fatalities. Similarly, in 2021, the George Washington University Program on Extremism found that anarchist or anti-fascist attacks are typically localized, spontaneous and lacking coordination.

By contrast, the organizations Noem invoked – Hamas, the Islamic State group and MS-13 – share structural and operational characteristics that antifa lacks.

They operate across borders and are hierarchically organized. They are also capable of sustained military or paramilitary operations. They possess training pipelines, funding networks, propaganda infrastructure and territorial control. And they have orchestrated mass-casualty attacks such as the 2015 Paris attacks and the 2016 Brussels bombings.

In short, they are military or criminal organizations with strategic intent. Noem’s claim that antifa is “just as dangerous” as these groups is not only empirically indefensible but rhetorically reckless.

Turning dissent into ‘terrorism’

So why make such a claim?

Noem’s statement fits squarely within the Trump administration’s broader political strategy that has sought to inflate the perceived threat of left-wing activism.

Casting antifa as a domestic terrorist equivalent of the Islamic State group or Hamas serves several functions.

It stokes fear among conservative audiences by linking street protests and progressive dissent to global terror networks. It also provides political cover for expanded domestic surveillance and harsher policing of protests.

Demonstrators hold protest signs during a march from the Atlanta Civic Center to the Georgia State Capitol on Oct. 18, 2025, in Atlanta.
Julia Beverly/Getty Images

Additionally, it discredits protest movements critical of the right. In a polarized media environment, such rhetoric performs a symbolic purpose. It divides the moral universe into heroes and enemies, order and chaos, patriots and radicals.

Noem’s comparison reflects a broader pattern in populist politics, where complex social movements are reduced to simple, threatening caricatures. In recent years, some Republican leaders have used antifa as a shorthand for all forms of left-wing unrest or criticism of authority.

Antifa’s decentralized structure makes it a convenient target for blame. That’s because it lacks clear boundaries, leadership and accountability. So any act by someone identifying with antifa can be framed as representing the whole movement, whether or not it does. And by linking antifa to terrorist groups, Noem, the top anti-terror official in the country, turns a political talking point into a claim that appears to carry the weight of national security expertise.

The problem with this kind of rhetoric is not just that it’s inaccurate. Equating protest movements with terrorist organizations blurs important distinctions that allow democratic societies to tolerate dissent. It also risks misdirecting attention and resources away from more serious threats — including organized, ideologically driven groups that remain the primary source of domestic terrorism in the U.S.

As I see it, Noem’s claim reveals less about antifa and more about the political uses of fear.

By invoking the language of terrorism to describe an anti-fascist movement, she taps into a potent emotional current in American politics: the desire for clear enemies, simple explanations and moral certainty in times of division.

But effective homeland security depends on evidence, not ideology. To equate street-level confrontation with organized terror is not only wrong — it undermines the credibility of the very institutions charged with protecting the public.

The Conversation

Art Jipson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the Trump administration’s comparison of antifa to violent terrorist groups doesn’t track – https://theconversation.com/why-the-trump-administrations-comparison-of-antifa-to-violent-terrorist-groups-doesnt-track-267514

“I went outside and cried”: what aged care staff say about their grief when residents die

Source: The Conversation – in French – By Jennifer Tieman, Matthew Flinders Professor and Director of the Research Centre for Palliative Care, Death and Dying, Flinders University

Repeated experiences of death can lead to cumulative grief. Maskot/Getty Images

As the population ages, we are living longer and dying older. End-of-life care is therefore becoming an increasingly important part of aged care. In Canada, about 30% of people aged 85 and over live in a nursing home or residence for older people, a proportion that rises significantly with advanced age.

But what does this mean for those who work in the aged care sector? Research suggests that care staff experience a particular kind of grief when residents die. Yet their grief often goes unnoticed, and they can find themselves without adequate support.




Building relationships over time

Staff in aged care facilities do more than help residents shower or eat: they are actively involved in residents’ lives and form bonds with them.

In our own research, we spoke with care workers who look after older people in residential facilities and in their own homes.

Care staff know that many of the people they look after will die, and that they have a role to play in supporting them towards the end of their lives. Through their work, they often build enriching and rewarding relationships with the older people in their care.

As a result, the death of an older person can be a source of deep sadness for care staff. As one of them told us:

I know I grieve for some of those who die […] You spend time with them and you love them.

Some of the carers we interviewed spoke of being present with older people, talking to them or holding their hand as they died. Others explained that they shed tears for the person who had died, but also for their own loss, because they had known the older person and been involved in their life.

I think what made it worse was when her breathing became very shallow and I knew she was coming to the end. I went outside. I told her I was stepping out for a moment. I went outside and cried because I wished I could save her, but I knew I couldn’t.

Sometimes care staff do not get the chance to say goodbye, or to be recognized as someone who has suffered a loss, even after caring for the person for months or years. One aged care worker noted:

If people die in hospital, it’s a different kind of grief. Because they can’t say goodbye. Often the hospital doesn’t tell you.

Care staff must often help families and loved ones come to terms with the death of a parent, relative or friend. This can add to the emotional burden on staff who may themselves be grieving.






Cumulative grief

Repeated experiences of death can lead to cumulative grief and emotional strain. While the staff we interviewed found meaning and value in their work, they also found it hard to be regularly confronted with death.

One staff member told us that over time, after facing many deaths, you can “feel a bit robotic. Because you have to become that way to be able to cope.”

Organizational problems such as understaffing or heavy workloads can also exacerbate these feelings of exhaustion and dissatisfaction. Staff stressed the need to be able to count on support to cope with this situation.

Sometimes all you want is to talk. You don’t need someone to solve anything for you. You just want to be heard.




Helping staff manage their grief

Aged care organizations must take steps to support the wellbeing of their staff, including recognizing the grief many feel when older people die.

After an older person dies, offering support to staff who worked closely with that person, and acknowledging the emotional bonds that existed between them, are effective ways to recognize and validate staff grief. It can be as simple as asking a staff member how they are doing, or giving them the opportunity to take time to grieve the person who has died.






Workplaces should also more broadly encourage self-care practices, promoting activities such as scheduled breaks, relationships with colleagues, and prioritizing downtime and physical activity. Staff value workplaces that encourage, normalize and support their self-care practices.

We also need to think about how we can normalize talking about death and dying within our families and communities. A reluctance to acknowledge death as part of life can add to the emotional burden on staff, especially if families see death as a failure of the care provided.

Conversely, aged care staff told us time and again how important it was to receive positive feedback and recognition from families. As one carer recalled:

We had a death this weekend. It was a very long-term resident. And her daughter came in specially this morning to tell me how fantastic the care had been. That comforts me; it confirms that what we are doing is right.

As members of families and communities, we need to recognize that care workers are particularly vulnerable to feelings of grief and loss, because they have often built relationships over months or years with the people they care for. By supporting the wellbeing of these essential workers, we help them keep caring for us and our loved ones as we age and approach the end of our lives.

The Conversation Canada

Jennifer Tieman receives funding from the Department of Health, Disability and Ageing, the Department for Health and Wellbeing (SA) and the Medical Research Future Fund. Specific research grants, as well as national grants for projects such as ELDAC, CareSearch and palliAGED, enabled the research and projects whose findings and resources are presented in this article. Jennifer is a member of various committees and project advisory groups, including the Advance Care Planning Australia steering committee, the IHACPA Aged Care Network and Palliative Care Australia’s national expert advisory group.

Dr Priyanka Vandersman receives funding from Department of Health, Disability and Ageing. She is affiliated with Flinders University, End of Life Directions for Aged Care project. She is a Digital Health adviser for the Australian Digital Health Agency, and serves as committee member for the Nursing and Midwifery in Digital Health group within the Australian Institute of Digital Health, as well as Standards Australia’s MB-027 Ageing Societies committee.

ref. « Je suis sorti et j’ai pleuré » : ce que le personnel des établissements pour personnes âgées dit de son chagrin lorsque des résidents décèdent – https://theconversation.com/je-suis-sorti-et-jai-pleure-ce-que-le-personnel-des-etablissements-pour-personnes-agees-dit-de-son-chagrin-lorsque-des-residents-decedent-263502