The oldest rocks on Earth are more than four billion years old

Source: The Conversation – Canada – By Hanika Rizo, Associate Professor, Department of Earth Sciences, Carleton University

Earth formed about 4.6 billion years ago, during the geological eon known as the Hadean. The name “Hadean” comes from the Greek god of the underworld, reflecting the extreme heat that likely characterized the planet at the time.

By 4.35 billion years ago, the Earth might have cooled down enough for the first crust to form and life to emerge.

However, very little is known about this early chapter in Earth’s history, as rocks and minerals from that time are extremely rare. This lack of preserved geological records makes it difficult to reconstruct what the Earth looked like during the Hadean Eon, leaving many questions about its earliest evolution unanswered.

We are part of a research team that has confirmed the oldest known rocks on Earth are located in northern Québec. Dating back more than four billion years, these rocks provide a rare and invaluable glimpse into the origins of our planet.

two men stand on rocks examining pieces in their hands
Geologists Jonathan O’Neil and Chris Sole examine rocks in northern Québec.
(H. Rizo), CC BY

Remains from the Hadean Eon

The Hadean is the first eon in the geological timescale, spanning from Earth’s formation 4.6 billion years ago to around 4.03 billion years ago.

The oldest terrestrial materials ever dated by scientists are extremely rare zircon minerals that were discovered in western Australia. These zircons were formed as early as 4.4 billion years ago, and while their host rock eroded away, the durability of zircons allowed them to be preserved for a long time.

Studies of these zircon minerals have given us clues about the Hadean environment, and about the formation and evolution of Earth’s oldest crust. The zircons’ chemistry suggests that they formed in magmas produced by the melting of sediments deposited at the bottom of an ancient ocean. If so, the zircons are evidence that the Earth cooled rapidly during the Hadean and that liquid-water oceans formed early on.

Other research on the Hadean zircons suggests that the Earth’s earliest crust was mafic (rich in magnesium and iron). Until recently, however, the existence of that crust remained to be confirmed.

In 2008, a study led by one of us — associate professor Jonathan O’Neil (then a McGill University doctoral student) — proposed that rocks of this ancient crust had been preserved in northern Québec and were the only known vestige of the Hadean.

Since then, the age of those rocks — found in the Nuvvuagittuq Greenstone Belt — has been controversial and the subject of ongoing scientific debate.

a flat, rocky landscape
The Nuvvuagittuq Greenstone Belt in northern Québec.
(H. Rizo), CC BY

‘Big, old solid rock’

The Nuvvuagittuq Greenstone Belt is located in the northernmost region of Québec, in the Nunavik region above the 55th parallel. Most of the rocks there are metamorphosed volcanic rocks, rich in magnesium and iron. The most common rocks in the belt are called the Ujaraaluk rocks, meaning “big old solid rock” in Inuktitut.

The age of 4.3 billion years was proposed after variations were detected in neodymium-142, an isotope produced exclusively during the Hadean through the radioactive decay of samarium-146. The relationship between samarium and neodymium isotope abundances had previously been used to date meteorites and lunar rocks, but before 2008 it had never been applied to Earth rocks.
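
To see why neodymium-142 variations point specifically to the Hadean, it helps to write down the standard radioactive decay law. This is a generic sketch rather than a formula from the study itself, and the half-life below is an approximate published value for samarium-146:

\[
N(t) = N(0)\,e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad t_{1/2}\left(^{146}\mathrm{Sm}\right) \approx 10^{8}\ \text{years}
\]

Because this half-life is short by geological standards, virtually all of the samarium-146 present when Earth formed had decayed within the planet’s first few hundred million years. Any excess or deficit of its daughter isotope, neodymium-142, must therefore have been produced while the Hadean Eon was still under way.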

This interpretation, however, was challenged by several research groups, some of whom studied zircons within the belt and proposed a younger age of at most 3.78 billion years, placing the rocks in the Archean Eon instead.

Confirming the Hadean Age

In the summer of 2017, we returned to the Nuvvuagittuq belt to take a closer look at the ancient rocks. This time, we collected intrusive rocks, called metagabbros, that cut across the Ujaraaluk rock formation, hoping to obtain independent age constraints. Because these newly studied metagabbros intrude into the Ujaraaluk rocks, the Ujaraaluk rocks must be older.

The project was led by master’s student Chris Sole at the University of Ottawa, who joined us in the field. Back in the laboratory, we collaborated with French geochronologist Jean-Louis Paquette. Two undergraduate students, David Benn (University of Ottawa) and Joeli Plakholm (Carleton University), also participated in the project.

We combined our field observations with petrology, geochemistry and geochronology, and applied two independent samarium-neodymium dating methods, techniques used to determine the absolute ages of magmatic rocks before they were metamorphosed. Both methods yielded the same result: the intrusive rocks are 4.16 billion years old.

a rocky landscape silhouetted by sunset
Sunset at the Nuvvuagittuq Greenstone Belt.
(H. Rizo), CC BY

The oldest rocks

Since these metagabbros cut across the Ujaraaluk formation, the Ujaraaluk rocks must be even older, placing them firmly in the Hadean Eon.

Studying the Nuvvuagittuq rocks, the only preserved rocks from the Hadean, provides a unique opportunity to learn about the earliest history of our planet. They can help us understand how the first continents formed, and how and when Earth’s environment evolved to become habitable.

The Conversation

Hanika Rizo receives funding from the Natural Sciences and Engineering Research Council of Canada (NSERC).

Jonathan O’Neil receives funding from the Natural Sciences and Engineering Research Council of Canada.

ref. The oldest rocks on Earth are more than four billion years old – https://theconversation.com/the-oldest-rocks-on-earth-are-more-than-four-billion-years-old-259657

University leaders have to make sense of massive disruption — 4 ways they do it

Source: The Conversation – Canada – By Daniel Atlin, Adjunct Professor, Gordon S. Lang School of Business, University of Guelph

Trying to navigate an environment where massive disruption and unprecedented change is the norm presents a challenge for business leaders everywhere.

Social-purpose, multi-stakeholder organizations like post-secondary institutions, hospitals, governments and NGOs are particularly affected.

The practice of “sense-making” — making sense of the situations people find themselves in, in the words of organizational theorist Karl Weick — offers an innovative and timely framework that can help social-purpose leaders address complexity.

Senior post-secondary leaders study

Management experts have described sense-making as the key skill needed in an age of disruption. This has been confirmed through my research while completing a master’s degree in change leadership.

I interviewed more than two dozen senior leaders in complex organizations in Canada, the United Kingdom, Australia and New Zealand — the majority of whom were in the post-secondary sector. I found the leaders I interviewed were intuitively using elements from Weick’s organizational sense-making framework.

As one leader shared:

“The first thing you need to do is to recognize that it’s your role to help the rest of your community make sense of what’s happening around you. It’s something that I take very seriously.”

Deborah Ancona, professor of management at MIT, says:

“Sense-making is most often needed when our understanding of the world becomes unintelligible in some way. This occurs when the environment is changing rapidly, presenting us with surprises for which we are unprepared or confronting us with adaptive, rather than technical problems to solve.”

Leading in ‘age of outrage’

Social-purpose organizations face common issues such as a lack of funding, system fragmentation, competing stakeholders, new entrants and the challenges of emerging technologies.

They are also at the centre of what business and public policy professor Karthik Ramanna describes as “the age of outrage,” reflected in heightened polarization. Against this backdrop, it’s increasingly challenging to attract and retain leaders.

I heard from leaders who felt they didn’t have the proper training for the job or support once they started their roles. In part, this is because few of them, including those involved in their hiring, seem to realize the actual messiness inherent within their organizations.

This brings to mind the parable that writer David Foster Wallace used in his 2005 convocation speech at Kenyon College, in which two young fish are told by an older fish that they are swimming in water. One of the young fish then turns to the other in surprise and says: “What is water anyway?”

Lack of agency

I heard from various leaders who experienced an “aha” moment when they realized they were immersed within a fluid and dynamic organizational environment that they were expected to run like a traditional business. This realization gave them a framework to understand the lack of agency they often experienced.

The challenge with social-purpose organizations is that they’re complex adaptive systems in which individual interactions form an ever-changing array of networks generating emergent behaviours that are often unpredictable. Complex adaptive systems also tend to revert to the status quo when faced with change.

So how do social-purpose leaders navigate change and this challenging organizational context? They wrap their efforts around purpose. It’s an anchor point and unifying focus for leaders, teams and all stakeholders.

4 strategies

Based on my research, I’ve identified four main sense-making strategies that leaders use:

Exploration and map-making: These pursuits help leaders extract a steady flow of information and data from their interactions both inside and outside their organizations. This allows them to develop high-level, adaptive frameworks that are constantly in flux — similar to Google Maps, as it generates live snapshots of traffic flows and suggested routes.

Storytelling and narrative development: Leaders use storytelling and narrative development to project ideas, purposes and visions into the future. This allows them to connect emotionally and inspire people and communities. Recognizing their role as storyteller-in-chief can align disparate parts of an organization into a coherent and engaged whole.

Invention and improvisation: These are employed by leaders to test assumptions as they learn what works and what doesn’t. This approach allows them to respond in real time to the never-ending flow of new information. Leaders who avoid taking risks altogether can end up stuck in paralysis.

Adaptation and collaboration: These allow leaders to help their organizations remain relevant. Leaders spoke about the need to foster adaptation. They also stressed the need to attract new resources through collaboration across like-minded institutions, governments, funding partners and the private sector.

Embracing a sense-making mindset

Thinking that benefits the interests and perspectives of the total enterprise is a critical but challenging task for leaders in social-purpose organizations.

Time and energy — two scarce resources — are necessary to build aligned and high-performing teams and to break down silos. Team alignment cannot be achieved through the occasional team-building session, but requires an ongoing commitment and a well-articulated plan.

Social-purpose organizations need practices, frameworks and metrics that are tailored to organizations’ unique needs. Rather than spending resources, time and energy on strategic plans, some leaders are building more flexible strategic frameworks or using strategic foresight to guide an innovative vision for the future.

Leadership can be lonely

It’s also important to remember that leadership can be lonely. To survive and thrive, social-purpose leaders must remember to seek out their own coaches and build communities of practice to enhance their lived experience and activities.

Developing an outer shell to weather criticism also helps. While leaders can’t please everyone, sense-making leaders find strength and build endurance in the recognition that the roles they play are meaningful, satisfying and essential — not only within the organizations they serve but through the collective work their organizations accomplish in the world.

Leaders (and board members) must realize that hiring the same people with the same profile as the past won’t make an organization ready for change, but instead reinforces the status quo.

By recognizing the messiness of their organizations and using sense-making skills, leaders in social-purpose organizations have better odds of surviving the perils and challenges of massive disruption and unprecedented change.

The Conversation

Daniel Atlin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. University leaders have to make sense of massive disruption — 4 ways they do it – https://theconversation.com/university-leaders-have-to-make-sense-of-massive-disruption-4-ways-they-do-it-257866

Workplaces have embraced mindfulness and self-compassion — but did capitalism hijack their true purpose?

Source: The Conversation – Canada – By Yasemin Pacaci, Postdoctoral Fellow, Smith School of Business, Queen’s University, Ontario

When practiced with integrity, mindfulness and self-compassion can improve the collective well-being and personal agency of employees. (Shutterstock)

Mindfulness and self-compassion have become popular tools for improving mental health and well-being in the workplace. Mindfulness involves paying attention to thoughts, emotions and surroundings without judgment, much like watching clouds pass in the sky. This moment-to-moment awareness helps people respond skilfully rather than react automatically.

Self-compassion builds on mindfulness by encouraging people to meet difficult feelings and experiences with kindness instead of resistance. In other words, mindfulness helps people first recognize their suffering, while self-compassion helps people respond with kindness.

Both mindfulness and self-compassion can be practised formally through meditations like body scans, breath awareness or loving-kindness meditation, and informally by bringing mindful attention to mind, emotions and everyday activities.

Both practices have the potential to transform dysfunctional workplaces by improving the collective well-being and personal agency of employees.

Yet too often, these practices are introduced superficially to boost productivity and performance, rather than used to address the root causes of workplace stress. It’s a pattern I’ve witnessed repeatedly in my years as a mindfulness teacher and researcher.

This brings into question whether these practices can thrive in capitalist systems that prioritize profit over people. But rather than rejecting mindfulness and self-compassion as incompatible with capitalism, I argue we need a more thoughtful framework that stays true to their essence while tackling common misunderstandings and misuses.

How capitalism is co-opting mindfulness

Academic and practitioner critics have raised concerns about how mindfulness and self-compassion practices are being integrated into corporate life.

Some of these critics argue that companies are incorporating mindfulness and self-compassion practices not to fix systemic problems, but to boost their own productivity and shift the responsibility for stress onto employees.

In these cases, critics use the term “McMindfulness” to describe a commodified, diluted version of mindfulness that is stripped of its roots in Buddhist philosophy.

Group of people having a meeting around a conference table in an office
If organizations want to reap the full benefits of mindfulness and self-compassion, they need to take a more deliberate, systemic approach.
(Unsplash/Redd Francisco)

Some critics have gone further, claiming that mindfulness encourages contentment with the status quo and may make employees more vulnerable to exploitation.

While these critiques raise valid concerns, they often create more confusion and resistance than meaningful dialogue or practical solutions for implementing mindfulness and self-compassion in the workplace.

Empirical research offers a more nuanced perspective. Mindfulness and self-compassion, when practised consistently, can strengthen employees’ sense of agency, improve their self-confidence, support ethical decision-making and encourage action for meaningful change.

Done right, mindfulness can help workers

Employees who develop mindfulness and self-compassion skills tend to respond in three main ways, according to research.

First, they become more aware of dysfunction in the workplace. This awareness can empower them to speak up and advocate for change if it’s within their control and in their own interest. It can also cause them to engage in more ethical practices, especially in toxic work environments.

Second, they are more likely to leave toxic work environments. When employees realize change is beyond their control, mindfulness and self-compassion can cause them to lose their motivation for work and, indirectly, might prompt them to leave toxic workplaces altogether.

Third, employees who end up staying in their roles are better able to acknowledge stressors and become less affected by them. However, this doesn’t mean they become more productive or blindly enthusiastic about their jobs. Mindfulness enhances motivation that stems from genuine interest, not from pressure or obligation.

It’s important to note that mindfulness doesn’t mean these employees condone poor conditions or toxic practices. Rather, it helps them see reality more clearly, without denial or avoidance.

And for employers hoping mindfulness will instantly boost engagement or drive performance, research shows employees may actually become more critical of their work and less willing to perform mundane tasks.

Towards true workplace transformation

Mindfulness alone cannot fix a toxic workplace. When organizations introduce mindfulness programs without first addressing the underlying causes of stress or toxicity, they’re unlikely to see the results they expect.

If organizations want to reap the full benefits of mindfulness and self-compassion, they need to take a more deliberate, structured approach. Psychologist Kurt Lewin’s three-step change management model offers a useful guide:

Step 1. Unfreeze: Address the root causes of workplace stress

  • Address systemic stressors. Before introducing any well-being initiative, organizations must confront actual sources of stress such as excessive workloads, toxic leadership and job insecurity.
  • Correct misunderstandings. Clarify what mindfulness and self-compassion actually are to reduce scepticism and confusion.
  • Avoid mandatory participation. Giving employees the freedom to opt in fosters authentic engagement and sustains interest.
A woman looks down at a sheaf of papers in her hands with an annoyed look on her face
Without addressing the systemic causes of stress, mindfulness practices can prove ineffective.
(Shutterstock)

Step 2. Change: Implement practices ethically and intentionally

  • Lead by example at the top. Instead of only offering these programs to employees, leaders should engage with mindfulness and self-compassion practices themselves. When senior figures lead by example, these programs gain legitimacy and workplaces foster more ethical, people-centered leadership that goes beyond performance and productivity.
  • Ensure cultural sensitivity. Small cultural adaptations can improve the inclusion of mindfulness and self-compassion sessions. For instance, research has found that in Hispanic communities, using familiar stories or proverbs can make mindfulness sessions more relatable and improve engagement.
  • Preserve ethical foundations. Present mindfulness and self-compassion as universal practices, not tied to any one religion. This preserves their ethical underpinnings while ensuring they remain universal and accessible to all.

Step 3. Freeze: Embed mindfulness and self-compassion into workplace culture

  • Encourage small, daily practices. Offer simple tools like journaling or mindful breathing breaks that employees can tailor to their own needs and schedules.
  • Provide ongoing support. Create time and space for continued practice, such as guided meditations, mindfulness moments in meetings or gratitude boards so new habits take root.
  • Measure impact holistically. Consider hiring qualified professionals to evaluate program effectiveness, address emerging needs and keep the organization moving forward.

Moving beyond wellness window-dressing

Mindfulness and self-compassion are not magic bullets, but they can still be powerful catalysts for change.

When introduced with a deliberate and thoughtful approach, mindfulness and self-compassion can help workplaces move beyond shallow wellness “hacks” toward truly transformative practices, even in high-pressure, profit-driven environments.

Far from serving as a quick fix or a mere productivity tool, these practices encourage employees to challenge the status quo, take meaningful action, build healthier relationships and make more ethical decisions. They can help individual employees flourish within and beyond their workplaces.

The true value of mindfulness and self-compassion practices lies not in short-term outcomes or surface-level improvements, but in helping individuals become more aware of themselves, their surroundings and the choices they make, regardless of outcome or context.

The Conversation

Yasemin Pacaci does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Workplaces have embraced mindfulness and self-compassion — but did capitalism hijack their true purpose? – https://theconversation.com/workplaces-have-embraced-mindfulness-and-self-compassion-but-did-capitalism-hijack-their-true-purpose-258043

Parental controls on children’s tech devices are out of touch with child’s play

Source: The Conversation – Canada – By Sara M. Grimes, Wolfe Chair in Scientific and Technological Literacy and Professor, McGill University

Parenting in the digital age can be stressful and demands a lot from parents.

The Family Online Safety Institute (FOSI) recently released its annual Online Safety Survey, which found that almost 50 per cent of parents surveyed aren’t using parental controls to manage their children’s devices. These are tools that would ostensibly help parents filter out inappropriate content or unwanted interactions on their children’s devices.

The FOSI authors conclude the reason parents aren’t using the tools is because they feel “overwhelmed” and recommend parents educate themselves as a good first step toward broader use.

While overwhelm is a real thing, we suggest a bigger problem with parental controls is how they are designed. This includes how little attention is given to supporting open communication between parents and children.

Once a year for the past three years, we’ve asked the same 33 children (initially aged six to 12) what they think about content ratings, online safety, game monetization and privacy. Our team drew on its combined expertise in communication, education, policy and game studies to analyze their answers.

We also asked their parents how they mediated their kids’ gaming. Nearly half of them don’t use parental controls either. They say parental controls don’t always work as promised, offer little context about how settings affect gameplay and force binary choices that don’t align with household rules or with children’s maturity levels.

The parents we asked said they aren’t avoiding parental controls because they feel overwhelmed by them. It’s that the tools are poorly designed.

Parental controls can introduce more problems

At the same time, many of the parents described themselves as highly engaged in their child’s gameplay, talking with their children regularly or encouraging play in shared, supervised spaces. Several said they choose to trust their child rather than set top-down limits.

Our findings align with previous research on digital parenting. In one British study, parents said they felt some controls were valuable supplements to mediation, while other controls were poorly designed, introducing more problems than solutions.

The use of parental controls doesn’t necessarily translate to increased child safety. In fact, using parental controls can create a disconnect between parents and children on key safety issues.

Awareness of risks

Six children we interviewed were not aware their parents were using controls, and at least two children revealed they didn’t even know why a parent would use parental controls in the first place. In this context, parents’ efforts to protect their children had the unintended side effect of obscuring vital knowledge, leaving the children unaware of some of the key risks associated with playing online. Parental controls can remove opportunities to teach kids about safety if they aren’t part of the conversation.

We believe that the behind-the-scenes protections enabled by (some) parental controls can be detrimental to parent-child communication about online safety. What are the risks? How can children avoid the riskiest behaviour? What should they do when or if they’ve encountered danger?

Meanwhile, parents aren’t always familiar with the features and activities they are asked to restrict or allow. Very few parental controls contain information about how gameplay will be impacted by their settings. Many contain terms only someone familiar with the game would understand, while others are hard to navigate.

All of this can lead to misinterpretations and parent-child conflicts, making the tools even harder to use.

Power of communication

Open communication between parents and children on safety topics fosters trust, which increases the likelihood kids will turn to their parents for help when something dangerous happens.

It enables children to build resiliency, which in turn reduces the risk they’ll be harmed by negative online encounters.

Research also suggests that parent-child communication may be more effective at helping to avoid harm than embedded restrictions enabled by parental controls.

The importance of open communication is also emphasized in the FOSI report. In households where conversations about online safety happened regularly (six times or more a year), parents and children were both more likely to view parental controls as a useful and valuable tool for online safety.

This, the authors conclude, “supports the view of online safety as a collaborative effort as opposed to a priority imposed by parents on their children.”

On this point, we couldn’t agree more. Families would benefit from making parental controls and safety settings a family affair. Kids and parents have a lot to learn from each other about the digital world, and reviewing these systems together can provide a much-needed opening for crucial conversations about risk, safety and what kids find meaningful about digital play.

Rethinking safety tools

Let’s not pretend parental controls are a panacea for child safety.

Many parental controls contain serious design flaws and limitations. Very few comprehensively address the needs and concerns of either children or their parents.

Now that lawmakers are starting to make parental controls a mandatory part of new child safety legislation, we urgently need to start taking a closer and more critical look at what they can and can’t do.

Parental controls can be a useful tool when they are designed well, applied with transparency, and provide families with ample options so they can be tailored to not only fit with but foster household rules and open communication.

There’s a lot of work to be done before this is the standard, but there is also a growing impetus for game and other tech companies to make it happen.

The Conversation

Sara M. Grimes receives funding from the Social Sciences and Humanities Research Council (SSHRC) of Canada.

Riley McNair does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Parental controls on children’s tech devices are out of touch with child’s play – https://theconversation.com/parental-controls-on-childrens-tech-devices-are-out-of-touch-with-childs-play-257874

‘Big’ legislative package shifts more of SNAP’s costs to states, saving federal dollars but causing fewer Americans to get help paying for food

Source: The Conversation – USA (2) – By Tracy Roof, Associate Professor of Political Science, University of Richmond

People shop for food in Brooklyn in 2023 at a store that makes sure that its customers know it accepts SNAP benefits, also known as food stamps and EBT.
Spencer Platt/Getty Images

The legislative package that President Donald Trump signed into law on July 4, 2025, has several provisions that will shrink the safety net, including the Supplemental Nutrition Assistance Program, long known as food stamps. SNAP spending will decline by an estimated US$186 billion through 2034 as a result of several changes Congress made to the program that today helps roughly 42 million people buy groceries – an almost 20% reduction.

In my research on the history of food stamps, I’ve found that the program was meant to be widely available to most low-income people. The SNAP changes break that tradition in two ways.

The Congressional Budget Office estimates that about 3 million people are likely to be dropped from the program and lose their benefits. This decline will occur in part because more people will face time limits if they don’t meet work requirements. Even those who meet the requirements may lose benefits because of difficulty submitting the necessary documents.

And because states will soon have to take on more of the costs of the program, which totaled over $100 billion in 2024, they may eventually further restrict who gets help due to their own budgetary constraints.

Summing up SNAP’s origins

Inspired by the plight of unemployed coal miners whom John F. Kennedy met in Appalachia when he campaigned for the presidency in 1960, the early food stamps program was not limited to single parents with children, older people and people with disabilities, like many other safety net programs were at the time. It was supposed to help low-income people afford more and better food, regardless of their circumstances.

In response to national attention in the late 1960s to widespread hunger and malnutrition in other areas of the country, such as among tenant farmers in the rural South, a limited food stamps program was expanded. It reached every part of the country by 1974.

From the start, the states administered the program and covered some of its administrative costs and the federal government paid for the benefits in full. This arrangement encouraged states to enroll everyone who needed help without fearing the budgetary consequences.

Who could qualify and how much help they could get were set by uniform national standards, so that even the residents of the poorest states would be able to afford a budget-conscious but nutritionally adequate diet.

The federal government’s responsibility for the cost of benefits also allowed spending to automatically grow during economic downturns, when more people need assistance. These federal dollars helped families, retailers and local economies weather tough times.

The changes to the SNAP program included in the legislative package that Congress approved by narrow margins and Trump signed into law, however, will make it harder for the program to serve its original goals.

Restricting benefits

Since the early 1970s, most so-called able-bodied adults who were not caring for a child or an adult with disabilities had to meet a work requirement to get food stamps. Welfare reform legislation in 1996 made that requirement stricter for such adults between the ages of 18 and 50 by imposing a three-month time limit if they didn’t log 20 hours or more of employment or another approved activity, such as verified volunteering.

Budget legislation passed in 2023 expanded this rule to adults up to age 54. The 2025 law will further expand the time limit to adults up to age 64 and parents of children age 14 or over.

States can currently get permission from the federal government to waive work requirements in areas with insufficient jobs or unemployment above the national average. This flexibility to waive work requirements will now be significantly limited and available only where at least 1 in 10 workers are unemployed.

Concerned senators secured an exemption from the work requirements for most Native Americans and Native Alaskans, who are more likely to live in areas with limited job opportunities.

A 2023 budget deal exempted veterans, the homeless and young adults exiting the foster care system from work requirements because they can experience special challenges getting jobs. The 2025 law does not exempt them.

The new changes to SNAP policies will also deny benefits to many immigrants with authorization to be in the U.S., such as people granted political asylum or official refugee status. Immigrants without authorization to reside in the U.S. will continue to be ineligible for SNAP benefits.

Tracking ‘error rates’

Critics of food stamps have long argued that states lack incentives to carefully administer the program because the federal government is on the hook for the cost of benefits.

In the 1970s, as the number of Americans on the food stamp rolls soared, the U.S. Department of Agriculture, which oversees the program, developed a system for assessing if states were accurately determining whether applicants were eligible for benefits and how much they could get.

A state’s “payment error rate” estimates the share of benefits paid out that were more or less than an applicant was actually eligible for. The error rate was not then and is not today a measure of fraud. Typically, it just indicates the share of families who get a higher – or lower – amount of benefits than they are eligible for because of mistakes or confusion on the part of the applicant or the case worker who handles the application.

Congress tried to penalize states with error rates over 5% in the 1980s but ultimately suspended the effort under state pressure. After years of political wrangling, the USDA started to consistently enforce financial penalties on states with high error rates in the mid-1990s.

States responded by increasing their red tape. For example, they asked applicants to submit more documentation and made them go through more bureaucratic hoops, like having more frequent in-person interviews, to get – and continue receiving – SNAP benefits.

These demands hit low-wage workers hardest because their applications were more prone to mistakes. Low-income workers often don’t have consistent work hours and their pay can vary from week to week and month to month. The number of families getting benefits fell steeply.

The USDA tried to reverse this decline by offering states options to simplify the process for applying for and continuing to get SNAP benefits over the course of the presidencies of Bill Clinton, George W. Bush and Barack Obama. Enrollment grew steadily.

Penalizing high rates

Since 2008, states with error rates over 6% have had to develop a detailed plan to lower them.

Despite this requirement, the national average error rate jumped from 7.4% before the pandemic to a record high of 11.7% in 2023. Rates rose as states struggled with a surge of people applying for benefits, a shortage of staff in state welfare agencies and procedural changes.

Republican leaders in Congress have responded to that increase by calling for more accountability.

Making states pay more

The big legislative package will increase states’ expenses in two ways.

It will reduce the federal government’s share of the cost of administering the program from half to 25%, beginning in the 2027 fiscal year.

And some states will have to pay a share of benefit costs for the first time in the program’s history, depending on their payment error rates. Beginning in the 2028 fiscal year, states with an error rate between 6% and 8% would be responsible for 5% of the cost of benefits. Those with an error rate between 8% and 10% would have to pay 10%, and states with an error rate over 10% would have to pay 15%. The federal government would continue to pay all benefits in states with error rates below 6%.
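
To make the tiered schedule concrete, here is a minimal sketch in Python that simply encodes the brackets described above. How the law treats exact boundary values, such as an error rate of precisely 8%, is an assumption made only for illustration.

```python
def state_benefit_share(error_rate_pct: float) -> float:
    """Return the share of SNAP benefit costs a state would pay,
    based on its payment error rate, under the tiers described above.

    Note: the treatment of exact boundary values (6%, 8%, 10%) is an
    assumption for illustration only.
    """
    if error_rate_pct < 6:
        return 0.00   # federal government keeps paying all benefits
    elif error_rate_pct < 8:
        return 0.05   # 6-8% error rate: state pays 5% of benefit costs
    elif error_rate_pct < 10:
        return 0.10   # 8-10% error rate: state pays 10% of benefit costs
    else:
        return 0.15   # over 10% error rate: state pays 15% of benefit costs


# Example: a state at the 2024 national average error rate of 10.93%
# would fall in the top tier under this sketch.
print(state_benefit_share(10.93))  # 0.15
```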

Republicans argue the changes will give states more “skin in the game” and ensure better administration of the program.

While the national payment error rate fell from 11.68% in the 2023 fiscal year to 10.93% a year later, 42 states still had rates in excess of 6% in 2024. Twenty states plus the District of Columbia had rates of 10% or higher.

At nearly 25%, Alaska has the highest payment error rate in the country. But Alaska won’t be in trouble right away. To ease passage in the Senate, where the vote of Sen. Lisa Murkowski, an Alaska Republican, was in doubt, a provision was added to the bill allowing several states with the highest error rates to avoid cost sharing for up to two years after it begins.

Democrats argue this may encourage states to actually increase their error rates in the short term.

The effect of the new law on the amount of help an eligible household gets is expected to be limited.

About 600,000 individuals and families will lose an average of $100 a month in benefits because of a change in the way utility costs are treated. The law also prevents future administrations from increasing benefits beyond the cost of living, as the Biden Administration did.

States cannot cut benefits below the national standards set in federal law.

But the shift of costs to financially strapped states will force them to make tough choices. They will either have to cut back spending on other programs, increase taxes, discourage people from getting SNAP benefits or drop the program altogether.

The changes will, in the end, make it even harder for Americans who can’t afford the bare necessities to get enough nutritious food to feed their families.

The Conversation

Tracy Roof does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. ‘Big’ legislative package shifts more of SNAP’s costs to states, saving federal dollars but causing fewer Americans to get help paying for food – https://theconversation.com/big-legislative-package-shifts-more-of-snaps-costs-to-states-saving-federal-dollars-but-causing-fewer-americans-to-get-help-paying-for-food-260166

Lyssaviruses: rare but deadly bat viruses

Source: The Conversation – in French – By Vinod Balasubramaniam, Associate Professor (Molecular Virology), Monash University

The lyssavirus has been isolated from several species of Australian bats, including flying foxes such as the one pictured here. Ken Griffiths/Getty Images

In early July 2025, an Australian man in his 50s living in New South Wales died as a result of a lyssavirus infection, several months after being bitten by a bat.

This death brings to four the number of known human infections since the responsible virus, Australian bat lyssavirus (Lyssavirus australis, or ABLV), was discovered in Queensland in 1996. At the time, it was isolated from a fruit bat, the black flying fox (Pteropus alecto). This new case is also the first confirmed human case in New South Wales.

What should you know about this group of viruses, some of which also circulate in Europe? And what should you do if you come into contact with a bat? Here are the answers.

A close relative of the rabies virus

Australian bat lyssavirus, which mainly infects Australian bats, belongs to the family Rhabdoviridae, which also counts the rabies virus among its members.

In Australia, surveillance data suggest that less than 1 per cent of healthy bats carry the virus. Prevalence, however, reaches 5 to 10 per cent among sick or injured animals. Infection is often asymptomatic, although some bats show neurological symptoms such as disorientation, aggression, muscle spasms and paralysis. Infected animals sometimes die of the disease.

Read more:
Fascinating bats and their tolerance of viruses that are deadly to humans

The virus has already been isolated from individuals of the continent’s four flying fox species (Pteropus alecto, P. poliocephalus, P. scapulatus and P. conspicillatus), as well as from one microbat species, the yellow-bellied sheath-tailed bat (Saccolaimus flaviventris). Serological data (the detection of antibodies against the virus in bat blood, which is evidence of infection) suggest that other microbat species may also be susceptible.

It is therefore wise to be cautious and to assume that any Australian bat species could potentially carry the lyssavirus.

Rare but potentially fatal infections

Unlike rabies, which causes around 59,000 human deaths each year, mainly in Africa and Asia, human infection with Australian bat lyssavirus remains extremely rare.

It should be noted that while this lyssavirus is unique to Australia, other lyssaviruses, such as the European bat lyssaviruses (EBLV types 1 and 2), also circulate. They too have caused human infections, although only very occasionally. (Bat lyssaviruses also circulate on other continents, including North and South America, Asia and Africa; while identified human cases are rare, the World Health Organization notes that their number may be underestimated because of limited surveillance and characterization of these viruses. Editor’s note.)

The virus is transmitted to humans through direct contact with the saliva of infected bats, via bites, scratches or broken skin. Exposure can also occur when infected saliva comes into contact with mucous membranes (eyes, nose, mouth).

Exposure to the droppings, urine or blood of infected bats does not, however, carry a risk of infection with this virus. Nor is simply being near places where bats roost a danger.

An incurable disease once symptoms appear

Following infection with the lyssavirus, the incubation period can range from a few weeks to more than two years. During this asymptomatic phase, the virus slowly travels along the nerves toward the brain.

If treatment is given during this incubation phase, it can prevent the disease from developing. If the intervention comes too late, after the first symptoms have appeared, the outcome is always fatal. Once the disease has set in, it is incurable, just as with rabies.

In humans, the symptoms of lyssavirus infection closely resemble those of rabies. The first signs of illness are flu-like symptoms (fever, headache, fatigue). The condition then progresses rapidly into severe neurological disease, with paralysis, delirium, convulsions and loss of consciousness. Death usually occurs one to two weeks after the first symptoms appear.

The four human cases recorded in Australia, three in Queensland (in 1996, 1998 and 2013) plus the recent case in New South Wales, have all ended in the death of the infected person.

The importance of rapid treatment

When infection with a lyssavirus is suspected, medical care begins quickly. It consists of post-exposure prophylaxis combining rabies immunoglobulins (antibodies) and the rabies vaccine.

This treatment is highly effective if started promptly, ideally within 48 hours and no later than seven days after exposure, in other words before the virus reaches the central nervous system.

However, as mentioned above, there is currently no therapy once symptoms have appeared.

Research on monoclonal antibodies in recent years appears to open up promising avenues, but the treatments that may come of it are not yet available.

How to protect yourself, and what to do if a bat bites you

Pre-exposure rabies vaccination (three injections spread over one month) is recommended for high-risk groups: veterinarians, animal carers, people involved in rehabilitating and releasing wildlife, and laboratory staff who handle lyssaviruses.

For the general public, information campaigns are essential to reduce risky interactions, particularly in areas frequented by bats. It is crucial to avoid any direct contact with bats. Only trained and vaccinated professionals, such as wildlife carers or veterinarians, should handle these animals.

In the event of a bite or scratch, it is imperative to act without delay: wash the wound thoroughly with soap and water for at least 15 minutes, apply an antiseptic (for example, a povidone-iodine solution such as Betadine) and seek urgent medical attention.

The tragic death in New South Wales reminds us that, even though bat lyssavirus infections remain exceedingly rare, they pose a serious threat. Public awareness should therefore be strengthened and at-risk individuals vaccinated, while surveillance of bat populations and research into new treatments continue.

The Conversation

Vinod Balasubramaniam does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Lyssaviruses: rare but deadly bat viruses – https://theconversation.com/les-lyssavirus-de-rares-mais-mortels-virus-de-chauve-souris-260559

Child labour numbers rise in homes where adults are jobless – South African study

Source: The Conversation – Africa – By Derek Yu, Professor, Economics, University of the Western Cape

Child labour is a big concern across the world. It is particularly acute in countries in the global south, where it is estimated that about 160 million children are engaged in child labour, about 87 million of them in sub-Saharan Africa.

A range of countries have sought to outlaw child labour because it denies children their childhood as well as physical and mental development.

In South Africa data on the work activities of children aged between 7 and 17 years are collected in the Survey of Activities of Young People, conducted by Statistics South Africa. Despite the survey having taken place four times (1999, 2010, 2015 and 2019), the dataset has been seriously under-used. There has hardly been any comprehensive research done on the state of South Africa’s child labour and child work activities.

In a recently published study we looked at child labour activities in the country. We compared the 2010, 2015 and 2019 Survey of Activities of Young People.

We first looked at personal and geographical characteristics of children, such as their gender, ethnic group and province of residence. We went on to look at their work activities, as well as the relationship (if any) between adults’ employment status and the probability of children from the same households having to work.

The reason we chose to look at the relationship between child labour and work activities of adults is that South Africa has an extremely high level of unemployment. At the end of 2024 the unemployment rate was 31.8%.

The Basic Conditions of Employment Act, which was passed in 1997, bans the employment of children until the last school day of the year in which they turn 15 years old. Nonetheless, as some adult household members struggle to find work, it is possible that child members of households are exploited to help the households survive financially.

Two striking and alarming findings stand out from the study.

First, the fewer adults were employed in a household, the more likely it was that children in the household were working. Second, the presence of child labour in the household had a discouraging effect on adult members’ job-seeking.

The first key finding implies that if adults were employed, children might not need to work. The second implies that jobless adult members most likely relied on the (illegal) income earned through child labour, which discouraged those adults from actively seeking work.

The number of children working in South Africa dropped from 778,000 in 2010 to 577,000 in 2019. This downward trend suggests that South African legislation prohibiting child labour has had some success over the years. But, we conclude, laws and regulations are not enough. In South Africa, enforcement, as well as public awareness and understanding of child labour legislation, must be improved to safeguard children.

Thus, a coordinated programme of action by the government is important to bring all stakeholders into the fight against child labour and unemployment of the working-age population.

About the survey

The Survey of Activities of Young People was first introduced in 1999 by Statistics South Africa, two years after the 1997 legislation that banned child labour. However, since the 1999 survey was not linked to the Labour Force Survey and its questions were asked very differently from those in the 2010, 2015 and 2019 waves, we excluded the 1999 wave from the analysis. Hence, we focus on the 2010, 2015 and 2019 results, notably because these three waves are linked to Labour Force Survey data collected in the same year.

This makes it possible to investigate the relationship between the employment status of child and adult household members.

The 2019 survey findings show that, if a household had no employed adult members, the probability of a child from that household being engaged in child labour was 6.5%.

If the household had one employed adult member, the probability dropped to 4.7%. If it had at least two employed adult members, the probability fell further, to 2.7%.

Using the same 2019 data, we found that if a household had no child engaged in labour, the probability of an adult member from that household seeking work in the labour market was 60%. The labour force participation rate of adults in households where at least one child was engaged in child labour was much lower, at 44%.

Looking at other child labour statistics, we found that the majority (90%) of working children were African; more than 60% were in the 7-14 age group, below the legal working age; and most lived in the rural areas of KwaZulu-Natal, Gauteng and Eastern Cape.

In addition, 98% of them were still attending school while working.

Lastly, most working children worked 1-5 hours per week in elementary occupations in the wholesale and retail industry. The top three reasons given for working were “to obtain pocket money”, “to assist family with money” and “duty to help family”.

The road ahead

Some children spent many hours on household chores (which, strictly speaking, are not classified as child labour). Parents, employers and the community must be educated about the dangers of long hours spent on domestic chores, as well as of child labour.

The government should consolidate its infrastructure development programmes, especially the delivery of electricity, water and sanitation, in areas where children spend time on domestic chores. This would reduce the time children spend on household chores and allow them more time for school activities. The surveys used for the study did not ask about the specific activities children were involved in; they asked only whether a child was involved in chores such as cleaning, cooking and looking after elderly household members.

It would also be worthwhile to include questions relating to child labour in the child questionnaire of the National Income Dynamics Study (the only national panel data survey in South Africa), so as to investigate more thoroughly whether child labour is a short-term or long-term phenomenon, and whether there is any relationship between poverty (and the receipt of social grants) and the incidence of child labour.

Lastly, it has been six years since the Survey of Activities of Young People was last conducted. It is time for Statistics South Africa to collect the latest data on the state of child labour in the country.

This article is based on a journal article which the writers co-authored with Clinton Herwel, a master’s student in Economics at the University of the Western Cape.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Child labour numbers rise in homes where adults are jobless – South African study – https://theconversation.com/child-labour-numbers-rise-in-homes-where-adults-are-jobless-south-african-study-259398

Coups in west Africa have five things in common: knowing what they are is key to defending democracy

Source: The Conversation – Africa (2) – By Salah Ben Hammou, Postdoctoral Research Associate, Rice University

August 2025 marks five years since Malian soldiers ousted President Ibrahim Boubacar Keïta in a coup d’état. While the event reshaped Mali’s domestic politics, it also marked the beginning of a broader wave of military takeovers that swept parts of Africa between 2020 and 2023.

Soldiers have toppled governments in Niger, Burkina Faso (twice), Sudan, Chad, Guinea and Gabon.

The return of military coups shocked many observers. Once thought to be relics of the cold war, an “extinct” form of regime change, coups appeared to be making a comeback.

No new coups have taken place since Gabon’s in 2023, but the ripple effects are far from over. Gabon’s coup leader, Gen. Brice Oligui Nguema, formally assumed the presidency in May 2025. In doing so he broke promises that the military would step aside from politics. In Mali, the ruling junta dissolved all political parties to tighten its grip on power.

Across the affected countries, military rulers remain entrenched. Sudan, for its part, has descended into a devastating civil war following its coup in 2021.

Analysts often cite weak institutions, rising insecurity, and popular frustration with civilian governments to explain coups. While these factors play a role, they don’t capture the patterns we have observed.

I have studied and written about military coups for nearly a decade, with a particular focus on this recent coup wave.

After a close analysis of the coup cascade, I conclude that the international community must move beyond the view of coups as isolated events.

Patterns suggest that the Sahelian coups are not isolated. Coup leaders are not only seizing power, they are learning from one another how to entrench authority, sidestep international pressure and craft narratives that legitimise their rule.

To help preserve democratic rule, the international community must confront five lessons revealed by the recent military takeovers.

Key lessons

Contagion: Just a month after Guinea’s military ousted President Alpha Condé, Sudan’s army disrupted its democratic transition. Three months later, Burkina Faso’s officers toppled President Roch Marc Christian Kaboré amid rising insecurity.

Each case had unique triggers, but the timing suggests more than coincidence.

Potential coup leaders watch closely, not just to see if a coup succeeds but what kinds of challenges arise as the event unfolds. When coups fail and plotters face harsh consequences, others are less likely to follow.

Whether coups spread depends on the perceived risks as much as on opportunity. But when coups succeed – especially if new leaders quickly take control and avoid immediate instability – they send a signal that can encourage others to act.

Civilian support matters: civilian support for coups is real and clearly visible.

Since the start of Africa’s recent coup wave, many commentators have highlighted the cheering crowds that often welcome soldiers, celebrating the fall of unpopular regimes. Civilian support is a common and often underestimated aspect of coup politics. It signals to potential coup plotters that military rule can win legitimacy and public backing.

This popular support also helps coup leaders strengthen their grip on power, shielding their regimes from both domestic opposition and international pressure. For example, following Niger’s 2023 coup, the putschists faced international condemnation and the threat of military intervention. In response, thousands of supporters gathered in the capital, Niamey, to rally around the coup leaders.

In Mali, protesters flooded the streets in 2020 to welcome the military’s ousting of President Ibrahim Boubacar Keïta. In Guinea, crowds rallied behind the junta after Alpha Condé was removed in 2021. And in Burkina Faso, both 2022 coups were met with widespread approval.

International responses: The international community’s response sends equally powerful signals. When those responses are weak, delayed or inconsistent – the absence of meaningful sanctions, token aid suspensions, or purely symbolic suspensions from regional bodies – they send the message that the illegal seizure of power carries few real consequences.

International responses to recent coups have been mixed. Some, like Niger’s, triggered strong initial reactions, including sanctions and threats of military intervention.

But in Chad, Mahamat Déby’s 2021 takeover was effectively legitimised by key international actors, which portrayed it as a necessary step for stability following the battlefield death of his father, President Idriss Déby, at the hands of rebel forces.

In Guinea and Gabon, regional suspensions were largely symbolic, with little pressure to restore civilian rule. In Mali and Burkina Faso, transitional timelines have been extended repeatedly without much pushback.

The inconsistency signals to coup leaders that seizing power may provoke outrage, but rarely lasting consequences.

Coup leaders learn from one another: Contagion isn’t limited to the moment of takeover. Coup leaders also draw lessons from how others entrench themselves afterwards. They watch to see which tactics succeed in defusing opposition and extending their grip on power.

Entrenched military rule has become the norm across recent coup countries. On average, military rulers have remained in power for nearly 1,000 days since the start of the current wave. Between 2000 and the start of this wave, by contrast, coup leaders retained power for an average of only 22 days.

In Chad, Mahamat Déby secured his grip through a contested 2024 election. Gabon’s Nguema followed in 2025, winning nearly 90% of the vote after constitutional changes cleared the path. In both cases, elections were used to rebrand military regimes as democratic, even as the role of the armed forces remained unchanged.

Connecting the dots

Coup governments across Mali, Burkina Faso and Niger have shifted away from western alliances and towards Russia, deepening military and economic ties. All three exited the Economic Community of West African States and formed the Alliance of Sahel States, denouncing regional pressure.

Aligning with Russia offers these regimes external support and a veneer of sovereignty, while legitimising authoritarianism as independence.

The final lesson is clear: when coups are treated as isolated rather than interconnected, it’s likely that more will follow. Would-be plotters are watching how citizens react, how the world responds, and how other coup leaders consolidate power.

When the message they receive is that coups are tolerable, survivable and even rewarded, the deterrent effect weakens.

Poema Sumrow, a Baker Institute researcher, contributed to this article.

The Conversation

Salah Ben Hammou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Coups in west Africa have five things in common: knowing what they are is key to defending democracy – https://theconversation.com/coups-in-west-africa-have-five-things-in-common-knowing-what-they-are-is-key-to-defending-democracy-258890

Why is it compulsory to wear white at Wimbledon?

Source: The Conversation – (in Spanish) – By Roger Fagge, Associate Professor in the Department of History, University of Warwick

When Carlos Alcaraz beat Jannik Sinner in the men’s final at Roland Garros on June 8, 2025, in what is already considered a classic match, there was much speculation about the outfit choices of the two top seeds in world tennis.

Both wore Nike shirts. Alcaraz’s was collarless, with green and black horizontal stripes edged in blue, paired with black shorts. Sinner, for his part, wore a green polo-style shirt with a collar, blue shorts and a blue Nike cap. Sinner’s shirt looked very much like an Irish rugby jersey, which some considered a little incongruous on a tennis court.

In the women’s final on June 7, Coco Gauff brilliantly defeated Aryna Sabalenka, the top seed. Gauff wore a custom New Balance outfit with a dark blue marbled effect, topped off with an elegant grey leather jacket that she wore as she walked on and off the court. Sabalenka wore a colourful Nike tennis dress.

Technology, design and fashion influence players’ choice of tennis kit, as does its commercial potential: Sabalenka’s exact dress can be bought on Nike’s website. But things are different at Wimbledon, where “almost entirely white” kit remains compulsory.

Founded in 1877, making it the oldest and most prestigious tennis tournament in the world, Wimbledon restricts any colour to a strip no wider than 10mm.

White clothing took hold at Wimbledon in the 19th century, partly because it concealed unsightly signs of sweat. White was also considered cooler in the summer heat. But over time it became tied to a sense of history and tradition, and to the distinctiveness of the Wimbledon tournament.

There have, however, been occasional revisions.

Many women in the tennis community, including Billie Jean King, Judy Murray and Heather Watson, have argued that white shorts are a problem for players when they are menstruating. As a result, the All England Club revised the rules in 2023 to allow dark undershorts, “provided they are no longer than the shorts or skirt”.

There had been earlier controversies over dress at Wimbledon, sometimes about decency, as in 1949, when Gertrude Moran challenged the dress codes with “visible underwear”.

More recently, in 2017, Venus Williams was asked to change during a rain break in a match because the straps of a fuchsia bra were visible.

The following year, Roger Federer, who was chasing his eighth Wimbledon title, had to change his orange-soled Nike trainers. They all complied.

The history of all-white kit

All-white clothing is also associated with cricket, which shares elements of class and tradition with tennis. Playing under the summer sun made white clothing a sensible choice for cricket. However, the sport’s authorities allowed players to wear coloured caps representing their county or country, and cricket sweaters for less sunny days usually carried the team’s colours around the V-neck.

A man in cricket whites
White clothing is also associated with cricket.
Shutterstock

White shirts and kits have also played an important role in other sports, such as football. If white shirts suggest respectability and style, it is somewhat ironic that the powerful Leeds side of the mid-1960s and 1970s, managed by Don Revie, earned the nickname “dirty Leeds” for its aggressive style on the pitch. History and tradition matter as much in football as in any other sport, and fans of a certain age at other clubs still refer to the Yorkshire club by that nickname.

But enough about football: we are in the middle of the Wimbledon season. Let’s enjoy the tennis. Fortunately for the more traditional fans, there will be no green or blue kits on Centre Court.

The Conversation

Roger Fagge does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why is it compulsory to wear white at Wimbledon? – https://theconversation.com/por-que-es-obligatorio-vestir-de-blanco-en-wimbledon-260560