School violence doesn’t happen in isolation: what research from southern Africa is telling us

Source: The Conversation – Africa – By Gift Khumalo, Lecturer, Durban University of Technology

School violence is a global public health phenomenon. It occurs when learners and teachers are victims of physical and psychological abuse, cyber threats and bullying, fights, gangsterism, and the use of weapons at school.

The consequences of school violence are dire. There are implications for learners, teachers, the school and the community. Violence undermines the learners’ and teachers’ safety. It causes stress, academic decline and behavioural problems. It can contribute to a broader cycle of violence in communities.

School violence is a problem across southern Africa. This includes South Africa, Zimbabwe, Mozambique and Namibia.

In 2008 the regional body, the Southern African Development Community, adopted the Care and Support for Teaching and Learning framework. Its aim was to prevent violence, create safer schools and foster a positive school ethos.

But there has been limited research unpacking factors that contribute to school violence. We recently undertook a review project to identify and understand those contextual factors.

Our research stems from our shared scholarly interest in issues of violence in educational settings. Our professional backgrounds include school social work, health promotion, social services with children and adolescents, and teaching general education modules at a South African university.

The review of studies of violence suggests that a range of factors contribute to school violence. These include: exposure to domestic violence, socio-economic status, poor family communication, lack of appropriate disciplinary processes at school, intolerance of individual and social differences, and exposure to alcohol and substance use in the community.

What’s needed are clear school policies, teacher training and deployment of school social workers.

The scope

Our project reviewed 24 studies of violence in Southern African Development Community schools. Most of the studies were done in South Africa but some were in Eswatini, Zambia, Malawi and Angola.

We focused on this region for the following reasons.

  • The region comprises low- and low-middle-income countries. Learners experience various socio-economic challenges and structural disparities within their communities and schools.

  • Previous research suggests that communities in the region face crime and violence, gangsterism, high unemployment rates and poverty.




  • The Care and Support for Teaching and Learning framework, which is intended to support learners’ enrolment, retention, performance and progression, has not prevented school violence. The limited evidence suggests a need to better understand the specific contextual factors that contribute to this violence.

Our findings from the papers we reviewed indicate that factors contributing to school violence are present in learners’ home environments, communities and schools.

Family environment

Disrespect towards teachers and physical fights are linked to witnessing domestic violence. The family unit’s socio-economic standing is significant. Compared to better-off learners, those from less privileged environments are more likely to violate school rules, steal other learners’ belongings, and bully others for their lunch meals. Learners from food-insecure families enter into transactional relationships with teachers for financial support and “free” groceries.




Research shows that the inability of parents to support and talk to their children results in children succumbing to peer pressure and becoming involved in gangs and fights. Parents sometimes incite school violence by defending their children’s misconduct and blaming teachers for their children’s behaviour.

We also observed that in schools with children who have disabilities, some parents arrange intimate relationships for their children with other learners, to shield them from exploitation by community members. However, this exposes them to unintended sexual violence in those relationships, as sexual boundaries and consent are not adequately explained to the young couples.




Community environment

The studies we reviewed indicate that the surrounding community has a role in school violence. Learners’ exposure to alcohol and substance use can lead to violence. Specifically, community members sell substances to learners, who then return to school intoxicated, disrupting teaching and learning. In some instances, fights among boys that start outside school continue on the school premises.




School environment

Different types of bullying occur among learners. Research shows that most of the perpetrators are boys, ridiculing girls for their achievements and using violence to “prove masculinity” and gain popularity. Boys are ridiculed for not having romantic partners, which often leads to aggression. Peer pressure also drives boys to verbally abuse girls who refuse their advances, and to resort to behaviours such as taking pictures of the girls’ underwear in class or through toilet windows. Gangs are common and contribute to violence, providing an arena for violent interactions among boys.

Another factor fuelling school violence is a lack of understanding and intolerance of diverse demographic and individual identities – such as nationality, gender and sexual orientation, physical appearance, culture and religion. Migrant learners are subjected to xenophobic attitudes, body shaming and insults. Learners are targeted with homophobic statements because of their gender and sexual identities. Dark-skinned and slender learners are often teased under the guise of humour.




Way forward

The purpose of this review project was to map the literature on factors contributing to school violence in the Southern African Development Community region. The findings could be useful in other, similar regions too.

We suggest education ministries and schools could consider:

  • implementing clear school policies on how to report and respond to incidents of school violence

  • training teachers and school administrators on national and school policies for addressing school violence and promoting professionalism

  • documenting incidents of school violence and developing strategies to create safe environments

  • fostering collaboration among schools, parents and psychosocial support personnel, such as school social workers, to reduce violence in schools.

We argue that different intervention programmes and services need to be adopted to address the root causes of violence. Deploying more school social workers would be part of this effort.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. School violence doesn’t happen in isolation: what research from southern Africa is telling us – https://theconversation.com/school-violence-doesnt-happen-in-isolation-what-research-from-southern-africa-is-telling-us-269288

Côte d’Ivoire’s democratic backslide: elections leave even less space for freedom

Source: The Conversation – Africa (2) – By Jesper Bjarnesen, Senior researcher, The Nordic Africa Institute

Ivorians went to the polls on 25 October 2025 to choose between incumbent president Alassane Ouattara – seeking a fourth five-year term – and one of four candidates who didn’t have the backing of the largest opposition parties.

There was not much of a choice, as the three main opposition candidates were banned from standing. Ouattara claimed another first-round landslide victory with 89.77% of votes cast.

As a researcher, I have followed political developments in Côte d’Ivoire over the past 15 years, and I’m currently involved in a project on boycott movements which uses Côte d’Ivoire as a country case.

This informs my view of the 2025 presidential elections and the democratic outlook for Côte d’Ivoire.

While the country tends to be seen as a regional front runner in terms of its economic performance, the 2025 elections continue a worrying trend of democratic backsliding and political polarisation.

The 27 December legislative elections will be a test of the country’s democratic resilience.

The build-up

In the months leading up to the presidential elections, major opposition candidates were excluded and political apathy took hold in a shrinking space for democratic expression.

Ouattara announced his candidacy in August, despite widespread objections at home and abroad to his third-term candidacy in 2020.

As in 2020, critics insisted that Ouattara was overstepping his constitutional mandate of one presidential term, renewable once. He has argued that a 2016 revision gave him the right to run twice.

As election day approached, Côte d’Ivoire’s political landscape was marked by polarisation, repression and uncertainty.

Tensions deepened in early September when the Constitutional Council disqualified five prominent opposition candidates from the race. Former president Laurent Gbagbo, Charles Blé Goudé and Guillaume Soro were excluded due to prior criminal convictions. The two main challengers, Tidjane Thiam and Pascal Affi N’Guessan, were barred on procedural grounds.

Their exclusion more or less handed victory to Ouattara, and his campaign comfortably turned towards ensuring an absolute majority.

In early October, the National Security Council banned public gatherings, except those organised by official candidates, on the grounds of “maintaining public order”. It also imposed additional restrictions on civic mobilisation. It used the letter of the law to serve Ouattara’s interests in limiting protests against his candidacy.

Going against the ban, opposition parties called for daily protests, but the gatherings were generally small and promptly broken up by security forces.

Three days before the elections, Gbagbo denounced what he called a “civil coup” and expressed his support for those “protesting against this electoral robbery”.

On 11 October, protesters in Abidjan took to the streets. These acts of defiance led to some 700 arrests and 80 prison sentences for disturbing public order. Eleven people were killed in clashes between security forces and protesters.

Along with other domestic and international observers, Amnesty International denounced the repression of demonstrations. At the same time, the government deployed 40,000 security personnel across the country.

France, the regional grouping Ecowas and the EU have remained largely silent. They have generally prioritised stability and strategic relations with the Ivorian government over democratic accountability. This passivity risks further eroding the credibility of these international actors while reinforcing narratives of western double standards in the region.

While the excluded opposition parties tried, and largely failed, to mobilise their supporters in the streets, the remaining candidates (all representing small and newly formed political parties and coalitions) chose a different strategy.

Capable Generations Movement leader Simone Ehivet Gbagbo (the former first lady, who was divorced from ex-president Gbagbo in 2023) deplored her ex-husband’s exclusion from the race. But in the final weeks of campaigning she insisted that it was too late to call people to the streets. She called for people to vote instead.

Election day

Election day was mostly peaceful across the country, but violent clashes did break out in several towns. The president of the Independent Electoral Commission, Ibrahim Kuibiert Coulibaly, described these incidents as “marginal” and “quickly contained”.

While the election result was never in doubt, the participation rate was less predictable. The confirmed participation rate of 50.1% shows that many voters stayed at home; many out of apathy but also out of concerns over the risk of violent clashes around polling stations.

Provisional results announced on 27 October gave Ouattara 89.77% of the votes. Along with other opposition members, Thiam lamented a rigged and divisive electoral process with inadequate participation, and urged nonviolent resistance. He called for the government to engage in dialogue towards reconciliation.

The ruling party and media supportive of Ouattara described the result as a “landslide victory”, particularly celebrating Ouattara’s victories in historical opposition strongholds.

Three days after election day, several leaders of the main opposition parties were summoned by police on the grounds that military-grade weapons had been found in the homes of individuals linked to the 11 October march.

So, while the elections may be said to have unfolded without major incidents, the lack of a genuine contest and the measures taken to restrict the opposition cast a shadow over the poll, and over Ouattara’s legacy.

What’s next, and what are the prospects for democracy?

In the short to medium term, the major opposition parties could salvage some of their influence in the parliamentary elections on 27 December. Or they may reignite protests.

In the long term, Ouattara would have to move towards outright authoritarianism to justify a fifth candidacy in 2030. It seems more likely that he will finally hand over to a successor from his inner circle.

Even if that happens, serious questions remain regarding the electoral framework. The opposition has long claimed that the independent electoral commission is biased in favour of the incumbent.

The Ouattara presidency is tainted by its record of one-sided electoral competitions, political violence and insecurity, and a shrinking space for public expression.

Given Côte d’Ivoire’s strategic importance to the global north, as a rare ally in the subregion, international actors won’t have much to say about its democratic performance.

Any prospects for reconciliation, political reform and a peaceful transition in 2030 will mainly be in the hands of the ruling party. It will have to encourage dialogue and political inclusion at municipal, provincial and regional levels.

The 27 December legislative elections will offer a better chance to understand the actual distribution of political leverage than the flawed presidential elections.

Amelie Stelter of the Department of Peace and Conflict Research, Uppsala University, Sweden, contributed to this article.

The Conversation

Jesper Bjarnesen receives funding from the Swedish Research Council (VR) through grant number VR2024-00989.

ref. Côte d’Ivoire’s democratic backslide: elections leave even less space for freedom – https://theconversation.com/cote-divoires-democratic-backslide-elections-leave-even-less-space-for-freedom-269469

Reconciliation without accountability is just talk — especially when it comes to Indigenous health

Source: The Conversation – Canada – By Jamaica Cass, Director, Queen’s-Weeneebayko Health Education Partnership, Queen’s University, Ontario

Canada’s latest auditor general’s report reveals an uncomfortable truth: billions of dollars and countless commitments later, the federal government still cannot demonstrate meaningful improvement in health services for First Nations.

As a family physician working in my First Nation, Tyendinaga Mohawk Territory in southern Ontario, I see the evidence of this failure not in spreadsheets but in people — patients navigating a health system that remains structurally unequal.

Nearly 10 years after the Truth and Reconciliation Commission’s (TRC) Calls to Action, it is clear that reconciliation without accountability delivers only rhetoric, not care.

The report states:

“Increasing First Nations’ capacity to deliver programs and services within their communities is critical to improving outcomes for First Nations people and supporting reconciliation.”

Yet the same report concludes that the department has taken a “passive and siloed approach” to supporting First Nations. It found unsatisfactory progress on five of 11 recommendations first issued in 2015 regarding access to health services for remote communities.

Encountering racism

A decade later, systemic barriers remain — geography may vary, but inequity is consistent.

Even in communities like mine, which sit within driving distance of tertiary care, accessing culturally safe services is far from guaranteed.

Patients still encounter racism in hospitals and clinics. Providers still rotate through Indigenous communities rather than build lasting relationships. And families still find themselves falling through the cracks between federal and provincial systems that debate who pays instead of who helps.

The auditor general’s report acknowledges some progress — more nurse practitioners and paramedics working in First Nations communities — but the average monthly vacancy rate remains 21 per cent. Constant turnover and short-term contracts erode trust, continuity and quality of care.

The auditor general also found no satisfactory progress on any previous recommendations to ensure that First Nations communities have ongoing access to safe drinking water. Clean water is the most basic determinant of health, yet its absence continues to expose the limits of Canada’s political will.

A decade later, inequity remains

The TRC Calls to Action related to health — numbers 18 through 24 — called for eliminating inequities, recognizing Indigenous healing practices, increasing Indigenous professionals in health care and ensuring Indigenous leadership in governance.

But the Yellowhead Institute’s September 2025 report, Braiding Accountability, shows that Canada remains mired in performative progress. Institutions have reached the “strengthen” phase — hiring Indigenous staff or creating advisory councils — but rarely the “change” phase, where Indigenous Nations co-develop priorities, indicators and accountability measures.

Under Call 19, the report notes, the goal of measurable progress toward health equity is undermined by the “absence of Indigenous data sovereignty.” Instead, “institutions report on activities, not results, using settler-defined metrics that obscure ongoing inequities.”

As a medical educator, I see this mirrored in our training systems. Under Call 23, governments were urged to increase Indigenous representation in health professions.

Yet, Braiding Accountability points to ongoing gaps in representation and a lack of meaningful data on whether Indigenous professionals are actually being retained or advancing into leadership. It notes that recruitment efforts often amount to a revolving door: institutions bring Indigenous staff into environments that remain unwelcoming, and then attribute their departures to supposed cultural issues rather than addressing the systemic problems that drove them away.

And perhaps the sharpest critique of all: failing to shift authority and decision-making to Indigenous communities simply continues the very colonial dynamics that made the push for Indigenous health professionals necessary in the first place.

At Queen’s, through the Queen’s-Weeneebayko Health Education Program — which I lead — we are trying to do things differently by building pathways for Indigenous learners to study in their own regions, guided by Indigenous leadership and values.

The goal of this program is to transform who holds power in the health system.

A moment of possibility

There is, however, a reason for cautious optimism. With recent cabinet appointments, Canada now has its first Indigenous minister of Indigenous Services Canada (ISC). Mandy Gull-Masty’s appointment represents the first time an Indigenous woman leads the very department responsible for addressing these systemic failures.

Her lived experience as an Indigenous woman positions her to see what others have not: that reconciliation cannot be achieved through bureaucratic procedure, but through the transfer of decision-making power to Indigenous governments and communities.

Real progress will mean dismantling silos, resourcing First Nations to design and deliver their own health systems and holding all levels of government accountable to measurable outcomes.

It will mean embedding Indigenous data sovereignty and governance into every facet of health planning so Indigenous Peoples can finally define what success looks like on their own terms.

The human cost — and the hope

Every audit finding has a face. For me, it’s the patient who avoids seeking hospital care after a racist encounter, the Elder who still boils her water each morning and the young Indigenous medical student who tells me she wonders if she truly belongs.

These stories are a reminder that inequity does not end where the roads begin. Reconciliation will never be achieved through rhetoric or reports alone. It demands courage — the courage to transfer power, to embrace accountability and to care enough to change.

The appointment of an Indigenous minister offers a moment of possibility. If Gull-Masty can insist that reconciliation be measured in lives improved, systems restructured and trust rebuilt, then perhaps Canada will see real transformation.

The Conversation

Jamaica Cass works for Queen’s University. She receives funding from the National Circle on Indigenous Medical Education, the CPFC and the CMA. She is a board member of the Indigenous Physicians’ Association of Canada and the Medical Council of Canada.

ref. Reconciliation without accountability is just talk — especially when it comes to Indigenous health – https://theconversation.com/reconciliation-without-accountability-is-just-talk-especially-when-it-comes-to-indigenous-health-268140

Growing pains: An Ontario city’s urban agriculture efforts show good policy requires real capacity

Source: The Conversation – Canada – By Richard Bloomfield, Assistant Professor in Management and Organizational Studies at Huron University College, Western University

Staff members sharing their harvest at Urban Roots, an urban farm in London, Ontario. Urban agriculture can improve access to fresh food, especially for low-income communities, immigrants and seniors. (Urban Roots London)

Canadians are paying more for food than ever. Canada’s Food Price Report 2025 estimates that a family of four will spend up to $801 more on food this year, with overall prices expected to rise three to five per cent.

In response, more people are growing their own food. A 2022 national survey found that just over half of respondents were growing fruits or vegetables at home, and nearly one in five started during the first year of the COVID-19 pandemic.

Municipal governments have taken note, developing food and urban agriculture strategies that promise more green space, better access to fresh food, stronger communities and sometimes climate benefits. But do they actually change conditions on the ground?

That question sits at the centre of our new study published in the Journal of Agriculture, Food Systems, and Community Development.

London, Ont., adopted Canada’s first stand-alone Urban Agriculture Strategy in 2017. It was a hopeful signal that food and urban agriculture finally had a place on the municipal agenda. Yet, almost eight years later, many of the strategy’s goals remain unrealized.

Based on interviews and a workshop with 56 urban growers, community organizations and city staff in London, we traced how a promising strategy can stall without clear leadership, resources and follow-through.




Why urban agriculture matters

Urban agriculture encompasses everything from backyard and balcony gardens to community gardens, small commercial operations, rooftop farms and community projects that process and distribute food.

Research links these activities to better mental health, stronger social connections and improved access to fresh food, especially for low-income communities, immigrants and seniors.

In London, demand for local food and garden space surged during the pandemic. The London Food Bank reported a 92 per cent increase in demand for food donations from 2021 to 2023. Community gardens across the city have long waiting lists. There is no shortage of interest or need for local food; the question is whether city policies support it.

What the strategy changed — and what it didn’t

We found that the city’s urban agriculture strategy helped advance urban agriculture in meaningful ways. Research participants told us it helped “put food on the agenda” at city hall, supporting updates to zoning and bylaws that make it easier to grow food in the city.

But when we asked urban growers and community organizations how much the strategy shaped their day-to-day work, the picture became more complicated. Roughly one-third of the people we spoke with had never heard of the strategy at all, despite actively participating in urban agriculture.

Others knew it existed but were unsure which actions had actually been implemented. Several described it as a “good starting point” that had not been backed by the staffing or funding needed for full implementation.

The strategy came with no dedicated position or budget. Responsibility was scattered across city departments, with no one tracking progress. Supportive staff helped where they could, but limited capacity meant they relied on the community to drive change.

Common challenges mentioned by urban growers and community organizations were unclear zoning and permitting processes, a lack of available land for long-term gardening and minimal financial support, leading to over-reliance on volunteers. The strategy helped normalize urban agriculture in London and opened some doors, but didn’t transform the system.

One of the strongest themes in our research was the strain on community capacity. Like many cities, London’s urban agriculture is powered by volunteers, small non-profit organizations and social enterprises. These groups are deeply committed but face rising demand, complex social needs and unstable funding. Asking them to carry a municipal strategy without matching support is unrealistic.

This echoes findings from other cities. Reviews of urban agriculture policies in Canada and the United States show that local enthusiasm often runs ahead of institutional support.

Strategies tend to celebrate urban agriculture’s potential but pay less attention to equitable land access, labour conditions and the economic realities of growing food in cities.

How cities can help urban agriculture

If other cities want to avoid London’s growing pains, our research points to several concrete steps they can take:

Assign clear responsibility. Task a specific department, name a lead staff person and allocate ongoing funding. Without this, actions are likely to be delayed, forgotten or handled piecemeal.

Simplify the rules and centralize information. Create accessible one-stop web pages and guidance documents that spell out what’s allowed, what permits are needed, how to access land and who to contact.

Secure space for growing. Map under-utilized land, integrate food production into parks and use long-term leases or land trusts to provide more security for community-led projects.

Treat community partners as co-planners. Develop strategies alongside practitioners, including those from under-represented and marginalized communities. Bring them into the process early and support their full participation, rather than seeking their feedback after decisions are set.

Urban agriculture won’t fix food insecurity — the biggest determinants remain income, housing, social supports and broader food-system policy. But our findings from London indicate that it can still deliver public value.

By committing to implementation and treating food growing as a key piece of urban infrastructure, municipalities can build healthier, better connected and more sustainable cities.

The Conversation

The research in this article was funded by the Social Sciences and Humanities Research Council of Canada.

Richard Bloomfield served as a board member of Urban Roots before 2023. Urban Roots is a participating organization in this study and did not benefit in any way from this research.

Kassie Miedema and Rebecca Ellis do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Growing pains: An Ontario city’s urban agriculture efforts show good policy requires real capacity – https://theconversation.com/growing-pains-an-ontario-citys-urban-agriculture-efforts-show-good-policy-requires-real-capacity-269804

An art historian looks at the origins of the Indigenous arts collection at the Vatican Museums

Source: The Conversation – Canada – By Gloria Bell, Associate Professor of Art History, McGill University

Image showing ‘Hall of North America’ at the 1925 Vatican Missionary Exhibition in Rome.

Pope Leo XIV met with the Canadian Conference of Catholic Bishops (CCCB) on Nov. 15 to, in the words of the Vatican, “gift” the return of 62 Indigenous “artifacts” held in the Anima Mundi collection of the Vatican Museums.

The papal narrative that these belongings are “gifts” needs correction. The Vatican says the artifacts are “part of the patrimony received on the occasion of the Vatican Missionary Exhibition (VME) of 1925.” However, as I document in my book Eternal Sovereigns: Indigenous Artists, Activists, and Travelers Reframing Rome, the majority of the Indigenous belongings in the Vatican Museums were stolen from Indigenous communities during the 1920s and displayed at this exhibition.

The study tells an important story, examining the history of the 1925 Vatican exhibit alongside the destruction of Indigenous cultures, communities and lives through the Indian Act and residential schools in Canada, and the genocidal activities missionaries carried out in those schools in partnership with the government. The book includes details of my encounters with artists, curators and archives.

As a scholar of art history and communication studies concerned with a broader anti-colonial approach to understanding Indigenous heritage, I use the terms “belongings,” “ancestral art” and “ancestors” instead of speaking of Indigenous artifacts at the Vatican.

My research aims to unpack the sovereignty of Indigenous art and ancestors held at the Vatican Museums. Currently I am tracing the life of Anthony Martin Fernando, an Aboriginal protester with Dharug ancestry. As I document in the book, Fernando handed leaflets to visitors outside the VME in 1925 about the ongoing genocide of First Peoples and the need to recognize Indigenous sovereignty.




Indigenous communities, government officials, scholars and artists have advocated for the return of Indigenous ancestors from the Vatican Museums as part of wider calls for truth, reconciliation and justice.

A joint statement from the Vatican and CCCB said Pope Leo “desires that this gift represent a concrete sign of dialogue, respect and fraternity,” and the CCCB says it will “transfer these artifacts” to Indigenous organizations that will “ensure that the artifacts are reunited with their communities of origin.”

Culture of conquest

Indigenous belongings from nations including Cree, Lakota, Anishinaabe, Nipissing, Kanien’kehá:ka, Wolastoqiyik and Kwakwaka’wakw have remained in the Anima Mundi collection without Indigenous care for 100 years.

Now some of them will be returned home.

As I discuss in my book, thousands of Indigenous ancestors were stolen from Indigenous communities by the Catholic Church. Beyond what the Vatican has now committed to repatriating, many more belongings also need to be returned home and brought back into Indigenous care and hands.

I uncovered documents about the 1925 Vatican Missionary Exhibition through a lengthy process of research in archives across Rome and Turtle Island (North America).

The 1925 exhibit was a kind of ground zero for residential schools in Canada, as it celebrated missionary labour following amendments to the Indian Act in 1920 that made attending residential schools compulsory for children between the ages of seven and 15.

This scholarship involved tracing the provenance of many Indigenous belongings and linking them with different missionary orders. It also probes religious imperialism that continues to this day in the form of the current museum display of Indigenous belongings.

The Vatican Museums now has an online database that was not available even five years ago. It can be used to research Indigenous belongings held in the collections, a step towards more transparency.




Read more:
The Vatican just renounced a 500-year-old doctrine that justified colonial land theft … Now what? — Podcast


Origins of ‘Anima Mundi’

In December 1924, Pope Pius XI opened the holy doors to the Vatican Missionary Exposition. Standing in the Hall of North America, one of the main exhibition spaces, he welcomed tourists and pilgrims to view First Nations, Métis and Inuit material, cultural and sacred belongings sent in especially for the exhibition.

The exhibition celebrated missionaries as noble and heroic labourers, but positioned Indigenous Peoples as needing redemption. The global exhibition was unique in that it was the largest Catholic missionary exhibition of its time, competing with Protestant exhibitions in a race to convert souls.

Pope Pius XI decreed that everything that could be sent in from missions — including sacred and secular cultural belongings, Indigenous language materials and living human beings — should be included.

Before the exhibit, in 1923, The New York Times ran an Associated Press report that “one of the most attractive features would be ‘Natives brought specially to Rome, showing the customs and modes of living.’”

One million visitors attended the exhibition, and more than 100,000 Indigenous belongings were sent in for the expo. About 40,000 belongings remain in the Vatican Museums and became part of the permanent collection that’s now called Anima Mundi.

Not incidentally, the 1925 exhibition was designed like an adventure game where visitors could encounter wax models of Indigenous people, missionary children’s games, cultural belongings and materials they could touch. Missionary orders including the Jesuits, Verbites, Grey Nuns, Oblates and Capuchins all sent displays.

Indigenous ancestors put on display included beaded octopus bags, sacred masks, Cree moccasins, ledger art, photographs, potlatch regalia and an Inuvialuit sealskin kayak.

Parallel removal of children from communities

At the same time, missionaries acting under the Indian Act were forcibly removing Indigenous children from their communities across Canada and the United States.

Some missionaries sent materials made by these Indigenous children to the exhibit. Evidence of this includes photographs in the Indian Sentinel of Očhéthi Šakówiŋ (Sioux) children at residential schools, alongside missionaries’ own commentary about material.

For example, Rev. Florentin Digman of St. Francis Mission, on the Rosebud Reservation of South Dakota, wrote in May 1925 in a note accompanying a photograph of young Očhéthi Šakówiŋ girls:

“Dear Father, many thanks for informing us that Indian articles have been sent to the Vatican Mission Exposition. I hope that the articles will be as welcome as dessert at the feast.”

One of the most chilling finds in my archival research was uncovering drawings made by Indigenous children held in residential schools in British Columbia that were displayed at the 1925 VME. The phrase many missionaries used to describe these and other Indigenous belongings was “trophies of the pope.”

The 1925 exhibition created a visual reminder of the Vatican’s celebration of Catholic conquest.




Read more:
Looking for Indigenous history? ‘Shekon Neechie’ website recentres Indigenous perspectives


‘Anima Mundi’ exhibit today

Although the 1925 exhibition was a century ago, the same values of Indigenous erasure are present in current display techniques and the overall curatorial message.

In the Anima Mundi exhibit today, the curators don’t address the genocide of Indigenous children during the colonial era by missionaries in partnership with the state. Indigenous artists’ names, communities and languages are often not mentioned and Indigenous belongings are still described as gifts of the pope.

Pope Leo needs to acknowledge the harm caused by the patronizing and colonial narrative represented in Anima Mundi. The return of 62 Indigenous ancestors to Indigenous communities on Turtle Island is a first step, but much more needs to be done.

The Conversation

Gloria Bell receives funding from the Social Sciences and Humanities Research Council of Canada.

ref. An art historian looks at the origins of the Indigenous arts collection at the Vatican Museums – https://theconversation.com/an-art-historian-looks-at-the-origins-of-the-indigenous-arts-collection-at-the-vatican-museums-270239

Colleges teach the most valuable career skills when they don’t stick narrowly to preprofessional education

Source: The Conversation – USA (2) – By Daniel V. McGehee, Professor of Industrial and Systems Engineering, University of Iowa

Tracking graduates’ earnings is just one way to measure the benefit of higher education. iStock/Getty Images Plus

Across state legislatures and in Congress, debates are intensifying about the value of funding certain college degree programs – and higher education, more broadly.

The growing popularity of professional graduate degrees over the past several decades – including programs in business administration and engineering management – has reshaped the economics of higher education. Unlike traditional academic graduate programs, which are often centered on research and scholarship, these professionally oriented degrees are designed primarily for workforce advancement and typically charge much higher tuition.

These programs are often expensive for students and are sometimes described as cash-cow degrees for colleges and universities, because the tuition revenue far exceeds the instructional costs.

Some universities and colleges also leverage their brands to offer online, executive or certificate-based versions of these programs, attracting many students from the U.S. and abroad who pay the full tuition. This steady revenue helps universities subsidize tuition for other students who cannot pay the full rate, among other things.

Yet a quiet tension underlies this evolution in higher education – the widening divide between practical, technical training and a comprehensive education that perhaps is more likely to encourage students to inquire, reflect and innovate as they learn.

An overlooked factor

Some states, including Texas, track salary data for graduates of every program to measure worth through short-term earnings. This approach may strike many students and their families as useful, but I believe it overlooks a part of what makes higher education valuable.

A healthy higher education system depends not only on producing employable graduates but also on cultivating citizens and leaders who can interpret uncertainty, question assumptions and connect ideas across disciplines.

When assessing disciplines such as English, philosophy, history and world languages, I think that we should acknowledge their contributions to critical thought, communication and ethical reasoning.

These academic disciplines encourage students to synthesize ideas, construct arguments and engage in meaningful debate. Law schools often draw their strongest students from these backgrounds because they nurture analytical and rhetorical skills essential for navigating complex civic and legal issues.

Historically, poets and writers have often been among the first to be silenced by authoritarian regimes. It’s a reminder of the societal power of inquiry and expression that I believe higher education should protect.

Undergraduate students who want to become doctors or work in other specialized fields are often encouraged to take only classes that connect with their long-term career trajectory.
Glenn Beil/Florida A&M University via Getty Images

Why students stay on narrow professional paths

Students entering college today face significant pressure to choose what they might see as safe majors that will result in a well-paying career. For aspiring physicians and engineers, the path is often scripted early, steering them toward the physical and biological sciences. High test scores, internships and other stepping stones are treated as nonnegotiable. Parents and peers can reinforce this mindset.

Most colleges and universities do not reward a future medical student who wants to major in comparative literature, or an engineering student who is spending time on philosophy.

Students’ majors also typically place course requirements on them, in addition to a school’s general course requirements. This often does not leave a lot of room for students to experiment with different classes, especially if they are pursuing vocationally focused majors, such as engineering.

As a result, I’ve seen many students trade curiosity for credentialing, believing that professional identity must come before intellectual exploration.

As someone who began my education in psychology and later transitioned into engineering, I have seen how different intellectual traditions approach the same human questions. Psychology teaches people to observe behavior and design experiments. Engineering trains students to model systems and optimize performance.

When combined, they help reveal how humans interact with technology and how technological solutions reshape human behavior.

In my view, these are questions neither field can answer alone.

Initiative is the missing ingredient

One of the most important and often overlooked ingredients in thriving high tech, medical and business environments is initiative. I believe students in the humanities routinely practice taking initiative by framing questions, interpreting incomplete information and proposing original arguments. These skills are crucial for scientific or business innovation, but they are often not emphasized in structured science, technology, engineering and mathematics – or STEM – coursework.

Initiative involves the willingness to move first and to see around corners, defining the next what-if, rallying others and building something meaningful even when the path is uncertain.

To help my engineering students practice taking initiative, I often give them deliberately vague instructions – something they rarely experience in their coursework. Many students, even highly capable ones, hesitate to take initiative because their schooling experience has largely rewarded caution and compliance over exploration. They wait for clarity or for permission – not because they lack ability, but because they are afraid to be wrong.

Yet in business, research labs, design studios, hospitals and engineering firms, initiative is the quality employers most urgently need and cannot easily teach. Broader educational approaches help cultivate this confidence by encouraging students to interpret ambiguity rather than avoid it.

How teaching can evolve

Helping all students develop a sense of initiative and innovation requires university leaders to rethink what success looks like.

Universities can begin with achievable steps, such as rewarding cross-disciplinary teaching and joint appointments in promotion and tenure criteria.

At the University of Iowa’s Driving Safety Research Institute, where our teams blend engineering, medicine, public health and psychology, students quickly learn that a safe automated vehicle is not just a technical system but also a behavioral one. Understanding how human drivers respond to automation is as important as the algorithms that govern the vehicle.

Other institutions are modeling this approach of integrating social, behavioral and physical sciences.

Olin College of Engineering, a school in Needham, Massachusetts, builds every project around both technical feasibility and human context. Courses are often co-taught by humanities and engineering professors, and projects require students to articulate not only what they built but why it matters.

Still, integrating liberal and technical education is difficult in practice. Professional curricula often overflow with accreditation requirements. Faculty incentives reward specialization more than collaboration. Students and parents, anxious about debt and job security, hesitate to spend credits outside of a student’s major.

Rethinking what success means

I believe that higher education’s purpose is not to produce uniform workers but adaptable thinkers.

It might not be productive to center conversations about defending the liberal arts or glorifying STEM. Rather, I think that people’s focus should be on recognizing that each field is incomplete without the other.

Education for a complex world must cultivate depth, initiative and perspective. When students connect disciplines, question assumptions and act with purpose, they are prepared not only for their first job but for a lifetime of learning and leadership.

The Conversation

Daniel V. McGehee does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Colleges teach the most valuable career skills when they don’t stick narrowly to preprofessional education – https://theconversation.com/colleges-teach-the-most-valuable-career-skills-when-they-dont-stick-narrowly-to-preprofessional-education-270025

Does BBC Civilisations get its four stories of collapse correct? Experts weigh in

Source: The Conversation – UK – By Jay Silverstein, Senior Lecturer in the Department of Chemistry and Forensics, Nottingham Trent University

In four episodes, the BBC’s Civilisations series tells the story of the fall of the Romans, Aztecs, Egypt’s Ptolemies and Japan’s Edo Samurais. The show tells these stories through a combination of recreated dramatic scenes, explanation from experts and discussions of objects from the British Museum. Here, four experts in each period have reviewed the episodes and shared their recommendations for further reading.

The Collapse of the Roman Empire

The canonical date of the fall of the Western Roman Empire is 476, when the general Odoacer deposed the last emperor, Romulus Augustulus – a child who had been on the throne for less than a year. I teach my students that this relatively muted event was probably not noticed by many ordinary people at the time, as very little likely immediately changed in their daily lives.

Instead, the much more dramatic events of 410 were the real collapse moment of the ancient world: the metropolis of Rome, the capital of the empire, was sacked by King Alaric and his Gothic army. As one of the expert contributors to this episode puts it, you would remember where you were when the news reached you.

The episode’s key achievement is to depict the way that Roman mistreatment of the Goths – a Germanic-speaking people many of whom fled war with Huns into the Roman Empire – effectively threatened their survival and backed them into a corner. While historians have long discussed these realities, it’s refreshing to see this message presented in such a compelling and humane way to the wider public. The contemporary resonances are obvious, and while history cannot provide us with answers, it can give us food for thought.

Further reading
To learn more about the end of the Western Roman Empire, I would recommend starting with the very readable and provocative introduction by Bryan Ward-Perkins, The Fall of Rome: And the End of Civilization. It looks at the very real changes that ordinary people would have experienced as a centuries-old empire fell apart.

Tim Penn is Lecturer in Roman and Late Antique Material Culture at University of Reading

The Last Days of the Ptolemies in Egypt

Neither the gradual decline nor the final fall of the Ptolemaic dynasty in Egypt in 30 BC is accurately realised in this episode. It presents a simplistic narrative riddled with factual inaccuracies. It also features inadvertent misreadings or deliberate misrepresentations that play fast and loose with the historical chronology of the reign of Cleopatra VII, and the significant historical figures that were part of it.

Such inaccuracy is not helped by the fact that, with the exception of two contributors, no one participating is actually an expert on this specific period of ancient Egyptian history. One prominent figure is not even an historian or archaeologist at all.

Most of the artefacts that are incorporated in an attempt to provide insight don’t date to this period of Egyptian history, and lead the narrative off in irrelevant directions. It’s not clear who the intended audience is, nor what they are expected to take away from this, beyond appreciation for the sumptuous dramatisation that unfolds in the background. There was potential here, such as the contribution of climate change and the wider geopolitical context, that was unfortunately squandered.

Further reading

If you want to read about Cleopatra’s reign specifically, then Duane W. Roller’s Cleopatra: A Biography is good. For the Ptolemaic dynasty more broadly, from start to end, I’d recommend Lloyd Llewelyn-Jones’s The Cleopatras: The Forgotten Queens of Egypt.

Jane Draycott is Senior Lecturer in Ancient History at the University of Glasgow

The Collapse of the Aztec Empire

The episode on the Aztecs focuses on the Aztec emperor Moctezuma in the 15th century. It offers a refreshing shift from the Eurocentric narrative that often paints him as indecisive while glorifying his nemesis, the conquistador Hernán Cortés. Here, the roles are reversed: Cortés’s ambition and brutality are exposed, while Moctezuma appears as a thoughtful and capable leader. Their confrontation feels less like a simple conquest and more like a high-stakes chess match – Moctezuma had Cortés in check until one audacious move changed history.

If you’re looking for a comprehensive account of the Aztec collapse, this episode won’t deliver that. Experts such as Matthew Restall, known for challenging colonial myths, are used sparingly, and the story remains selective. Key events are skipped, and contradictory sources are left out. All of this is inevitable in a single-episode format.

What it does offer is a visually stunning, well narrated introduction to imperial collapse, framed through iconic artefacts that bring the past to life.

Further reading

To learn more about the fall of the Aztecs, read
The True History of the Conquest of New Spain, Volume 4 by Bernal Díaz del Castillo – a Spaniard who served under Cortés during the conquest of the Aztec Empire. There are many translations, but the first edition of the text, edited by Mexican historian Genaro García and translated by Alfred Percival Maudslay, is my pick.

Jay Silverstein is Senior Lecturer in the Department of Chemistry and Forensics at Nottingham Trent University

The End of the Samurai in Japan

This episode deals with the military encounter between the American “black ships” (kurofune 黒船) under naval commodore Matthew Perry and the Tokugawa shogunate 徳川幕府 between 1852 and 1855. The interviewed historians are certainly familiar with the event, yet the conceptual framing is not quite right.

“Traditional Japan” is introduced as an unchanging and isolated place. In reality, Japan had lived in close economic and cultural symbiosis with continental East Asia since at least the rise of Buddhism in the 6th century.

A 1637 proclamation, known as sakoku, by the Tokugawa shogunate did make Japan a hostile place for Christians and foreigners. However, the Protestant Dutch, arch-enemies of their former Spanish overlords, were granted the right to send annual expeditions. These became the basis for Japan’s “Dutch studies” (rangaku 蘭學), an exchange of scientific knowledge which is ignored by the programme. Meanwhile, contact with China and Korea continued, albeit under stricter regulations.

The documentary dwells on the image of a powerful and conservative samurai class without alluding to the social transformations which had eroded its influence. The capital Edo was not only the largest city on earth, but a veritable engine of urbanisation and commercialisation.

This documentary is still a pleasure to watch, but the premise that Perry’s western gunboats led to the “fall” of Japanese civilisation is erroneous.

Further reading
If you want to know more about the political and social turmoil that led to the end of the samurais and the Tokugawa shogunate, I recommend The Emergence of Meiji Japan by Marius B. Jansen.

Lars Laamann is Senior Lecturer in the History of China at SOAS, University of London


This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.


Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.


The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Does BBC Civilisations get its four stories of collapse correct? Experts weigh in – https://theconversation.com/does-bbc-civilisations-get-its-four-stories-of-collapse-correct-experts-weigh-in-270114

From concrete to community: How synthetic data can make urban digital twins more humane

Source: The Conversation – USA – By Wei Zhai, Associate Professor of Public Affairs and Planning, University of Texas at Arlington

How people behave is a critical element of how cities function. Ahmed Deeb/picture alliance via Getty Images

When city leaders talk about making a town “smart,” they’re usually talking about urban digital twins. These are essentially high-tech, 3D computer models of cities. They are filled with data about buildings, roads and utilities. Built using precision tools like cameras and LiDAR – light detection and ranging – scanners, these twins are great at showing what a city looks like physically.

But in their rush to map the concrete, researchers, software developers and city planners have missed the most dynamic part of urban life: people. People move, live and interact inside those buildings and on those streets.

This omission creates a serious problem. While an urban digital twin may perfectly replicate the buildings and infrastructure, it often ignores how people use the parks, walk on the sidewalks, or find their way to the bus. This is an incomplete picture; it cannot truly help solve complex urban challenges or guide fair development.

To overcome this problem, digital twins will need to widen their focus beyond physical objects and incorporate realistic human behaviors. Though there is ample data about a city’s inhabitants, using it poses a significant privacy risk. I’m a public affairs and planning scholar. My colleagues and I believe the solution to producing more complete urban digital twins is to use synthetic data that closely approximates real people’s data.


The privacy barrier

To build a humane, inclusive digital twin, it’s critical to include detailed data on how people behave. And the model should represent the diversity of a city’s population, including families with young children, disabled residents and retirees. Unfortunately, relying solely on real-world data is impractical and ethically challenging.

The primary obstacles are significant, starting with strict privacy laws. Rules such as the European Union’s General Data Protection Regulation, or GDPR, often prevent researchers and others from widely sharing sensitive personal information. This wall of privacy stops researchers from easily comparing results and limits our ability to learn from past studies.

Furthermore, real-world data is often unfair. Data collection tends to be uneven, missing large groups of people. Training a computer model using data where low-income neighborhoods have sparse sensor coverage means the model will simply repeat and even magnify that original unfairness. To compensate for this, researchers can use the statistical technique of weighting the data in the models to make up for the underrepresentation.
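The weighting technique mentioned above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual method: the neighborhood names, sensor-coverage rates and trip counts are all invented for the example.

```python
# Hypothetical sketch: inverse-coverage weighting to offset uneven sensor data.
# All neighborhood names and numbers below are illustrative, not real data.

# Assumed fraction of each neighborhood's activity captured by sensors
sensor_coverage = {"uptown": 0.90, "midtown": 0.75, "low_income_district": 0.30}

# Raw trip counts observed by those sensors
observed_trips = {"uptown": 9000, "midtown": 6000, "low_income_district": 1500}

# Re-weight each observation by the inverse of its coverage, so that
# under-sensed neighborhoods count proportionally more in the model
estimated_trips = {
    area: observed / sensor_coverage[area]
    for area, observed in observed_trips.items()
}

for area, estimate in estimated_trips.items():
    print(f"{area}: observed {observed_trips[area]}, weighted estimate {estimate:.0f}")
```

In this toy example the low-income district's 1,500 observed trips are scaled up to an estimated 5,000, because its sensors capture only 30% of activity; without the weighting, a model trained on the raw counts would understate that neighborhood's demand.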

Synthetic data offers a practical solution. It is artificial information generated by computers that mimics the statistical patterns of real-world data. This protects privacy while filling critical data gaps.

Synthetic data: Tool for fairer cities

Adding synthetic human dynamics fundamentally changes digital twins. It shifts them from static models of infrastructure to dynamic simulations that show how people live in the city. By generating synthetic patterns of walking, bus riding and public space use, planners can include a wider, more inclusive range of human actions in the models.

For example, Bogotá, Colombia, is using a digital twin to model its TransMilenio bus rapid transit system. Instead of relying only on limited or privacy-sensitive real-world sensor data, the city planners generated synthetic data to fill the digital twin. Such data artificially creates millions of simulated bus arrivals, vehicle speeds and queue lengths, all based on the statistical patterns – peak times, off-peak times – of actual TransMilenio operations.
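The idea of generating arrivals from statistical patterns can be sketched as follows. This is a hedged toy example, not Bogotá's actual pipeline: the station name, hourly rates and the choice of a Gaussian draw are all assumptions (a production system would more likely fit a Poisson or empirical distribution to observed operations).

```python
# Hypothetical sketch: synthesizing hourly bus-arrival counts that mimic
# peak/off-peak patterns. Rates and station names are invented, not actual
# TransMilenio figures.
import random

random.seed(42)  # reproducible draws for the example

# Assumed mean arrivals per hour, by period
arrival_rates = {"peak": 40, "off_peak": 12}
PEAK_HOURS = (6, 7, 8, 16, 17, 18)

def synthetic_day(station: str) -> list[dict]:
    """Generate one day of synthetic hourly arrival counts for a station."""
    records = []
    for hour in range(24):
        rate = arrival_rates["peak" if hour in PEAK_HOURS else "off_peak"]
        # Draw a count near the period's mean; clamp at zero since a
        # negative arrival count is meaningless
        count = max(0, round(random.gauss(rate, rate ** 0.5)))
        records.append({"station": station, "hour": hour, "arrivals": count})
    return records

day = synthetic_day("Portal_Norte")
peak_avg = sum(r["arrivals"] for r in day if r["hour"] in PEAK_HOURS) / len(PEAK_HOURS)
```

Because the synthetic records are drawn from aggregate patterns rather than copied from individual trips, no real rider's movements appear in the data, which is what makes this approach privacy-preserving.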

This approach transforms urban planning in several crucial ways, making simulations more realistic and diverse. For example, planners can use synthetic pedestrian data to model how elderly and disabled residents would navigate a new urban design.

It also allows for risk-free testing of ideas. Planners can simulate diverse synthetic populations to see how a new flood evacuation plan would affect various groups, all without risking anyone’s safety or privacy in the real world.


Making digital twins trustworthy

For all the promises of synthetic data, it can only be helpful if planners can trust it. Since they base major decisions on these virtual worlds, the synthetic data must be proved to be a reliable replacement for real-world data. Planners can test this by checking to see if the main policy decisions they reach using the synthetic data are the same ones they would have made using real-world data that puts people’s privacy at risk. If the decisions match, the synthetic data is trustworthy enough to use for that planning task going forward.
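The decision-consistency check described above can be made concrete with a small sketch. Everything here is hypothetical: the planning rule, the wait-time threshold and the sample numbers are invented purely to show the shape of the test.

```python
# Hypothetical sketch of a decision-consistency check: synthetic data is
# deemed trustworthy for a planning task if it leads to the same decision
# as the privacy-sensitive real data. All numbers below are invented.

def choose_intervention(avg_wait_minutes: float) -> str:
    """A simplified planning rule: add buses if average waits are too long."""
    return "add_buses" if avg_wait_minutes > 10 else "keep_schedule"

def decision_consistent(real_waits: list[float], synthetic_waits: list[float]) -> bool:
    """Pass if both datasets yield the same planning decision."""
    real_decision = choose_intervention(sum(real_waits) / len(real_waits))
    synth_decision = choose_intervention(sum(synthetic_waits) / len(synthetic_waits))
    return real_decision == synth_decision

real = [12.0, 14.5, 11.2, 13.8]        # private, real-world wait times (minutes)
synthetic = [11.5, 13.9, 12.4, 14.1]   # generated stand-in data

trustworthy = decision_consistent(real, synthetic)
```

The key design point is that the comparison happens once, in a controlled setting; after the synthetic data passes for a given task, planners can work from it alone and the real data never needs to circulate.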

Beyond technical checks, it’s important to consider fairness. This means routinely auditing the synthetic models to check for any hidden biases or underrepresentation across different groups. For example, planners can make sure an emergency evacuation plan in the urban digital twin works for elderly residents with mobility issues.

Most importantly, I believe planners should include their communities. Establishing citizen advisory boards and designing the synthetic data and simulation scenarios directly with the people who live in the city helps ensure that their experiences are accurately reflected.

By moving beyond static infrastructure to dynamic environments that include people’s behavior, synthetic data is set to play a critical role in urban planning. It will shape the resilient, inclusive and human-centered urban digital twins of the future.

The Conversation

Wei Zhai receives funding from National Science Foundation.

ref. From concrete to community: How synthetic data can make urban digital twins more humane – https://theconversation.com/from-concrete-to-community-how-synthetic-data-can-make-urban-digital-twins-more-humane-268847

The ChatGPT effect: In 3 years the AI chatbot has changed the way people look things up

Source: The Conversation – USA – By Deborah Lee, Professor and Director of Research Impact and AI Strategy, Mississippi State University

ChatGPT has become the go-to app for hundreds of millions of people. AP Photo/Kiichiro Sato

Three years ago, if someone needed to fix a leaky faucet or understand inflation, they usually did one of three things: typed the question into Google, searched YouTube for a how-to video or shouted desperately at Alexa for help.

Today, millions of people start with a different approach: They open ChatGPT and just ask.

I’m a professor and director of research impact and AI strategy at Mississippi State University Libraries. As a scholar who studies information retrieval, I see that this shift in which tool people reach for first when looking for information is at the heart of how ChatGPT has changed everyday technology use.

Change in searching

The biggest change isn’t that other tools have vanished. It’s that ChatGPT has become the new front door to information. Within months of its introduction on Nov. 30, 2022, ChatGPT had 100 million weekly users. By late 2025, that figure had grown to 800 million. That makes it one of the most widely used consumer technologies on the planet.

Surveys show that this use isn’t just curiosity – it reflects a real change in behavior. A 2025 Pew Research Center study found that 34% of U.S. adults have used ChatGPT, roughly double the share found in 2023. Among adults under 30, a clear majority (58%) have tried it. An AP-NORC poll reports that about 60% of U.S. adults who use AI say they use it to search for information, making this the most common AI use case. The number rises to 74% for the under-30 crowd.

Traditional search engines are still the backbone of the online information ecosystem, but the kind of searching people do has shifted in measurable ways since ChatGPT entered the scene. People are changing which tool they reach for first.

For years, Google was the default for everything from “how to reset my router” to “explain the debt ceiling.” These basic informational queries made up a huge portion of search traffic. But these quick, clarifying, everyday “what does this mean” questions are the ones ChatGPT now answers faster and more cleanly than a page of links.

And people have noticed. A 2025 U.S. consumer survey found that 55% of respondents now use OpenAI’s ChatGPT or Google’s Gemini AI chatbots for tasks they previously would have asked Google search to help them with, with even higher usage figures for the U.K. Another analysis of more than 1 billion search sessions found that traffic from generative AI platforms is growing 165 times faster than traditional searches, and about 13 million U.S. adults have already made generative AI their go-to tool for online discovery.

This doesn’t mean people have stopped “Googling,” but it means ChatGPT has peeled off the kinds of questions for which users want a direct explanation instead of a list of links. Curious about a policy update? Need a definition? Want a polite way to respond to an uncomfortable email? ChatGPT is faster, feels more conversational and feels more definitive.

At the same time, Google isn’t standing still. Its search results look different than they did three years ago because Google started weaving its AI system Gemini directly into the top of the page. The “AI Overview” summaries that appear above traditional search links now instantly answer many simple questions – sometimes accurately, sometimes less so.

But either way, many people never scroll past that AI-generated snapshot. That fact, combined with the impact of ChatGPT, is why the number of “zero-click” searches has surged. One report using Similarweb data found that traffic from Google to news sites fell from over 2.3 billion visits in mid-2024 to under 1.7 billion in May 2025, while the share of news-related searches ending in zero clicks jumped from 56% to 69% in one year.

Google search excels at pointing to a wide range of sources and perspectives, but the results can feel cluttered and designed more for clicks than clarity. ChatGPT, by contrast, delivers a more focused and conversational response that prioritizes explanation over ranking. The ChatGPT response can lack the source transparency and multiple viewpoints often found in a Google search.

In terms of accuracy, both tools can occasionally get it wrong. Google’s strength lies in letting users cross-check multiple sources, while ChatGPT’s accuracy depends heavily on the quality of the prompt and the user’s ability to recognize when a response should be verified elsewhere.

OpenAI is aiming to make it even more appealing to turn to ChatGPT first for search by trying to get people to use a browser with ChatGPT built in.

Smart speakers and YouTube

The impact of ChatGPT has reverberated beyond search engines. Voice assistants, such as Amazon’s Alexa speakers and Google Home, continue to show high ownership rates, but that number is down slightly. One 2025 summary of voice-search statistics estimates that about 34% of people ages 12 and up own a smart speaker, down from 35% in 2023. This is not a dramatic decline, but the lack of growth may indicate a shift of more complex queries to ChatGPT or similar tools. When people want a detailed explanation, a step-by-step plan or help drafting something, a voice assistant that answers in a short sentence suddenly feels limited.

By contrast, YouTube remains a giant. As of 2024, it had approximately 2.74 billion users, with that number increasing steadily since 2010. Among U.S. teens, about 90% say they use YouTube, making it the most widely used platform in that age group. But what kind of videos people are looking for is changing.

People now tend to start with ChatGPT and then move to YouTube if they need the additional information a how-to video conveys. For many everyday tasks, such as “explain my health benefits” or “help me write a complaint email,” people ask ChatGPT for a summary, script or checklist. They head to YouTube only if they need to see a physical process.

You can see a similar pattern in more specialized spaces. Software engineers, for instance, have long relied on sites such as Stack Overflow for tips and pieces of software code. But question volume there began dropping sharply after ChatGPT’s release, and one analysis suggests overall traffic fell by about 50% between 2022 and 2024. When a chatbot can generate a code snippet and an explanation on demand, fewer people bother typing a question into a public forum.

So where does that leave us?

Three years in, ChatGPT hasn’t replaced the rest of the tech stack; it’s reordered it. The default starting point has shifted. Search engines are still for deep dives and complex comparisons. YouTube is still for seeing real people do real things. Smart speakers are still for hands-free convenience.

But when people need to figure something out, many now start with a chat conversation, not a search box. That’s the real ChatGPT effect: It didn’t just add another app to our phones – it quietly changed how we look things up in the first place.

The Conversation

Deborah Lee does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The ChatGPT effect: In 3 years the AI chatbot has changed the way people look things up – https://theconversation.com/the-chatgpt-effect-in-3-years-the-ai-chatbot-has-changed-the-way-people-look-things-up-270143

When darkness shines: How dark stars could illuminate the early universe

Source: The Conversation – USA – By Alexey A. Petrov, Professor of physics and astronomy, University of South Carolina

NASA’s James Webb Space Telescope has spotted some potential dark star candidates. NASA, ESA, CSA, and STScI

Scientists working with the James Webb Space Telescope discovered three unusual astronomical objects in early 2025, which may be examples of dark stars. The concept of dark stars has existed for some time and could alter scientists’ understanding of how ordinary stars form. However, their name is somewhat misleading.

“Dark stars” is an unfortunate name that, on the surface, does not accurately describe the objects it labels. Dark stars are not exactly stars, and they are certainly not dark.

Still, the name captures the essence of this phenomenon. The “dark” in the name refers not to how bright these objects are, but to the process that makes them shine — driven by a mysterious substance called dark matter. The sheer size of these objects makes it difficult to classify them as stars.

As a physicist, I’ve been fascinated by dark matter, and I’ve been trying to find a way to see its traces using particle accelerators. I’m curious whether dark stars could provide an alternative method to find dark matter.

What makes dark matter dark?

Dark matter, which makes up approximately 27% of the universe but cannot be directly observed, is a key idea behind the phenomenon of dark stars. Astrophysicists have studied this mysterious substance for nearly a century, yet we haven’t seen any direct evidence of it besides its gravitational effects. So, what makes dark matter dark?

A pie chart showing the composition of the universe. The largest proportion is 'dark energy,' at 68%, while dark matter makes up 27% and normal matter 5%. The rest is neutrinos, free hydrogen and helium and heavy elements.
Despite physicists not knowing much about it, dark matter makes up around 27% of the universe.
Visual Capitalist/Science Photo Library via Getty Images

Humans primarily observe the universe by detecting electromagnetic waves emitted by or reflected off various objects. For instance, the Moon is visible to the naked eye because it reflects sunlight. Atoms on the Moon’s surface absorb photons – the particles of light – coming from the Sun, causing the electrons within those atoms to move and re-emit some of that light toward us.

More advanced telescopes detect electromagnetic waves beyond the visible spectrum, such as ultraviolet, infrared or radio waves. They use the same principle: Electrically charged components of atoms react to these electromagnetic waves. But how can they detect a substance – dark matter – that not only has no electric charge but also has no electrically charged components?

Although scientists don’t know the exact nature of dark matter, many models suggest that it is made up of electrically neutral particles – those without an electric charge. This trait makes it impossible to observe dark matter in the same way that we observe ordinary matter.

Dark matter is thought to be made of particles that are their own antiparticles. Antiparticles are the “mirror” versions of particles. They have the same mass but opposite electric charge and other properties. When a particle encounters its antiparticle, the two annihilate each other in a burst of energy.

If dark matter particles are their own antiparticles, they would annihilate upon colliding with each other, potentially releasing large amounts of energy. Scientists predict that this process plays a key role in the formation of dark stars, provided the density of dark matter particles inside these stars is sufficiently high: the denser the dark matter, the more often its particles encounter, and annihilate, each other.
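The density dependence can be made explicit. In standard dark matter models, the annihilation heating rate scales with the square of the particle number density, so doubling the density quadruples the energy release. A sketch in conventional notation (the symbols below are the usual ones from the astrophysics literature, not taken from this article):

```latex
% Energy injected per unit volume per unit time by annihilation:
%   n_chi          = dark matter particle number density
%   <sigma v>      = thermally averaged annihilation cross-section times velocity
%   m_chi c^2      = rest energy of one dark matter particle
Q_{\text{ann}} \;=\; n_\chi^{2}\,\langle\sigma v\rangle\, m_\chi c^{2}
```

The quadratic factor $n_\chi^{2}$ appears because an annihilation requires two particles to meet, which is why dark stars are only expected to form where the early universe concentrated dark matter to high densities.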

What makes a dark star shine?

The concept of dark stars stems from a fundamental yet unresolved question in astrophysics: How do stars form? In the widely accepted view, clouds of primordial hydrogen and helium — the chemical elements formed in the first minutes after the Big Bang, approximately 13.8 billion years ago — collapsed under gravity. They heated up and initiated nuclear fusion, which formed heavier elements from the hydrogen and helium. This process led to the formation of the first generation of stars.

Two bright clouds of gas condensing around a small central region
Stars form when clouds of dust collapse inward and condense around a small, bright, dense core.
NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

In the standard view of star formation, dark matter is seen as a passive element that merely exerts a gravitational pull on everything around it, including primordial hydrogen and helium. But what if dark matter had a more active role in the process? That’s exactly the question a group of astrophysicists raised in 2008.

In the dense environment of the early universe, dark matter particles would collide with, and annihilate, each other, releasing energy in the process. This energy could heat the hydrogen and helium gas, preventing it from further collapse and delaying, or even preventing, the typical ignition of nuclear fusion.

The outcome would be a starlike object — but one powered by dark matter heating instead of fusion. Unlike regular stars, these dark stars might live much longer because they would continue to shine as long as they attracted dark matter. This trait would make them distinct from ordinary stars, as their cooler temperature would result in lower emissions of various particles.

Can we observe dark stars?

Several unique characteristics help astronomers identify potential dark stars. First, these objects must be very old. As the universe expands, the frequency of light coming from objects far away from Earth decreases, shifting toward the infrared end of the electromagnetic spectrum, meaning it gets “redshifted.” The oldest objects appear the most redshifted to observers.
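The redshift mentioned above has a simple quantitative form: expansion stretches every emitted wavelength by a factor of 1 + z. A minimal sketch (the wavelengths in the example are illustrative round numbers, not values from the study):

```python
def redshift(observed_nm: float, emitted_nm: float) -> float:
    """Cosmological redshift z from observed and rest-frame wavelengths.

    Expansion stretches light in transit, so for distant objects the
    observed wavelength exceeds the emitted one and z > 0; a larger z
    means the light left its source earlier in cosmic history.
    """
    return observed_nm / emitted_nm - 1.0

# Illustrative example: hydrogen's Lyman-alpha line has a rest wavelength
# of about 121.6 nm (ultraviolet). If it is observed at 1,700 nm (infrared),
# the source sits at roughly z = 13 -- the kind of very high redshift the
# James Webb Space Telescope was built to probe.
z = redshift(1700.0, 121.6)
print(round(z, 1))  # -> 13.0
```

This is why candidate dark stars are hunted in the infrared: their ultraviolet and visible light has been stretched far down the spectrum by the time it reaches us.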

Since dark stars form from primordial hydrogen and helium, they are expected to contain little to no heavier elements, such as oxygen. They would be very large and cooler on the surface, yet highly luminous because their size — and the surface area emitting light — compensates for their lower surface brightness.

They are also expected to be enormous, with radii of tens of astronomical units (one astronomical unit is the average distance between Earth and the Sun). Some supermassive dark stars are theorized to reach masses of roughly 10,000 to 10 million times that of the Sun, depending on how much dark matter and hydrogen or helium gas they accumulate as they grow.
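The claim that sheer size can compensate for a cool surface follows from the Stefan-Boltzmann law, L = 4πR²σT⁴: luminosity grows with the square of the radius. A minimal sketch with illustrative round numbers (the 10-astronomical-unit radius and 10,000 K temperature below are chosen for scale, not taken from the study):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11      # astronomical unit, m
R_SUN = 6.957e8    # solar radius, m
T_SUN = 5772.0     # solar effective temperature, K

def luminosity(radius_m: float, temp_k: float) -> float:
    """Total radiated power of a blackbody sphere: L = 4*pi*R^2*sigma*T^4."""
    return 4.0 * math.pi * radius_m**2 * SIGMA * temp_k**4

l_sun = luminosity(R_SUN, T_SUN)
# A hypothetical dark star: radius 10 AU, surface at only 10,000 K.
l_dark = luminosity(10 * AU, 1.0e4)
print(f"dark star / Sun luminosity ratio: {l_dark / l_sun:.1e}")
```

Even with a surface not much hotter than the Sun’s, a radius thousands of times larger yields a luminosity tens of millions of times greater, which is how such cool objects could still outshine early galaxies.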

So, have astronomers observed dark stars? Possibly. Data from the James Webb Space Telescope has revealed some very high-redshift objects that seem brighter — and possibly more massive — than what scientists expect of typical early galaxies or stars. These results have led some researchers to propose that dark stars might explain these objects.

Artist's impression of the James Webb telescope, which has a hexagonal mirror made up of smaller hexagons, and sits on a rhombus-shaped spacecraft.
The James Webb Space Telescope, shown in this illustration, detects light coming from objects in the universe.
Northrop Grumman/NASA

In particular, a recent study analyzing James Webb Space Telescope data identified three candidates consistent with supermassive dark star models. The researchers looked at how much helium these objects contain to identify them. Because dark stars are heated by dark matter annihilation rather than by nuclear fusion, which processes hydrogen and helium into heavier elements, they should retain more of their primordial helium.

The researchers highlight that one of these objects indeed exhibited a potential “smoking gun” helium absorption signature: a far higher helium abundance than one would expect in typical early galaxies.

Dark stars may explain early black holes

What happens when a dark star runs out of dark matter? It depends on the size of the dark star. For the lightest dark stars, the depletion of dark matter would mean gravity compresses the remaining hydrogen, igniting nuclear fusion. In this case, the dark star would eventually become an ordinary star, so some stars may have begun as dark stars.

Supermassive dark stars are even more intriguing. At the end of their lifespan, a dead supermassive dark star would collapse directly into a black hole. This black hole could start the formation of a supermassive black hole, like the kind astronomers observe at the centers of galaxies, including our own Milky Way.

Dark stars might also explain how supermassive black holes formed in the early universe. They could shed light on some unique black holes observed by astronomers. For example, a black hole in the galaxy UHZ-1 has a mass approaching 10 million solar masses, and is very old – it formed just 500 million years after the Big Bang. Traditional models struggle to explain how such massive black holes could form so quickly.

The idea of dark stars is not universally accepted. These dark star candidates might still turn out just to be unusual galaxies. Some astrophysicists argue that matter accretion — a process in which massive objects pull in surrounding matter — alone can produce massive stars, and that studies using observations from the James Webb telescope cannot distinguish between massive ordinary stars and less dense, cooler dark stars.

Researchers emphasize that they will need more observational data and theoretical advancements to solve this mystery.

The Conversation

Alexey A Petrov receives funding from the US Department of Energy.

ref. When darkness shines: How dark stars could illuminate the early universe – https://theconversation.com/when-darkness-shines-how-dark-stars-could-illuminate-the-early-universe-266971