What we’ve learned in ten years about county lines drug dealing

Source: The Conversation – UK – By Jenna Carr, Graduate Teaching Fellow and Sociology PhD Researcher, University of Liverpool


A decade ago, the National Crime Agency identified a new drug supply method. Before then, drug supply was predominantly between user-dealers – people supplying their social circles to fund their drug use, rather than for commercial gain.

In 2015, police outside of London identified a pattern of more frequent arrests of young people and vulnerable adults implicated in drug supply outside their local areas. These people were also frequently suspected of being associated with members of criminal gangs. Thus, “county lines” was born.

The National Crime Agency used the term “county lines” to describe the phone or “deal” line used to organise the sale of drugs – mainly heroin and crack cocaine – from cities with oversaturated supplies, to rural, coastal areas with less supply.

The deal line was controlled by gang members based in inner city “exporter” areas, such as London or Liverpool. The sale of drugs would be completed by a young or vulnerable person who had been exploited, and sometimes trafficked out of their home area, to rural “importer” areas such as north Wales and Cornwall. The crossing of local authority and police boundaries made county lines difficult to police, and made it harder to safeguard those who had been exploited.

County lines is notably violent. It involves gang violence, knife crime, drug misuse, sexual exploitation and modern-day slavery.

Ten years on, county lines as a supply model continues to evolve. A recent assessment by the National Police Chiefs’ Council found that the practice is becoming more localised, with fewer lines crossing police force boundaries and more running from one end of a force area to the other. It is also no longer limited to the supply of class A substances, with police reporting seizures of cannabis, cash and weapons.

Researchers are now suggesting that the term “county lines” itself is outdated, and instead should be replaced with a term that focuses more on the exploitation involved, rather than drug supply.

Who gets involved

County lines affects both children and vulnerable adults. The government has estimated that 14,500 children are at risk of child criminal exploitation, but this is likely to be an underestimate. Particular risk factors include being between 15 and 17 years old, experiences of neglect and abuse, economic vulnerability, school exclusion and frequent episodes of going missing from home.

Cuckooing, where a gang will take over homes as a base for drug supply, largely affects vulnerable adults, rather than children.

One challenge in responding to county lines is that vulnerability can be difficult to recognise. Victims and perpetrators of exploitation are often one and the same. Often, victims will be unwilling to cooperate with police, out of fear of legal consequences and repercussions from their exploiters.

Those who have been exploited into participating in county lines often do not accept that they are a victim – they may think they are profiting from their involvement, both financially and socially. The ongoing cost of living crisis draws young and vulnerable people into county lines as a response to poverty and lack of legitimate and financially viable opportunities.

Responding to county lines

My ongoing research looks at the development of county lines policy and responses to the problem over the last ten years. Responses to county lines have been mainly led by law enforcement, with coordinated police “crackdowns”. But research shows that high-profile police operations are largely symbolic, and have the effect of drawing vulnerable people into the criminal justice system, which creates further harm.

One important development has been the use of the Modern Slavery Act in county lines cases. The act offers a legal defence for someone who has been exploited into selling drugs. But research has shown that, rather than acting as a safeguard and a defence, it acts as a “gateway into criminalisation”.

If someone crosses the boundary from victim to perpetrator of exploitation, they can also find themselves subjected to punitive criminal justice responses under the Modern Slavery Act. This is especially true for black men and boys, who have historically been treated more harshly, for example through stop and search, in relation to drug crime.

It’s become clear that county lines is an issue that criminal justice alone cannot respond to. Those who are at risk require safeguarding, not criminalisation. To this end, the government funds a specialist county lines victim support service that operates in the four main exporter locations.

But the availability of this support service only in exporter locations shows that the county lines response is a postcode lottery. Police forces in importer areas have fewer resources to dedicate to training officers to deal with complex county lines cases. A consistent national approach is still required.

What’s next?

The current government is planning to make child criminal exploitation and cuckooing specific criminal offences through new legislation. This has been celebrated as a success by child safety charities.

But should more criminalisation be the priority? Research shows that drug prohibition and punitive responses are ineffective at preventing young people and vulnerable adults becoming involved in county lines. The demand for drugs and structural issues such as poverty are fuelling county lines – policing alone cannot address this.

Instead of punitive legal responses, public health approaches and addressing the demand for drugs should be the priority. Investment is needed in support services and social care, which have been decimated by austerity cuts, to build a society where vulnerable people do not need to become involved in drug supply.


Want more politics coverage from academic experts? Every week, we bring you informed analysis of developments in government and fact check the claims being made.

Sign up for our weekly politics newsletter, delivered every Friday.


The Conversation

Jenna Carr does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What we’ve learned in ten years about county lines drug dealing – https://theconversation.com/what-weve-learned-in-ten-years-about-county-lines-drug-dealing-261438

What the world can learn from Korea’s 15th-century rain gauge

Source: The Conversation – UK – By Mooyoung Han, Professor of Environmental Engineering, Seoul National University

The rain gauge with a statue of King Sejong the Great in Seoul, Korea. KoreaKHW/Shutterstock

Droughts and floods are becoming more frequent and more severe across the globe. The cause is often rain — either too little or too much. The monsoon regions of the world, where societies have weathered cycles of drought and deluge for thousands of years, hold essential lessons about rainwater monitoring and conservation.

In Korea, one such lesson dates back to the 15th century. In 1441, during the reign of King Sejong, Korea established the world’s first official rain gauge (cheugugi) — a cylindrical copper instrument — and also created a state-administered rain monitoring network.

This wasn’t just a technical invention; it was part of a wider policy. On September 3 of that year, according to the Annals of the Choson Dynasty (a Unesco Memory of the World record), local magistrates across the country were ordered to measure rainfall regularly and report it to the central government.

This system represented one of the earliest forms of climate data governance and set a precedent for valuing rain as a measurable, manageable and fairly governed resource — a public good to be shared and respected. It also reflected a philosophical tradition in Korea of respecting rain not as a curse, but as a gift — one that must be understood, welcomed and shared.

India too has a rich tradition of rainwater harvesting, spanning from the Vedic period and the Indus–Sarasvati Valley civilisation (3,000–1,500BC) to the 19th century. Throughout diverse ecological zones, Indian communities developed decentralised systems to capture and store rainwater. The archaeological site of Dholavira in Gujarat, for example, featured sophisticated reservoirs designed to collect monsoon runoff.

Historical records, including ancient inscriptions, temple documents and folk traditions, indicate that these systems were not only engineered but also governed, with established rules for sharing, maintaining and investing in water as a communal resource. In some regions of India, every third house had its own well. Although these practices declined during colonial rule, they are now being revived by local communities, government initiatives, and non-governmental organisations.

The revival of traditional wells is gaining momentum, particularly in urban areas facing water scarcity. For example in the city of Bengaluru in southern India, local communities and organisations are using age-old well-digging techniques to tap into shallow aquifers. These efforts are often supported by the state or central government, as well as specialists and organisations including the Biome Environmental Trust, Aga Khan Trust for Culture, Indian National Trust for Art and Cultural Heritage, and the Centre for Science and Environment.

India’s current prime minister has also launched a campaign called Jal Shakti Abhiyan: Catch the Rain as part of a nationwide effort to restore and promote community-led rainwater harvesting.

Reviving ancient wisdom

In Korea, there’s also been a resurgence of this ancient wisdom in modern contexts. Although urban initiatives like the Star City rainwater management system show promise, the movement towards reviving old practices like rainwater harvesting is still growing.

Meanwhile in Cambodia, the Rain School Initiative empowers students and teachers to manage rainwater for drinking and climate education. Rainwater is not just a technical solution — it is a cultural key to resilience. It offers autonomy, sustainability and hope.

That is why we propose to establish UN Rain Day on September 3, in recognition of Korea’s historical contribution and in celebration of global rain literacy. It is a symbolic date that reminds us how rain has shaped civilisations and how it can shape our future — if only we choose to listen to the wisdom of water.

Designating international days has proven effective in raising awareness and catalysing global action. For instance, World Water Day (March 22) has spurred international cooperation and policymaking on water issues since its establishment in 1993. World Toilet Day (November 19) has elevated the global conversation around sanitation and public health.

A UN Rain Day would spotlight rain as a vital yet often overlooked resource. This is something that’s especially crucial for climate adaptation in monsoon regions and beyond.


Don’t have time to read about climate change as much as you’d like?

Get a weekly roundup in your inbox instead. Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. Join the 45,000+ readers who’ve subscribed so far.


The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. What the world can learn from Korea’s 15th-century rain gauge – https://theconversation.com/what-the-world-can-learn-from-koreas-15th-century-rain-gauge-261530

Storm Floris: the weather is rarely this windy in August – which makes it more dangerous

Source: The Conversation – UK – By Colin Manning, Postdoctoral Research Associate in Climate Science, Newcastle University

Storm Floris made landfall in northern parts of the UK on the morning of Monday August 4 2025, bringing intense rainfall followed by severe winds throughout the afternoon. The Met Office issued an amber weather warning for much of Scotland and yellow alerts for parts of Northern Ireland and northern England.

Affected areas can expect 20mm–40mm of rain on average, with some areas of Scotland potentially receiving up to 80mm. Wind speeds in exposed and elevated regions could reach 80mph–90mph, while gusts of 50mph–60mph are forecast for much of Scotland.

The storm’s defining characteristic is the unusually strong winds for August, a time typically less prone to severe wind events. The odd seasonal timing has increased the risk to the public, as more people are outdoors, travelling for holidays or staying in campsites. In addition, trees remain in full leaf, making them more likely to be brought down by high winds.

Authorities are anticipating significant disruption to transport and electricity networks largely due to falling trees. This is underlined by recent research showing an increased risk of large power outages during windstorms that occur in summer. A large amount of debris on the ground from trees may also block drainage systems and contribute to localised flooding.

Persistent strong winds will combine with periods of heavy rainfall for the duration of the amber alert, which expires at 23:00. This will create difficult conditions for emergency workers and prevent access to affected locations if roads are blocked, potentially prolonging disruptions to travel and power networks.

Is this typical of summer months?

Storm Floris carries all the hallmarks of a classic mid-latitude storm. These develop due to sharp temperature contrasts between the northern and southern Atlantic Ocean and intensify under the influence of a strong jet stream. This is a core of fast-moving air high in the atmosphere that stretches across the Atlantic and often steers storms towards the UK.

Such conditions are unusual for the summer months, when warmer Atlantic sea temperatures typically weaken these temperature gradients and shift them farther north, closer to the polar regions. However, it is not uncommon for such storms to occur in August.

Notable ones in the past five years include Storm Ellen, which extensively damaged electricity distribution infrastructure in Ireland and led the Irish meteorological service to issue red and amber weather warnings for southern parts of Ireland. Previous August storms cancelled the Boardmasters music festival in Cornwall in 2019 and closed two stages of Leeds festival in 2024.

Floris is classified as a Shapiro-Keyser cyclone, a type distinguished by a warm core encircled by colder air on its north, west and south sides. This structure is visible in the way the storm’s frontal system wraps around its centre, forming a characteristic comma shape in the clouds. Storms of this kind are responsible for a significant number of the UK’s most damaging wind events.

These cyclones often feature sharp pressure gradients and strong low-level airflows, particularly an air stream known as the cold jet, or cold conveyor belt, which can produce severe surface winds. In some cases, they can also generate a sting jet, a narrow stream of air that descends rapidly from around 5km above the land surface, delivering intense, damaging gusts.

Fortunately, satellite imagery suggests that Storm Floris is unlikely to have produced a sting jet. However, the cold jet alone may still drive wind speeds high enough to cause widespread disruption.

These types of storms can also produce intense rainfall along their frontal boundaries, as seen with Storm Floris. Warmer summer temperatures allow the atmosphere to hold more moisture, increasing the potential for heavier downpours. In addition, the heat contributes to a more unstable atmosphere, encouraging strong convective ascents of air that can yield extremely heavy and localised rainfall.

Floris in the future

Research shows that climate change will influence the characteristics of storms like Floris, though not all aspects will be affected equally. Warmer temperatures are expected to make future storms wetter, as increased atmospheric moisture and convective activity enhances rainfall, particularly along frontal systems. However, projections of wind extremes remain more uncertain.

Climate models generally suggest a modest intensification of winter storms over the UK and a decrease in the intensity of summer storms, implying that systems like Floris could become less common. These projections are largely tied to expected changes in Atlantic temperature gradients and the behaviour of the jet stream.

That said, most long-term climate projections rely on relatively coarse-resolution models, which often fail to capture key features that drive storm intensification. These include the Gulf Stream (a warm Atlantic Ocean current) and drivers of extreme winds such as the cold jet and sting jet.

A higher-resolution model, like that used in real-time forecasting for Storm Floris, predicts more intense winter windstorms in a warmer climate. Much of this intensification is linked to stronger cold jets and a potential increase in storms that generate sting jets.

Many powerful summer and autumn storms in the UK originate from tropical cyclones such as hurricanes, as seen with Storm Ophelia in 2017. These systems are poorly represented in lower-resolution climate models, yet they contribute significantly to Europe’s most extreme windstorms.

While Storm Floris has no tropical origins, a variety of storms can affect northern Europe at this time of year. The complexity of assessing their risks remains an area of ongoing research.




The Conversation

Colin Manning receives funding from UKRI.

ref. Storm Floris: the weather is rarely this windy in August – which makes it more dangerous – https://theconversation.com/storm-floris-the-weather-is-rarely-this-windy-in-august-which-makes-it-more-dangerous-262535

Should back-to-school require parent fundraising? Ontario schools are woefully underfunded, and families pay the price

Source: The Conversation – Canada – By Lana Parker, Associate Professor, Faculty of Education, University of Windsor

Back-to-school is around the corner, which means that many parents will soon receive requests from schools to pay fees, contribute supplies or support fundraising activities.

But many families are already shouldering significant financial concerns. This raises the question of why Ontario schools have become so reliant on direct fundraising contributions from parents.

Though the Ontario government insists it has never spent more money on education, a closer look at the facts and figures reveals that the budget allocated to education is woefully short of covering necessities.

My research, “Infinite Demands, Finite Resources: A Window into the Effects of Ongoing Underfunding and Trends of Privatization in Ontario Schools,” draws on discussions with educators to share insiders’ perspectives on how underfunding looks and feels in schools.

Increased demands, shortfalls

The Canadian Centre for Policy Alternatives (CCPA) released a 2022 report showing that, even amid the increased complexity of teaching during the pandemic, the Ontario government increased class sizes, cut funding and teaching staff and continued to permit the backlog for school infrastructure repair to balloon to nearly $17 billion.

Using the current government’s budget projections, Ontario’s Financial Accountability Office has forecast the education system will see a $12.3 billion shortfall over the next decade.

While some people might ask whether these cuts are a marker of prudent financial stewardship, the numbers once again reveal a different story.

The CCPA report showed that Ontario had robust GDP growth of nine per cent in 2021 and 6.6 per cent in 2022. Yet Ontario’s Financial Accountability Office found that, in 2017, “overall program spending in Ontario averaged roughly $2,000 per person, per year less than the average of the other provinces.”

In other words, the province has adequate funding, but is choosing to under-serve certain portfolios. For example, Ontario announced in its latest budget it will invest $28 billion on highways over 10 years.

Public investment with future returns

The choice to underfund education is shortsighted because research shows education is a public investment that can generate a high level of future returns.

This under-investment in education has real consequences for the day-to-day quality of schools. Parents who have children with special education needs have long been raising the alarm that their children lack access to adequate testing and supports, which is a direct function of insufficient funding.

Ontario’s principals, teachers and other educators issued an urgent statement in February 2025 advising the public of chronic underfunding and subsequent system challenges that “threaten the very foundation of the education our children and young people deserve.”

How boards are managing shortfalls

My recent research shines a light on the need for more sustainable funding.

The 11 highly experienced educators and one education organizer in my study described how school boards are trying to manage budget shortfalls by asking schools to increase fundraising and by asking school principals to look for private sector contributions.

They discuss how fees are becoming commonplace for extracurricular activities, which places a burden on families.

They decry the loss of materials for school libraries, arts programs and performance spaces. And they warn that the system cannot take many more years of disinvestment.

Full scope may not be clear to parents

Because educators are employed by public school boards and are responsible to the Ministry of Education, they might not be empowered to express their concerns to parents directly. Even parents who participate in school council meetings or fundraising efforts may not understand how much of an issue education underfunding is in their child’s school.

However, with their decades of experience, the educators in my study are unambiguous about the current situation.

One educator shared, “The students who suffer the most are the ones who are in our ESL programs and who are in our special education programs.”

Another noted, “With the formulas that would have been used pre-pandemic, I would have had four and a half, maybe five special education resource teachers and last year I had fewer than two.”

Yet another revealed, “There’s hundreds of kids in our neighbourhood who have never had a music teacher.” Another spoke about playgrounds, noting their board was being encouraged to seek private donations:

“That was part of the message we got the other day: ‘Look over to this school. The [foundation name] came and built their playground. Maybe y’all should try that.’ We’re being told that we should be seeking out these donations. That’s highly problematic.”




Read more:
Music also matters in the real world


These are losses of public education goods and services that not that long ago would have been available to all children.

As one of the participants noted:

“There are a number of opportunities that used to exist that no longer exist, and then parents get upset because they think, ‘Well, when I was in school, all of this was around. What happened?’ … Really, it’s about the underfunding.”

Province appointing supervisors

Recently, the Ontario government appointed supervisors to some boards, announcing that “investigations showed they each had accumulated deficits.”

In so doing, the government is asserting more control over public education and runs the risk of political partisanship (one of the appointed supervisors is a former Progressive Conservative MPP).

Journalist Wendy Leung with The Local, who has covered the significance of these appointments, reports the move also “hampers public scrutiny over what’s happening at the boards.”

Taking over boards can be seen as a distraction tactic as the government is asking them to meet growing needs with fewer resources.

Instead of increasing funding, which is necessary and long overdue, the government is likely to cut costs in the short term by privatizing services, a trajectory researchers have documented for some time. These shifts to the private sector are shortsighted attempts to balance a budget that only serve to raise the taxpayer burden over time.

People in Ontario — and across Canada — should be proud of our public education systems. They are held in high regard globally. But education requires ongoing financial investment in our children’s futures.

It took robust political will to compel governments to offer free public education to all children.

This history suggests it will take ongoing pressure from parents applied directly to the Ministry of Education, or via engagement with school councils and school boards, to demonstrate their desire for fair and sustainable public schooling and ensure governments do not shortchange education.

In this way, support for children today will be improved, and the proud inheritance of public education will be strengthened and viable for generations to come.

The Conversation

Lana Parker receives funding from the Canadian Social Sciences and Humanities Research Council. She is affiliated with the Public Education Exchange.

ref. Should back-to-school require parent fundraising? Ontario schools are woefully underfunded, and families pay the price – https://theconversation.com/should-back-to-school-require-parent-fundraising-ontario-schools-are-woefully-underfunded-and-families-pay-the-price-261036

Vaccine hesitancy: How social and technological issues converged to spawn mistrust

Source: The Conversation – Canada – By Emanuele Blasioli, PhD Candidate in Management Science, DeGroote School of Business, McMaster University

The rise in vaccine-preventable diseases around the world is threatening decades of progress in public health and putting millions of people at risk.

The decline in vaccination coverage in the United States illustrates the global problem. Rates of most of the routine vaccinations that the Advisory Committee on Immunization Practices recommends for children by age 24 months, a schedule covering 15 potentially serious illnesses, have declined.

Canada has not been spared from this phenomenon. As of July 19, there have been 4,206 measles cases (3,878 confirmed, 328 probable) in 2025 reported by 10 jurisdictions (Alberta, British Columbia, Manitoba, New Brunswick, Northwest Territories, Nova Scotia, Ontario, Prince Edward Island, Québec and Saskatchewan).

This decline in vaccine coverage is often attributed to misinformation and disinformation. As data analytics researchers, we used operations research techniques to understand why people are vaccine-hesitant. In our study, we explore how anti-vaccination sentiments and attitudes can be better understood through an integrated approach that combines social network analysis with insights into psychological reactance and the influence of eHealth literacy on health-related behaviours.

So what fuels skepticism about vaccines? It’s a complex blend of personal, social and environmental factors.

How our brains decide (and often get it wrong)

People typically use mental shortcuts, known as heuristics, to simplify complex issues.

The purpose is to minimize analytical efforts and speed up decision-making, which can sacrifice accuracy for the sake of efficiency. This results in distortions known as cognitive biases, which influence judgment and decision-making.

Vaccination decisions are influenced by these processes in the same way as any other decision.




Read more:
How cognitive biases and adverse events influence vaccine decisions (maybe even your own)


Skepticism toward vaccines has often been associated with fears related to possible side-effects. These fears are fuelled by our broad tendency to overestimate negative consequences, a mechanism known as risk-perception bias.

A recent study published in Nature Scientific Reports confirmed that vaccine-hesitant individuals are more sensitive to risk, and give undue weight to potential side-effects.

Another study, from the journal Vaccine: X, looked at cognitive biases related to vaccine hesitancy and revealed four factors significantly associated with hesitancy. These are:

  • fear of vaccine side-effects (skepticism factor),
  • carelessness about the risks of not being vaccinated (denial factor),
  • optimistic attitude, believing they are less at risk of illness (optimistic bias factor) and
  • preference for natural products (naturalness bias factor).

Existing beliefs can also significantly interfere with evaluations and decisions, since people are inclined to seek information that reinforces and confirms their convictions. Confirmation bias interferes with the rational evaluation of evidence related to vaccine safety and efficacy.

The effect of this bias becomes particularly relevant when analyzing how susceptible individuals are to misinformation — a major barrier to vaccine uptake.

The myth of rationality

Assuming human beings can make fully rational decisions is helpful for developing simulations and models, like those in game theory. Game theory is a powerful analytical tool often used in operations research to understand phenomena arising from the interaction of multiple decision-makers, allowing us to predict the possible scenarios that may unfold.

Insights from behavioural economics and cognitive psychology suggest that any assumption of rationality is often wildly optimistic.

“Bounded rationality” describes the constraints within which reason operates. Human judgments suffer from a scarcity of information, time limitations and our limited ability to analyze.

Still, even the most effective information would not be enough to convince all vaccine-hesitant individuals. In some cases, it can have the opposite effect.

Understanding psychological and attitudinal predictors of vaccine hesitancy allows us to compare their influence in different contexts. Contexts define the environmental background (or setting) in which individuals decide about the vaccine.

These comparisons show that thought patterns and attitudes that feed into vaccine hesitancy can be modified, unlike stable risk factors, including demographic factors, such as unemployment, lower education, younger age, rural residency, sex and migrant status.

Change in vaccination decision over time

Immunization behaviours evolve over time, influenced by social dynamics. Researchers have studied why voluntary vaccination programs for childhood diseases sometimes fail, showing how self-interested decision-making leads to lower vaccination rates and prevents the complete eradication of the illnesses that vaccination could otherwise control.

Assuming parents can make perfectly rational decisions, the study outlined two scenarios:

  1. The benefit of vaccination for their child, accepting there might be some risk of side-effects;

  2. The benefit of not vaccinating, knowing they can avoid side-effects, and hoping their child won’t catch the disease.

Whenever these choices seem equally good to parents, the researchers found there is a critical drop in vaccination uptake, especially for highly contagious diseases.

Another study investigated why vaccination rates swing wildly up and down over time instead of remaining steady.

The authors focused on how people copy each other’s behaviour and looked at two actual vaccine scares that happened in England and Wales, one in the 1970s with whooping cough vaccine and another in the 1990s with the measles, mumps and rubella (MMR) vaccine.

They found that parents oscillated between vaccinating and not as they followed the herd in whatever choice seemed safest, causing boom-and-bust cycles and unstable community protection from the targeted illnesses.

Such dynamics can also produce localized pockets of under-vaccination, where the unprotected benefit from herd immunity but risk becoming high-risk clusters if that protection deteriorates.

Echo chambers in social media

The COVID-19 pandemic has proven how damaging misinformation, disinformation, information voids and information gaps can be to public health, including immunization coverage and vaccine hesitancy.

The relationship between social media misinformation and vaccine hesitancy can be understood by looking at two elements: how much individuals are exposed to it and how persuasive social media is.

At the height of the COVID-19 pandemic, vaccine-skeptical content on social media had a significant negative effect and fuelled doubts about vaccine safety and effectiveness, particularly when not subjected to flagging by fact-checkers.

The impact of unflagged vaccine-skeptical content in driving vaccine hesitancy was estimated to be 46 times greater than flagged misinformation content.

Factually accurate but potentially misleading content — such as a rare instance where a young, healthy individual passed away shortly after being vaccinated — also plays a critical role in driving up vaccine hesitancy.

In our own research, we and our collaborators argue that investigating the role of social media networks can help us develop new strategies to promote and increase evidence-based vaccine literacy.

The Conversation

Elkafi Hassini received Discovery and Alliance grants funding from the Natural Sciences and Engineering Research Council of Canada that supported this research.

Emanuele Blasioli does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Vaccine hesitancy: How social and technological issues converged to spawn mistrust – https://theconversation.com/vaccine-hesitancy-how-social-and-technological-issues-converged-to-spawn-mistrust-261938

Pets don’t necessarily improve their owners’ well-being

Source: The Conversation – Canada – By Christophe Gagné, PhD candidate, Psychology, Université du Québec à Montréal (UQAM)

People often turn to pets to boost their mood and find companionship. Improving well-being and reducing loneliness are among the most cited reasons for adopting an animal companion.

But even though the belief that pets bring many benefits to their owners is widespread, research shows that having a pet is not a panacea for bolstering human psychological well-being.

Despite this, pets are often portrayed in the news and on social media as effective solutions to reduce stress and loneliness, reflecting a popular belief in their health benefits. This can lead people to adopt pets without fully considering the responsibilities and demands involved, which can have negative consequences for both themselves and their pets.

As social psychologists studying human-pet relationships, we take a more nuanced approach, examining when, how and for whom pets can — or cannot — enhance well-being.

What the research says

Many studies have found that pet owners are less anxious, lonely and stressed out compared to people who don’t have pets. Pet owners also report being more satisfied with their life.

These studies often catch our attention because they tap into something many of us believe: that our pets are good for us. This type of research offers reassurance and validates the deep bond we may feel with our animal companions. But they only tell one side of the story.

Other studies have found no significant link between pet ownership and human well-being. In other words, people with pets don’t necessarily report higher well-being, nor do they have better mental health than those without pets.




Read more:
Dogs may reflect their owners’ stress levels, finds research


Our research into pet ownership in Canada during the COVID-19 pandemic was surprising: it found that owning a pet was generally associated with lower well-being and mental health.

The study included both pet owners and those without pets, aiming to compare the two groups on various well-being indicators during the pandemic. Pet owners reported lower well-being than non-pet owners during that time, including higher levels of loneliness.

These inconsistencies across different studies show that the connection between having a pet and feeling good isn’t so straightforward. Our study indicated some of these complexities. For example, compared to owners of other pets, dog owners reported higher well-being.

To make sense of these mixed findings, researchers have started to look more closely at the nature of the relationship between owners and their pets. This approach may help us better understand the factors that influence whether pet ownership is beneficial for our well-being.

The quality of the connection

Just like our relationships with people, our bond with pets is complex. Many aspects of this connection can influence how much we benefit from it. It’s not just having a pet that counts, but how we bond and interact with them.

For example, owners who experience anxiety about being away from their pets or question their pet’s affection — reflecting an insecure attachment to a pet — also report feeling more depressed. Perceiving our pets as less understanding or more insensitive to our needs is also associated with higher levels of depression, anxiety and loneliness.

In contrast, the more people feel that they share characteristics with their pets (for example, loyalty, a mutual love of sleeping), the more likely they are to report higher well-being. Pets are also perceived as living in the present, not dwelling on the past or worrying about the future. Interacting with our animal companions mindfully can help us focus on the present moment as well, which also promotes greater well-being.




Read more:
Do people really resemble their dogs?


By nurturing the positive aspects of our relationships with pets and working through the more difficult ones, we may ease the stress associated with some of the challenges of caring for them, including the financial resources required or the anxiety we feel when they get sick.

Some challenges of pet ownership

In fact, pet ownership comes with responsibilities and challenges that don’t seem to be discussed as often as the benefits. These more difficult aspects of caring for a pet can sometimes be emotionally distressing and negatively impact a pet owner’s psychological well-being.

Having pets, no matter how much we love them, requires time, energy and financial resources. For some, especially during the COVID-19 pandemic, this responsibility may represent an additional source of stress. In our study, pet ownership was linked to lower well-being among women and among those with two or more children at home — groups already facing increased child-care and household demands.

Similarly, pet ownership was associated with lower well-being for people who were unemployed or in less stable forms of employment (for example, students, homemakers). Limited financial resources may have made pet care more challenging.

Likewise, having to care for a sick animal can be emotionally distressing for the owners. Caregivers of chronically sick dogs report feeling hopeless and powerless, especially when they cannot help to alleviate their dogs’ suffering.

Other factors, such as the pet’s behavioural problems and the grief experienced after losing a pet, can also be difficult for owners. For those contemplating adoption, it’s important to take these realities into account to make an informed decision.

Meeting our pet’s needs

There are many important factors to consider when welcoming a new animal companion into our homes. Above all, we need to ensure we have the time, energy and resources to meet their needs.

Choosing a pet carefully, based on what we realistically can offer and on reliable information about their characteristics and needs, gives us the best chance of having a positive and successful relationship.




Read more:
For better or worse, your dog’s behaviours can impact your quality of life


Supporting our pets’ needs can also improve our own well-being as owners, showing the potential for mutually beneficial interspecies relationships. But when those needs are not met, both pets and their owners can end up feeling stressed and unwell.

When considering adopting a pet, it’s important to ask: why do we want a pet? If the idea is to improve psychological well-being, our research suggests we might need to think again.

The Conversation

Catherine Amiot is a member of the emerging Board of Directors of the PHAIR Society, an academic society that seeks to promote research on human-animal intergroup relations. She has received funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and from the Fond de recherche du Québec – Santé (FRQS) for the research conducted in her laboratory which is presented in this article.

Christophe Gagné does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Pets don’t necessarily improve their owners’ well-being – https://theconversation.com/pets-dont-necessarily-improve-their-owners-well-being-259973

Survivors’ voices 80 years after Hiroshima and Nagasaki sound a warning and a call to action

Source: The Conversation – USA (2) – By Masako Toki, Senior Education Project Manager and Research Associate, Nonproliferation Education Program, Middlebury

Supporters of nuclear disarmament, including Hibakusha, demonstrate in Oslo, Norway, in 2024. Hideo Asano, CC BY-ND

Eighty years ago, in August 1945, the cities of Hiroshima and Nagasaki were incinerated by the first and only use of nuclear weapons in war. By the end of that year, approximately 140,000 people had died in Hiroshima and 74,000 in Nagasaki.

Those who survived – known as Hibakusha – have carried their suffering as living testimony to the catastrophic humanitarian consequences of nuclear war, with one key wish: that no one else will suffer as they have.

Now, in 2025, as the world marks 80 years of remembrance since those bombings, the voices of the Hibakusha offer not only memory, but also moral clarity in an age of growing peril.

As someone who focuses on nuclear disarmament and has heard Hibakusha testimonies in my native Japanese language, I have been enthusiastically promoting disarmament education grounded in their voices and experience. I believe their message is more vital than ever at a time of rising nuclear risk. Nuclear threats have reemerged in global discourse, breaking long-standing taboos against even talking about their use. From Russia and Europe to the Middle East and East Asia, the possibility of nuclear escalation is no longer unthinkable.

Amid a landscape of rubble, a partially destroyed building stands, with the skeleton of a metal dome atop a tower.
The Hiroshima Prefectural Industrial Promotion Hall was one of the few buildings not totally demolished in the Aug. 6, 1945, U.S. atomic bombing of Hiroshima.
Universal History Archive/Universal Images Group via Getty Images

Japan’s deepening reliance on deterrence

Ironically, increasing nuclear threats are contributing to further reliance on nuclear deterrence, the strategy of preventing attack by threatening nuclear retaliation, rather than renewed efforts toward nuclear disarmament, which seeks to eliminate nuclear weapons entirely.

Nowhere is this contradiction more visible than in Japan. While the Hibakusha have long stood as global advocates for nuclear abolition, Japan’s approach to national security has placed growing emphasis on the role of nuclear deterrence.

In the face of regional threats, the Japanese government has strengthened its dependence on U.S. nuclear protection – even as the survivors of Hiroshima and Nagasaki warn not only of the dangers of relying on nuclear weapons for security, but also of the profound moral failure such reliance represents.

Masako Wada, a survivor of the 1945 atomic bomb attack on Nagasaki, speaks about the risk of nuclear weapons in the 21st century.

Listen to Hibakusha voices

For eight decades, the Hibakusha have shared their stories to prevent future tragedy – not to assign blame, but to awaken conscience and spark action.

Masako Wada, assistant secretary general of Nihon Hidankyo, a nationwide organization of atomic bomb survivors working for the abolition of nuclear weapons, was just under 2 years old when the atomic bomb was dropped on Nagasaki. Her home, 1.8 miles from the blast center, was shielded by surrounding mountains, sparing her from burns or injury. Though too young to remember the bombing herself, she grew up hearing about it from her mother and grandfather, who witnessed the devastation firsthand.

In July 2025 at a nuclear risk reduction conference in Chicago, Wada told the attendees:

“The risk of using nuclear weapons has never been higher than it is now. … Nuclear deterrence, which intimidates other countries by possessing nuclear weapons, cannot save humanity.”

In a piece she wrote for Arms Control Today that same month, she further implored:

“The Hibakusha are the ones who know the humanitarian consequences of the use of nuclear weapons. We will continue to convey that reality. Please listen to us, please empathize with us. Find out what you can do and take action together with us. Nuclear weapons cannot coexist with human beings. They were created by humans; let us assume the responsibility to abolish them with the wisdom of public conscience.”

This plea – rooted in lived experience and moral responsibility – was recognized globally when the 2024 Nobel Peace Prize was awarded to Nihon Hidankyo. The award honored not only the survivors’ suffering, but their decades-long commitment to preventing future use of nuclear weapons through education, activism and testimony.

A concrete building with no windows and a metal skeleton of a dome atop a tower stand against a blue sky.
The Hiroshima Peace Memorial stands as it has since 1945, partially destroyed by the atomic bomb blast and serving as a reminder of the 140,000 people who died in the attack and its aftermath.
Masako Toki, CC BY-ND

A dwindling number

But time is running out. Most Hibakusha were children or young adults in 1945. Today, their average age is over 86. In March 2025, the number of officially recognized Hibakusha fell below 100,000, according to Japan’s Ministry of Health.

As Terumi Tanaka, a Nagasaki survivor and longtime leader of Nihon Hidankyo, said at the Nobel Peace Prize ceremony:

“Ten years from now, there may only be a handful of us able to give testimony as firsthand survivors. From now on, I hope that the next generation will find ways to build on our efforts and develop the movement even further.”

Terumi Tanaka, a survivor of the 1945 atomic bomb attack on Nagasaki, delivers the 2024 Nobel Peace Prize lecture.

The role of empathy in disarmament education

Empathy is not a luxury in disarmament education – it is a necessity. Without it, nuclear weapons remain abstract. With it, they become personal, real and morally unacceptable.

That’s why disarmament education begins with human stories. The Hibakusha testimonies illuminate not only the physical destruction caused by nuclear weapons, but also the long-term trauma, discrimination and intergenerational pain that follow. They remind us that nuclear policy is not just a matter of strategy – it is a question of human survival. Nuclear weapons are the only weapons ever created with the power to annihilate all of humanity – and that makes disarmament not just a political issue, but a moral imperative.

Yet opportunities for young people to learn about nuclear risks, or hear from the Hibakusha directly, are extremely limited. In most countries, these issues are absent from school and university classrooms. This lack of education feeds ignorance and, in turn, complacency – allowing the flawed logic of deterrence to remain unchallenged.

Disarmament education that puts empathy and ethics at its center, along with survivors’ voices, can empower the next generation not only with knowledge, but with moral strength to choose their path.

A person stands at a lectern in front of a screen with photos and text reading 'As long as I could see, all the roof tiles had been blown to one side. The green of the mountain that surround the city was gone. They were brown mountains now.'
Masako Wada, a Hibakusha who survived the U.S. bombing of Nagasaki in August 1945, speaks at a church in California in 2019, spreading the message of the horror of the attack and its aftermath, and urging people to promote nuclear disarmament.
Masako Toki, CC BY-ND

From remembrance to responsibility

Commemorating 80 years since the atomic bombings of Hiroshima and Nagasaki is not about history alone. It is about the future. It is about what people choose to remember – and what people choose to do with that memory.

The Hibakusha have never sought revenge. Their message is clear: This can happen again. But it doesn’t have to.

The Hibakusha’s journey shows that human beings are not destined to remain divided, nor are they doomed to repeat cycles of destruction. In the face of unimaginable loss, many Hibakusha chose not to dwell on anger or seek retribution, but instead to speak out for the good of all humanity. Their activism has been marked not by bitterness, but by an unwavering commitment to peace, empathy and the prevention of future suffering. Rather than directing their pain toward blame, they have transformed it into a powerful appeal to conscience and global solidarity. Their concern has never been only for Japan – but for the future of the entire human race.

That moral clarity, grounded in lived experience, remains profoundly instructive. In a world increasingly filled with conflict and fear, I believe there is much to learn from the Hibakusha. Their testimony is not just a warning – it is a guide.

I try to listen, and urge others, as well, to truly listen to what they have to say. I seek the company of people who also refuse complacency, question the legitimacy of nuclear deterrence, and work for a future where human dignity, not mutual destruction, defines human security.

The Conversation

Masako Toki does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Survivors’ voices 80 years after Hiroshima and Nagasaki sound a warning and a call to action – https://theconversation.com/survivors-voices-80-years-after-hiroshima-and-nagasaki-sound-a-warning-and-a-call-to-action-262174

South African learners struggle with reading comprehension: study reveals a gap between policy and classroom practice

Source: The Conversation – Africa – By Tracy Kitchen, Lecturer: Student Academic Development, Rhodes University

Photo by Aaron Burden on Unsplash

South African learners consistently struggle with reading comprehension, performing poorly in both international and local assessments. A significant issue is that 81% of grade 4 learners (aged 9 or 10) are unable to read for meaning: they can decode words, but do not necessarily understand them.

While this problem has received considerable attention, no clear explanation has emerged.

In my recent PhD thesis, I considered a crucial, but often overlooked, piece of the puzzle – the curriculum policy. My research sought to uncover and understand the gaps and contradictions in reading comprehension, especially between policy and practice, in a grade 4 classroom.

This research revealed a difference between curriculum policy and practice, and between what learners seemed to have understood and what they actually understood in a routine reading comprehension task.

My main findings were that:

  • grade 4 learners were being asked overly simple, literal questions about what they were reading, despite the text being more complex than expected

  • the kinds of questions that learners should be asked (as indicated in the curriculum policy) were different from what they were being asked

  • this gap led to learners seeming to be more successful at reading comprehension than they actually were.

Pinpointing the gaps between what the policy says and how reading comprehension is actually taught at this crucial stage of development (grade 4) could pave the way for more effective interventions.

Curriculum policy

South African teachers are expected to base their reading comprehension instruction and assessment on the guidelines provided by the 2012 Curriculum and Assessment Policy Statement.

The policy outlines specific cognitive skill levels – essentially, ways of thinking and understanding – that learners should master for each reading task. These levels are drawn from Barrett’s Taxonomy of Reading Comprehension, an internationally used framework. It builds on the well-known Bloom’s Taxonomy of educational objectives, which categorises thinking according to varying skill levels.

According to Barrett’s Taxonomy, reading comprehension involves five progressively complex levels:

  1. Literal comprehension: Identifying meaning that is directly stated in the text. (For example, “Name the animals in the story”.)

  2. Reorganisation: Organising, paraphrasing, or classifying information that is explicitly stated. (“Find four verbs in the story to describe what the animals did.”)

  3. Inference: Understanding meaning that is not directly stated, but implied. (“When in the story is the leopard being selfish?”)

  4. Evaluation: Making judgements about the text’s content or quality. (“Who do you think this story is usually told to?”)

  5. Appreciation: Making emotional or personal evaluations about the text. (“How well was the author able to get the message across?”)

Typically, reading comprehension tasks will assess a range of these cognitive skills.

South Africa’s Curriculum and Assessment Policy Statement document specifies (on pages 91-92) that all reading comprehension tasks should comprise questions that are:

  • 40% literal/reorganisation (lower-order thinking skills)

  • 40% inferential (middle-order)

  • 20% evaluation and appreciation (higher-order).

This approach aims to allow most students to demonstrate a basic understanding of the text, while challenging more advanced learners.

However, as my classroom case study shows, the system appears to be failing. There was a mismatch between the policy and what was taking place in the classroom.

Classroom practice

For this research, I observed the reading comprehension practices in a single classroom in a public school in the Eastern Cape province. This took place over six months, at a time when schools were not fully reopened during the COVID-19 pandemic.

The task in question included a text and activity selected by the teacher from a textbook aligned with the policy. My analysis (which used Appraisal, a linguistic framework that tracks evaluative meaning) showed that most of the text’s meaning was implicit. To fully understand it, learners would need higher-order thinking and sophisticated English first-language skills. This was a surprising finding for a grade 4 resource, especially because most learners in this study were not English first-language speakers.

Even more surprising, learners achieved seemingly high marks on comprehension, with an average of 82.9%. This suggested they understood this complex text.

However, I found that the questions in the textbook did not align with policy. Instead of the balance of skills required by the policy, 73% of the questions called only for lower-order skills. Only 20% were inferential and a mere 7% required evaluation or appreciation (middle- to higher-order skills).

At least six of the 15 available marks could be gained simply by listing explicitly stated items, not requiring genuine comprehension.
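The gap between the policy’s required mix and the mix observed in the study can be expressed as a simple tally. The sketch below is illustrative: the level labels are hypothetical stand-ins, not the study’s actual coding scheme.

```python
# Hypothetical check of a comprehension task's question mix against the
# 40/40/20 split the Curriculum and Assessment Policy Statement requires.

POLICY = {"lower": 0.40, "inferential": 0.40, "higher": 0.20}

def question_mix(labels):
    """Fraction of questions at each cognitive level, from a list of labels."""
    n = len(labels)
    return {level: labels.count(level) / n for level in POLICY}

def deviates_from_policy(mix, tolerance=0.10):
    """True if any level's share strays more than `tolerance` from policy."""
    return any(abs(mix[k] - POLICY[k]) > tolerance for k in POLICY)

# The textbook task in the study: roughly 73% lower-order, 20% inferential
# and 7% evaluation/appreciation questions (11, 3 and 1 out of 15 marks).
observed = question_mix(["lower"] * 11 + ["inferential"] * 3 + ["higher"] * 1)
```

A check like `deviates_from_policy(observed)` flags this task, while a 4/4/2 split over ten questions would pass – the kind of audit a textbook reviewer could automate when vetting materials against the policy.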

This reveals that, in this classroom, activities labelled as policy-compliant actually tested only lower-order comprehension. Learners could pass simply by identifying and listing information from the text. This creates a false sense of comprehension success, as revealed by the high marks.

When learners were tested on the same text but using different questions that I designed to align with the policy requirements, they scored lower marks, especially for the higher-order questions.

This mismatch might partly explain why South African learners score poorly in international tests (which require more higher-order thinking).

Why this matters and moving forward

These findings are concerning, as learners may be lulled into believing that they are successful readers. A false sense of accomplishment could have significant impacts on the rest of their education.

Comprehension difficulties can’t be blamed solely on the disconnect between policy and practice, however. Many other contextual factors shape how learners perform in reading comprehension tasks.

In my study, factors like the COVID-19 disruption, insufficient home-language teaching policies, educational inequalities, and the pressures teachers faced during the crisis all contributed to the literacy crisis.




Read more:
South Africa’s reading crisis: focus on the root cause, not the peripherals


However, two key points became clear during this study.

Firstly, teaching materials favour lower-order comprehension skills, skewing perceptions of learners’ abilities.

Secondly, teachers may lack the knowledge, resources or motivation to adjust these materials to truly align with the national policy in how reading comprehension is assessed.

This calls for urgent intervention in how reading comprehension is taught and assessed and in how teachers are prepared to do this effectively.

The Conversation

This research was partially funded by the National Research Foundation (NRF).

ref. South African learners struggle with reading comprehension: study reveals a gap between policy and classroom practice – https://theconversation.com/south-african-learners-struggle-with-reading-comprehension-study-reveals-a-gap-between-policy-and-classroom-practice-260033

The global health system can build back better after US aid cuts – here’s how

Source: The Conversation – Africa (2) – By Jonathan E. Cohen, Professor of Clinical Population and Public Health Sciences, Keck School of Medicine and Director of Policy Engagement, Institute on Inequalities in Global Health, University of Southern California, University of Southern California

Steep cuts in US government funding have thrown much of the field of global health into a state of fear and uncertainty. Once a crown jewel of US foreign policy, valued at some US$12 billion a year, global health has been relegated to a corner of a restructured State Department governed by an “America First” agenda.

Whatever emerges from the current crisis, it will look very different from the past.

As someone who has spent a 25-year career in global health and human rights and now teaches the subject to graduate students in California, I am often asked whether young people can hope for a future in the field. My answer is a resounding yes.

More than ever, we need the dedication, humility and vision of the next generation to reinvent the field of global health, so that it is never again so vulnerable to the political fortunes of a single country. And more than ever, I am hopeful this will be the case.

To understand the source of my hope, it is important to recall what brought US engagement in global health to its current precipice – and how a historic response to specific diseases paradoxically left African health systems vulnerable.

Disease and dependency

Over two decades ago, the field of global health as we currently know it emerged out of the global response to HIV/Aids – among the deadliest pandemics in human history. The pandemic principally affected people of reproductive age and babies born to HIV-positive parents.

The creation of the US President’s Emergency Plan for Aids Relief (Pepfar) in 2003 was at the time the largest-ever bilateral programme to combat a single disease. It redefined the field of global health for decades to come, with the US at its centre. While both the donors and issues in the field would multiply over the years, global health would never relinquish its origins in American leadership against HIV/Aids.

Pepfar placed African nations in a state of extreme dependence on the US. We are now witnessing the results – not for the first time. The global financial crisis of 2008 reduced development assistance to health, which generated new thinking about financing and domestic resource mobilisation.

Yet, the US continued to underwrite Africa’s disease responses through large contracts to American universities and implementers. This was for good reason, given the urgency of the problem, the growing strength of Africa’s health systems as a result of Pepfar, and the moral duty of the world’s richest country.

With the rise of right-wing populism and the polarising effects of COVID-19, global health would come to be seen by many Americans as an elite enterprise. The apparent trade-off between public health countermeasures and economic life during COVID-19 – a false choice to experts who know a healthy workforce to be a precondition for a strong economy – alienated many voters from the advice of disease prevention experts. The imperative to “vaccinate the world” and play a leadership role in global health security lacked a strong domestic constituency. It proved no match for the monopolistic priorities of the pharmaceutical industry and the insularity and economic anxieties of millions of Americans.




Read more:
How Trump’s proposed US aid cuts will affect healthcare in Africa


This history set the stage for the sudden abdication of US global health leadership in early 2025. By the time the Department of Government Efficiency came for USAID, many viewed global health as a relic of the early response to HIV/Aids, an excuse for other governments to spend less on health, or an industry of elites. The field was an easy target, and the White House must have known it.

Yet therein lies the hope. If global health came of age around a single disease, an exercise of US soft power, and a cadre of elite experts, it now has an opportunity to change itself from the ground up. What can emerge is a new global health compact, in which African governments design robust health systems for themselves and enlist the international community to assist from behind.

Opportunity to build back better

To build a new global health compact for Africa, the first change must be from a focus on combating individual diseases to ensuring that all people have the opportunity for health and well-being throughout their lives. Rather than allowing entire health systems to be defined by the response to HIV/Aids, tuberculosis and malaria, Africa needs integrated systems that promote:

  • primary care, which brings services for the majority of health needs closer to communities

  • health promotion, which enables people to take control of all aspects of their health and well-being

  • long-term care, which helps all people function and maintain quality of life over their entire lifespan.

No global trend compels this shift more than population ageing, which will soon engulf every nation as a result of lengthening life expectancy and declining fertility. As the proportion of older adults grows to outstrip that of children, societies need systems of integrated healthcare that help people manage multiple diseases. They don’t need fragmented programmes that produce conflicting medical advice, dangerous drug interactions, and crippling bureaucracy. Time is running out to make this fundamental shift.

Second, there is a need to shift the relationship between low-income and high-income nations towards shared investment in the service of local needs. This is beginning to happen in some places, and it will require greater sacrifices on all sides.

Low-income governments need to spend a higher percentage of their GDP on healthcare. That will in turn require addressing the many factors that stymie the redistribution of wealth, from corruption to debt to a lack of progressive taxation. The US and other high-income countries need to pay their fair share, while also sharing decision-making over how global public goods, from vaccines to disease surveillance to health workers, are allocated and distributed in an interconnected world.

Read more:
Africa relies too heavily on foreign aid for health – 4 ways to fix this


Third, there is need to change the narrative of global health in wealthy countries such as the US to better connect to the concerns of voters who are hostile to globalism itself. This means addressing people’s real fears that public health measures will cost them their job, force them to close their business, or advance a pharmaceutical industry agenda. It means justifying global health in terms that people can relate to and agree with – that is, helping to save lives without taking responsibility for other countries’ health systems.

It means forging unlikely alliances between those who believe in leadership from the so-called global south and those who take an insular view of America’s role in the world.

Leading from behind

Make no mistake. I am not counting on this – or any – US administration to reinvent global health on terms that are more responsive to current disease trends, more equitable between nations, and more relevant to American voters.

But nor would I want them to. To create the global health of the future, leadership must come not from the US alone, but from a shared commitment among the community of nations to give and receive according to their capacities and needs. And that is something to hope for.

The Conversation

Jonathan E. Cohen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The global health system can build back better after US aid cuts – here’s how – https://theconversation.com/the-global-health-system-can-build-back-better-after-us-aid-cuts-heres-how-259798

National parks are key conservation areas for wildlife and natural resources

Source: The Conversation – USA (2) – By Sarah Diaz, Associate Professor of Recreation and Sport Management, Coastal Carolina University

A researcher collects water samples in Everglades National Park in Florida to monitor ecosystem health. AP Photo/Rebecca Blackwell

The United States’ national parks have an inherent contradiction. The federal law that created the National Park Service says the agency – and the parks – must “conserve the scenery and the natural and historic objects and the wildlife … unimpaired for the enjoyment of future generations.”

That means both protecting fragile wild places and making sure people can visit them. Much of the public focus on the parks is about recreation and enjoyment, but the parks are extremely important places for research and conservation efforts.

These places contain a wide range of sensitive and striking environments: volcanoes, glaciers, sand dunes, marshlands, ocean ecosystems, forests and deserts. And these areas face a broad variety of conservation challenges, including the effects of climate change, the perils of popularity driving crowds to some places, and the Trump administration’s reductions to park service staff and funding.

As scholars of recreation who study the national parks and teach a course on them, we have seen the park service make the parks far more than recreational destinations. They are living laboratories where researchers – park service personnel and others – study nature across wide-ranging ecosystems and apply what they learn to inform public and private conservation efforts around the country.

A group of wolves on a snowy landscape.
Gray wolves, long native to the Yellowstone area, were reintroduced to the national park in the mid-1990s and have helped the entire ecosystem flourish since.
National Park Service via AP

Returning wolves to Yellowstone

One of the best known outcomes of conservation research in park service history is still playing out in the nation’s first national park, Yellowstone.

Gray wolves once roamed the forests and mountains, but government-sanctioned eradication campaigns to protect livestock, carried out in the late 1800s and early 1900s, drove them to near extinction in the lower 48 states by the mid-20th century. In 1974, the federal government declared that gray wolves needed the protections of the Endangered Species Act.

Research in the park found that the ecosystem required wolves as apex predators to maintain a healthy balance in nature.

In the mid-1990s, an effort began to reintroduce gray wolves to Yellowstone National Park. The project brought 41 wolves from Canada to the park. The wolves reproduced and became the basis of a Yellowstone-based population that has numbered as many as 120 and in December 2024 was estimated at 108.

The return of wolves has not only drawn visitors hoping to see these beautiful and powerful predators, but has also triggered what scholars call a “trophic cascade”: the wolves reduce elk numbers, which in turn has allowed willow and aspen trees to survive to maturity and restore dense groves of vegetation across the park.

Increased vegetation in turn led to beaver population increases as well as ecosystem changes brought by their water management and engineering skills. Songbirds also came back, now that they could find shade and shelter in trees near water and food sources.

A bear climbs a tree.
Since the establishment of Great Smoky Mountains National Park in 1934, black bear populations have rebounded in the park.
Great Smoky Mountains National Park via AP

Black bear protection in the Great Smoky Mountains

Great Smoky Mountains National Park is the most biologically diverse park in the country, with over 19,000 species documented and another 80,000 to 100,000 species believed to be present. However, the forests of the Appalachian Mountains were nearly completely clear-cut in the late 1800s and early 20th century, during the early era of the logging industry in the region.

Because their habitat was destroyed, and because they were hunted, black bears were nearly eradicated. By 1934, when Great Smoky Mountains National Park was designated, there were only an estimated 100 bears left in the region. Under the park’s protection, the population rebounded to an estimated 1,900 bears in and around the park in 2025.

Much like the gray wolves in Yellowstone, bears are essential to the health of this ecosystem by preying on other animals, scavenging carcasses and dispersing seeds.

Water preservation in the Everglades

The Everglades are a vast subtropical ecosystem located in southern Florida. They provide drinking water and irrigation to millions of people across the state, help control storm flooding and are home to dozens of federally threatened and endangered species such as the Florida panther and American alligator.

When Everglades National Park was created in 1947, it was the first time a U.S. national park had been established to protect a natural resource for more than just its scenic value.

As agriculture and surrounding urban development continue to pollute this natural resource, park professionals and partner organizations have focused on habitat restoration, for the benefit of both the wildlife and humans’ water quality.

A large tawny cat springs across an area of gravel and grass.
A Florida panther, rescued as a kitten, is released into the wild in the Everglades in 2013.
AP Photo/J Pat Carter

Inspiring future generations

To us, perhaps the most important work in the national parks involves young people. Research shows that visiting, exploring and understanding the parks and their ecosystems can foster deep connections with natural spaces and encourage younger generations to take up the mantle of stewardship of the parks and the environment as a whole.

With their help, the parks – and the landscapes, resources and beauty they protect – can be preserved for the benefit of nature and humans, in the parks and far beyond their boundaries.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. National parks are key conservation areas for wildlife and natural resources – https://theconversation.com/national-parks-are-key-conservation-areas-for-wildlife-and-natural-resources-261644