Anthology 4 shows there’s still more to discover about The Beatles

Source: The Conversation – UK – By Glenn Fosbraey, Associate Dean of Humanities and Social Sciences, University of Winchester

A lot can happen in three decades. Since 1995, we’ve seen nine different UK prime ministers, the birth and death of the Minidisc, iPod and DVD. Manchester City sank to the third tier of English football then rose to become champions of Europe. One thing that hasn’t wavered, though, is the popularity of The Beatles.

On November 21, The Beatles’ Anthology 4 was released to an eager worldwide audience, 30 years after the first instalment in the series, Anthology 1, and 55 years after the band split.

Released in November 1995, Anthology 1 was initially met with bemusement by reviewers. Some dismissed its contents as “scrappy old demo tapes, TV recordings, and studio outtakes” which were “of scant interest to anyone but obsessives”. Perhaps there were simply a lot more “obsessives” than critics thought – the public bought the album in droves. Anthology 1 topped charts all over the world with the highest first-week sales ever recorded.

Anthologies 2 and 3 followed in March 1996 and October 1996, respectively. Although they didn’t quite reach the commercial heights of Anthology 1, they still sold in their millions. Their releases also coincided with the peak of Britpop, which came not so much to bury the Fab Four’s legacy as to raise it to new heights, with figureheads Noel and Liam Gallagher of Oasis regularly espousing their idolatry for the band.

Trailer for The Beatles Anthology on Disney+.

The Anthology trilogy may not have been the first albums of outtakes and demos (that honour goes to The Who and their 1974 Odds and Sods collection), but they did break new ground in showing how a retrospective of a band’s career can move beyond a compilation of previously released tracks.

The Anthologies told the story of The Beatles, tracking their development from amateur cover artists to bona fide musical pioneers. They showed listeners how their favourite songs were constructed, morphing from, in the case of Strawberry Fields Forever, a home recording, through a series of experimental studio versions, to the finished product.

Most importantly, though, the albums offered intimate access to private spaces. It felt as if we were in Studio 2 with the band, listening to them chatting, playing around, trying things out, then, finally, creating some of the greatest songs ever committed to tape.

Anthology 4

As with all the previous instalments, Anthology 4 shows how the personalities of John Lennon, Paul McCartney, George Harrison and Ringo Starr were so key to their appeal. Their famous sense of humour and joie de vivre can be heard throughout. On Baby You’re A Rich Man (Takes 11 and 12), following Lennon’s request for bottles of Coke from roadie Mal Evans, McCartney jokingly asks for some cannabis resin before wryly remarking “that’s recorded evidence for the high court tomorrow”.

Harrison laughs at his inability to “do a Smokey [Robinson]” on While My Guitar Gently Weeps (Third Version – Take 27); and Lennon seems to be having the time of his life singing All You Need is Love (Rehearsal for BBC Broadcast). Their humility shines through, too.

On Julia (Two Rehearsals), for example, we hear Lennon speaking with producer George Martin about his struggles with playing and singing the song. Here is one of the most celebrated artists of all time, unsure whether he’s good enough. The recording took place only a matter of months after the release of Sgt. Pepper’s Lonely Hearts Club Band, an album considered to have changed not only music, but pop culture at large. And when Starr bashfully asks whether anyone “has heard the Octopus one” before giving Octopus’s Garden (Rehearsal) an airing, we genuinely feel his anxiety.

Another extraordinary element of this collection (and the previous three) is the Beatles’ shift from just seeming like a group of lads larking about to a group of musicians creating masterpieces, then back again. It happens so quickly and so naturally that it’s almost disorientating.

More than in any of the other Anthologies, the significance of Martin’s contribution here is printed in bold, then underlined, twice, in red ink. If anyone ever deserved the accolade of “fifth Beatle” it was he, with his skills as an arranger and composer gloriously evident on I Am the Walrus (Take 19 – Strings, Brass, Clarinet Overdub), Strawberry Fields Forever (Take 26), and Something (Take 39 – Strings Only Instrumental).

Sadly, it looks like the well of treasures may have finally run dry. The collection includes several tracks Beatles devotees will have already hoovered up via Abbey Road Super Deluxe, The Beatles (White Album) 50th Anniversary Edition, and Let It Be Super Deluxe. But, when it comes to The Beatles, enough is never enough. As well as the album, there is also an extended version of the 1990s docuseries Anthology airing on Disney+ on November 26th, and a 25th Anniversary edition of the book (also titled Anthology).

Anthology 4 already has something in common with its mid-90s ancestors courtesy of some less-than-charitable press, but whether it will mirror their success remains to be seen. What is for sure, though, is that The Beatles’ commercial juggernaut, well into its seventh decade now, shows no signs of slowing down.




The Conversation

Glenn Fosbraey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Anthology 4 shows there’s still more to discover about The Beatles – https://theconversation.com/anthology-4-shows-theres-still-more-to-discover-about-the-beatles-270486

Encouraging young people to vote requires understanding why they don’t

Source: The Conversation – Canada – By Christopher Alcantara, Professor of Political Science, Western University

Around the world, political institutions are under threat and democracy hangs in the balance. Deepening political divisions, political apathy and the rise of opportunistic populist leaders have all contributed to widespread democratic backsliding and a rise in authoritarianism.

Meeting this challenge requires active and engaged citizens. In Canada, there’s a strong sense that civic engagement is on the decline, especially among young people. Recent research commissioned by the Max Bell Foundation — a charity that works to improve educational, health and environmental outcomes for Canadians — suggests that the real story may be more complex.

Our research on political engagement has found that while today’s young Canadians are participating less in conventional political activities, they are increasingly active in other less traditional ways. How do we encourage youth to engage in all forms of civic life?

Ballots versus boycotts

Our analyses of Elections Canada voting data and survey data, collected through the Canada Election Study and Democracy Checkup projects, clearly illustrate that young people differ from older Canadians in how they participate in civic and political life.

Canadians between the ages of 18 and 34 are less likely to vote than those in other age cohorts and had the steepest decline in turnout from 2015 to 2021, when fewer than half of eligible young Canadians voted.

Young people are generally less knowledgeable and politically informed than older adults.

Voter turnout in recent Canadian federal elections, by age. (Elections Canada)

At the same time, young Canadians are at the forefront of discussing politics online, following politicians on social media and mobilizing their peers through digital platforms. They are more likely to take part in protests, petitions and political consumerism — from boycotts to buycotts — and to volunteer with community organizations and political campaigns at higher rates than other age groups.

Levels of political participation, by age. (Democracy Checkup surveys)

The real story isn’t that youth don’t care or aren’t political. It’s that they are turning away from conventional, formal participation in favour of alternative ways of sharing and expressing their views.

Explaining changing participation norms

Our analysis suggests that younger Canadians differ from their older counterparts across key factors that shape whether and how they participate.

Many young Canadians cite a lack of time as a barrier to engagement, have lower levels of political knowledge, report slightly lower levels of interest in politics and struggle to make the connection between politics and the issues they care about.

Our work also suggests that youth are noticeably less likely to see civic participation like voting as a duty, and they’re much more likely to be influenced by whether they believe their participation will make a difference.

This presents a particular challenge, because youth also tend to express higher levels of skepticism that their participation matters.

One final surprising finding is that more attention may need to be paid to understanding how political polarization affects youth. Young people may be increasingly put off from politics by hostility and conflict that they want to avoid.

Civic engagement beyond election day

Youth don’t seem to be tuning out but are instead finding different ways to engage. Nonetheless, declining interest in political engagement through formal institutions represents a real concern for democracy in Canada.

So how can we build on the areas where youth are already engaging, and bring young people back into conventional forms of civic engagement like voting?

Our conversations with civil society organizations suggest that civic engagement starts with effective civic education programs in schools, while highlighting the challenges these programs face — from educator training to curriculum design and sustainability over time.

They also highlight the challenge of reaching older youth, especially those referred to as NEET by statisticians — not in employment, education or training. Our interviewees shared their own successful strategies and emphasized the importance of reaching out to youth where they are and through the media and platforms that they prefer.

What comes next?

Democracies around the world are under pressure, and Canada is no exception. In this moment, it’s more critical than ever to pay attention to youth civic engagement.

Investing in civic education and encouraging civic participation early in life helps ensure young people have a voice in politics. But perhaps more importantly, it can also demonstrate how civic engagement can lead to change and challenge the feelings of powerlessness that drive disengagement. Youth participation helps build habits that last a lifetime and is essential to sustaining democracy for generations to come.

The future of Canadian democracy is in the hands of our youth. They must be equipped with the knowledge and the skills to shape it for the better.

The Conversation

Christopher Alcantara receives funding from Max Bell Foundation for this research report.

Craig Mutter receives funding from Max Bell Foundation for this research.

Laura Stephenson receives funding from Max Bell Foundation for this research.

ref. Encouraging young people to vote requires understanding why they don’t – https://theconversation.com/encouraging-young-people-to-vote-requires-understanding-why-they-dont-270015

Online harassment is silencing Canada’s health experts — institutions need to do more to protect them

Source: The Conversation – Canada – By Heidi J. S. Tworek, Professor of History and Public Policy, University of British Columbia

Canada has lost the measles elimination status it had held since 1998. One crucial factor in regaining that status is hearing from researchers who speak publicly about vaccine safety.

Canada can’t afford to lose expert voices at a moment when the threat of vaccine-preventable diseases is rising. Yet our work suggests that online harassment is a growing deterrent that is driving researchers and scientists out of the conversations needed at this time.

Harassment is a long-standing problem in academia. While it occurs within different institutions and disciplines, it has increasingly taken the form of online attacks from people outside of academia. It’s a phenomenon that accelerated during the COVID-19 pandemic, and one where health experts are left to cope alone.

Canadian institutions and research organizations need to create broad support for these individuals.

The harms of online harassment

Our recent study on prominent Canadian health communicators — including university researchers and public health officials — found that 94 per cent, or 33 of 35 interviewees, had faced online abuse during the COVID-19 pandemic.

Online harassment goes beyond vaccines and COVID-19; climate change, gender diversity, immigration and other topics have all triggered a backlash. But we found that vaccination was one of the topics most likely to trigger abuse, and it’s an issue that has become increasingly politicized in Canada.

As with many issues in Canada, developments in the United States have played a major role. For instance, a 2021 study on vaccine hesitancy in Canada showed how social media conversations on vaccinations were heavily influenced by discussions in the U.S. It turned out that Canadians followed a median of 32 Canadian accounts and 87 American accounts.

With such interconnected information ecosystems, anti-science harassment directed at American researchers routinely spills into Canadian online spaces. This is worsened when senior officials in the U.S. administration publicly express a lack of support for immunization and evidence-based health recommendations.

Canadian universities and academic institutions need to develop mitigation and support strategies to deal with online harassment fuelled by these realities.

We can learn from action plans by U.S.-based universities and coalitions. Canada can also learn from models in countries like the Netherlands that have created national initiatives to support researchers experiencing harassment.

Hostility that threatens public health

While academics should be comfortable having their ideas challenged, technology-facilitated harassment is very different. Online harassment is often linked with other forms of targeted abuse and includes acts of doxxing, reputation attacks or threatening and sexualized messaging, among others.

Though this hostility often targets individuals working on politically contested issues, researchers from equity-deserving groups face online abuse that builds on systemic inequities related to race, gender, sexuality and other identity factors.

Online abuse can harm mental health, provoke fears about employment or grants and undermine academic freedom, as the Canadian Association of University Teachers observes. Our research found that health communicators faced the “psychological toll” of reading hostile emails day after day, with several reporting fear, sadness or anxiety in response to threats of violence.

A racialized expert recounted how personal attacks on her appearance and background “take a toll,” while a health journalist said that messages like one wishing her “blood clots” sometimes kept her awake at night. Several interviewees described exhaustion, worry and depressive symptoms, highlighting the hidden burden of online harassment.

Besides having serious personal, institutional and societal consequences, this reality risks creating information gaps that could be quickly filled by conspiracy theories. Some health researchers decided to stop media interviews or social media posts on controversial issues. So should they simply avoid public engagement on contentious topics?

While this approach might lessen the risks, it would also dramatically reduce the impact of their expertise. Public engagement is not only a key part of research grants but it also ensures that Canadians benefit directly from research.

Currently, scholars and public health communicators targeted with online abuse mainly use individual coping strategies such as deleting social media accounts, withdrawing from public communication or accepting abuse as inevitable.

These strategies, however, leave individuals to address attacks in isolation. While such measures provide temporary relief, they reinforce self-censorship and hamper public access to expert knowledge.

The need for ‘wraparound’ support

Institutions need to adopt “wraparound” support. This approach acknowledges researcher agency and institutional responsibility through a rights-based framework. It also shifts responsibility from individuals to institutions.

Unlike many universities’ current siloed and inflexible approach, a wraparound approach co-ordinates and integrates multiple domains of support.

For instance, some targeted individuals may not face legal or safety risks but can benefit from psychological support. Others may need assistance with cybersecurity risks or removing online mentions of personal information like their home address or children’s school.

Our institution, the University of British Columbia, for example, offers cybersecurity assistance, mental health support and other key elements of a response.

However, when we consulted faculty and staff, we learned that people found it daunting to figure out all the supports available and how to access them. We created an online resource to help. York University solved that problem by creating a map.

Canadian universities can also turn to international models for inspiration. Fourteen universities in the Netherlands, for instance, participate in a joint SafeScience initiative, which offers guidance and a national helpline to report incidents. Germany’s SciComm-Support provides resources, training and free counselling to researchers.

If we expect scientists and health experts to speak out about issues like measles vaccination for the good of society, they must know that their employers and institutions will stand with them and have their backs.

Canada cannot prepare for future public health emergencies, like another pandemic, without protecting the safety of researchers and their freedom to pursue their lines of inquiry without fear.

Immunity and Society is a new series from The Conversation Canada that presents new vaccine discoveries and immune-based innovations that are changing how we understand and protect human health. Through a partnership with the Bridge Research Consortium, these articles — written by academics in Canada at the forefront of immunology and biomanufacturing — explore the latest developments and their social impacts.

The Conversation

Heidi J. S. Tworek receives funding from the Bridge Research Consortium (BRC), part of Canada’s Immuno-Engineering and Biomanufacturing Hub, which in turn is funded by the Canada Biomedical Research Fund, Canada Foundation for Innovation and the BC Knowledge Development Fund. She also receives funding from the Canada Research Chair Programme and the Social Sciences and Humanities Research Council.

Chris Tenove receives funding from the Bridge Research Consortium (BRC), part of Canada’s Immuno-Engineering and
Biomanufacturing Hub, which in turn is funded by the Canada Biomedical Research Fund, Canada Foundation for Innovation and the BC Knowledge Development Fund.

Netheena Neena Mathews receives funding from the Bridge Research Consortium (BRC), part of Canada’s Immuno-Engineering and Biomanufacturing Hub, which in turn is funded by the Canada Biomedical Research Fund, Canada Foundation for Innovation and the BC Knowledge Development Fund.

ref. Online harassment is silencing Canada’s health experts — institutions need to do more to protect them – https://theconversation.com/online-harassment-is-silencing-canadas-health-experts-institutions-need-to-do-more-to-protect-them-267532

An important wetland in Ghana is under siege. Researchers investigate the real issues

Source: The Conversation – Africa – By Stephen Leonard Mensah, PhD Candidate, University of Memphis

Wetlands are vital ecological resources that provide several benefits in urban and peri-urban areas. They slow down flood waters, and act as a source of fishing and farming livelihoods. They also provide socio-cultural benefits for local communities. But some of these valuable ecosystems, due to their presence in prime locations, are at the centre of competing cultural, ecological and economic interests. Property development, especially, is a threat to wetlands.

The 2025 Global Wetland Outlook emphasises that the protection of wetlands is key to sustainable development. However, since 1970, about 411 million hectares of wetlands have been lost. In Africa, degradation is widespread and many wetlands are in poor condition.

We are a multidisciplinary team of researchers working in the area of resilience, sustainability and justice in urban transitions.

Our research highlights some of the local-level issues and conflicting interests that are shaping the rapid destruction of the Sakumono Ramsar Site in Tema, Ghana. Under the Ramsar Convention, a Ramsar site is a designated wetland with special natural significance.

We found institutional complicity and the lack of engagement with communities to be key drivers shaping current wetland conditions. Our study proposes a model for enforcing regulations and asserting the community’s right to nature for socio-cultural purposes.




Read more: A root cause of flooding in Accra: developers clogging up the city’s wetlands


Tema: wetlands in an industrial city

Tema was developed from a small fishing community into an industrialised port city by independent Ghana’s first president, Kwame Nkrumah. Its purpose was to facilitate international trade and vibrant economic development. It is one of Ghana’s most important cities and has been experiencing urban expansion and land use changes. This has led to encroachment in environmentally sensitive areas, including the Ramsar site.

The Sakumono wetland was officially designated a Ramsar site in 1992 to protect its rich biodiversity. It covers about 1,400 hectares and is protected by several regulations, including the Wetland Management Regulations Act, 1999.

But the site has, over the years, witnessed rapid depletion and intense encroachment from property development. Approximately 80% of the Sakumono Ramsar Site has been encroached on, leaving only about 20% of the wetland intact.

The population in the wetland’s catchment area grew from about 114,600 in 1984 to over 500,000 by 2000, indicating that large numbers of people live around and rely on the wetland. Although the exact number of people currently affected by the encroachment is unknown, the dense surrounding population suggests that many households, especially those engaged in farming and fishing, have likely experienced reduced access and livelihood displacement. Like other wetlands in Ghana, the Sakumono Ramsar site risks eventual destruction if nothing is done to reverse current trends.

The president of Ghana has called for heavy punishment for individuals who encroach on Ramsar sites. Both community and institutional respondents in our research claimed, however, that it was the political elites who were behind unbridled property development in the first place.




Read more: Flooding incidents in Ghana’s capital are on the rise. Researchers chase the cause


Multiple and conflicting interests in wetlands management

The main objective of our study was to analyse stakeholders’ perspectives on the use, value and management of wetlands. We evaluated the impact of these views on the sustainable management of ecologically sensitive areas. We conducted in-depth interviews with community residents, community leaders and opinion leaders. We also interviewed officials from metropolitan and municipal assemblies. The research was conducted in the Sakumono community, where the Sakumono Ramsar site is located.

Conflicting views on wetlands value: while the value of the site lies in its economic and ecological benefits, community residents were more interested in its economic value. That is, how it provides livelihood opportunities through farming and fishing activities.

Residents wondered why developers were allowed to exploit portions of the wetlands for building purposes, while they were prevented from fishing and farming. One of the residents said:

See rich and influential people buying land in the wetland area and using it for building properties. But we are not permitted to fish there.

For state institutions, protecting the wetland meant restricting access for community members. They encouraged activities such as tree planting and periodic desilting.

Conflicting views on wetlands use: the views of stakeholders also showed the changing understanding of the use of wetlands. An official from the forestry commission revealed that the wetland was acquired by the state during the 1980s for conservation. But other institutional officials, such as those of the lands commission, revealed that it had become a prime area for property development. Powerful developers bypass the land registration process and build without a permit.

The size of the Ramsar site has reduced because people are acquiring the wetland, including the buffer area, for residential development. Even though the wetland area is demarcated as a protected area, many of the politically connected developers go behind us and build without a permit.

Conflicting views on wetlands management: our research revealed contradictions between state institutions and community stakeholders. For instance, traditional authorities were of the view that:

Since the management of the wetland is not under our control, we are not responsible for the current developments taking place in and around the demarcated area.

The traditional authorities said they were not consulted and did not benefit from the wetland. This perhaps explains why they watched on as destruction continued. A member of the traditional council said:

As leaders of the community, we are not consulted about how the wetland is managed. You always hear the forestry commission accusing community leaders that we are selling the land. We can’t sell land that does not belong to us.

Towards a community-based stewardship model

Communities should be at the centre of wetlands management. We propose a stewardship-based co-management model that enforces environmental and conservation regulations. It emphasises working with a range of stakeholders. This includes government agencies, traditional authorities and environmentally conscious community members. We call for an updated wetlands management plan that reflects recent changes, but that is also fair, responsible and protective for present and future generations. This is essential for building sustainable communities in Ghana and beyond.

The Conversation

Louis Kusi Frimpong receives funding from African Peacebuilding and Developmental Dynamics (APDD) through the Individual Research Fellowship (IRF).

Seth Asare Okyere and Stephen Leonard Mensah do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. An important wetland in Ghana is under siege. Researchers investigate the real issues – https://theconversation.com/an-important-wetland-in-ghana-is-under-siege-researchers-investigate-the-real-issues-269016

How does Narcan work? Mapping how it reverses opioid overdose can provide a molecular blueprint for more effective drugs

Source: The Conversation – USA – By Saif Khan, Ph.D. Candidate in Biology, University of Southern California

Naloxone competes with opioids for the same receptor on the surface of neurons. Matt Rourke/AP Photo

Naloxone, also known by the brand name Narcan, is one of the most important drugs in the United States’ fight against the opioid crisis. It reverses an opioid overdose nearly instantly, restarting breathing in a person who was unresponsive moments before and on the brink of death. To bystanders witnessing it being administered, naloxone can appear almost supernatural.

Although the Food and Drug Administration approved naloxone for medical use in 1971 and for over-the-counter purchase in 2023, exactly how it works is still unclear. Researchers know naloxone acts on opioid receptors, a family of proteins responsible for the body’s response to pain. When opioids such as morphine and fentanyl bind to these receptors, they produce not only pain relief and euphoria but also dangerous side effects. Naloxone competes with opioids for access to these receptors, preventing the drugs from triggering effects in the body. How it does this at the molecular level, however, has been an ongoing question.
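The competition described above follows textbook receptor pharmacology. As a rough sketch (my own illustration, not from the study, with made-up round numbers for concentrations and affinities), the classical Gaddum equation gives the fraction of receptors an agonist occupies when a competitive antagonist is also present:

```python
def fractional_occupancy(agonist, K_A, antagonist, K_B):
    """Fraction of receptors bound by an agonist in the presence of a
    competitive antagonist (classical Gaddum equation).

    agonist, antagonist: concentrations; K_A, K_B: dissociation constants
    (lower K means tighter binding). All values here are illustrative.
    """
    return (agonist / K_A) / (1 + agonist / K_A + antagonist / K_B)

# With no blocker, a fixed opioid concentration occupies most receptors...
no_blocker = fractional_occupancy(agonist=10.0, K_A=1.0, antagonist=0.0, K_B=1.0)
# ...but a high-affinity competitor at sufficient concentration displaces it.
with_blocker = fractional_occupancy(agonist=10.0, K_A=1.0, antagonist=50.0, K_B=0.5)

print(round(no_blocker, 2))    # 0.91 -> opioid dominates the receptors
print(round(with_blocker, 2))  # 0.09 -> opioid largely displaced
```

The key property this toy model captures is that a competitive antagonist doesn’t need to change the receptor itself: at high enough concentration and affinity, it simply crowds the agonist out, which is why naloxone is given in generous doses during an overdose.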

In our recently published research in the journal Nature, my team and I were able to provide some definitive evidence of how naloxone works by capturing images of it in action for the first time.

Knowing how to use naloxone can save lives.

Biology of opioids

To better grasp how naloxone works, it’s helpful to first zoom in on the biology behind opioids.

One member of the family of opioid receptors, MOR – short for µ-opioid receptor – is a central player in regulating the body’s response to pain. It sits on the surface of neurons, mostly in the brain and spinal cord, and acts as a communication hub.

When an opioid – such as an endorphin, one of the body’s natural painkillers – interacts with MOR, it changes the structure of the receptor. This change in shape allows what’s called a G protein to bind to the receptor and trigger a signal to the rest of the body to reduce pain, induce pleasure, or – in the case of overdose – dangerously slow breathing and heart rate.

When a molecule binds to the µ-opioid receptor, it changes its structure and elicits an effect. Antagonists like naloxone inactivate the µ-opioid receptor, while agonists like fentanyl activate it. (Bensaccount/Wikimedia Commons)

In everyday terms, MOR is like a lock on the outside of the cell. The G protein is the mechanism inside the lock that turns when the correct key – in this case, an endorphin or a drug like fentanyl – goes in. For decades, scientists believed that an opioid’s ability to enable this signaling cascade was linked to how effectively it reshaped the structure of the receptor – essentially, whether the lock could open wide enough for the internal mechanism of the G-protein to engage.

Yet, recent research – including our work – has revealed that the critical step to how opioids work is not how wide they open the lock but how well the mechanism works. G proteins act like a switch, releasing one molecule in exchange for another molecule that triggers the protein to send the signal that sets off opioid effects.

In essence, drugs like fentanyl, by acting on the receptor, transmit physical changes to G proteins that result in the switch flipping more rapidly. What we now see is that naloxone jams the mechanism, preventing the switch from flipping and sending the signal.

Capturing the switch

Researchers know that the effects of opioids are triggered when the G protein switch is flipped. But what does this process look like?

For years, attempts to visualize this mechanism were largely limited to two states – before the G protein binds to the µ-opioid receptor, and after the molecule is released from the G protein. The states in between were considered too unstable to isolate. My team and I wanted to capture these unseen states moment by moment as the switch flips and the molecule is released.

To do this, we used a technique called cryo-electron microscopy, which freezes molecules in motion to visualize them at near-atomic resolution. For both naloxone and the opioid drug loperamide (Imodium), we trapped the G protein bound to the opioid receptor right before it released the molecule.

We captured four distinct structural states leading up to the release of the molecule from the G protein.

The first of these, which we call the latent state, is the earliest form of the opioid receptor and G protein after they make contact. We found that both the opioid receptor and the G protein are inactive at this point. Moreover, naloxone stabilizes this latent state. What this means is that naloxone effectively jams the mechanism right at the start, preventing all subsequent steps required for activation.

Diagram of MOR and G protein in six different states of activation
How the µ-opioid receptor (top half of the structure) and G protein (bottom half of the structure) are configured is key to the effects of naloxone and opioids.
Saif Khan et al/Nature, CC BY-NC-ND

In the absence of naloxone, an opioid drug promotes a transition through the remaining three states: the G protein rotates and aligns itself with the receptor (engaged), swings open the door that blocks the molecule whose release flips the switch (unlatched), and holds that door open so the molecule can be released (primed) and send the signal that carries out the drug’s effects.

To confirm that our snapshots reflect what’s really happening, we performed extensive computational simulations to watch these four states change over time. Together, these findings point to the molecular root of naloxone’s therapeutic effects: By stalling the opioid receptor and G protein at a latent state, it shuts down opioid signaling, reversing an opioid overdose within minutes.
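As a loose analogy only, not a biochemical model, the sequence of states and naloxone’s stalling effect can be pictured as a tiny state machine. The state names below follow the article’s terms; the function name, the ligand strings and the progression rule are invented for illustration.

```python
from enum import Enum

class State(Enum):
    LATENT = 0     # receptor and G protein in contact, both inactive
    ENGAGED = 1    # G protein rotated into alignment with the receptor
    UNLATCHED = 2  # the "door" over the trigger molecule swings open
    PRIMED = 3     # door held open so the molecule can be released

def final_state(ligand):
    """Toy rule: an agonist drives the complex through every state,
    while naloxone stabilizes the latent state, so no later step,
    and hence no opioid signal, can occur."""
    if ligand == "naloxone":
        return State.LATENT
    state = State.LATENT
    for nxt in (State.ENGAGED, State.UNLATCHED, State.PRIMED):
        state = nxt
    return state
```

In this picture, `final_state("loperamide")` walks all the way to the primed state, while `final_state("naloxone")` never leaves latent, which is the whole therapeutic point.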

Visualizing new drugs

Designing a new key for a lock is most successfully done when you know exactly what that lock looks like. By mapping the exact sequence of how opioids interact with opioid receptors and pinpointing where different drugs can intervene in this process, our findings provide a blueprint for engineering the next generation of opioid medicines and overdose antidotes.

For example, one of the persistent challenges with naloxone is that it must often be administered repeatedly during an overdose. This is especially the case for fentanyl overdoses, where the opioid can outcompete or outlast the effects of the treatment.

Knowing that naloxone works by stalling the µ-opioid receptor in an early, latent state suggests that molecules that can bind more tightly or more selectively to this form of the receptor could be more effective at stabilizing this inactive state and thus preventing an opioid’s effects.

By uncovering the structure of molecules involved in opioid signaling, researchers may be able to develop drugs that provide longer-lasting protection against overdose.

The Conversation

This research was supported by the National Institutes of Health.

ref. How does Narcan work? Mapping how it reverses opioid overdose can provide a molecular blueprint for more effective drugs – https://theconversation.com/how-does-narcan-work-mapping-how-it-reverses-opioid-overdose-can-provide-a-molecular-blueprint-for-more-effective-drugs-269706

Absence of evidence is not evidence of absence – and that affects what scientific journals choose to publish

Source: The Conversation – USA – By Mark Louie Ramos, Assistant Research Professor of Health Policy and Administration, Penn State

Careful planning and analysis are part of trying to reduce the chance of a false-positive finding. Arnon Mungyodklang/iStock via Getty Images Plus

Should you believe the findings of scientific studies? Amid current concerns about the public’s trust in science, old arguments are resurfacing that can sow confusion.

As a statistician involved in research for many years, I know the care that goes into designing a good study capable of coming up with meaningful results. Understanding what the results of a particular study are and are not saying can help you sift through what you see in the news or on social media.

Let me walk you through the scientific process, from investigation to publication. The research results you hear about crucially depend on the way scientists formulate the questions they’re investigating.

The scientific method and the null hypothesis

Researchers in all kinds of fields use the scientific method to investigate the questions they’re interested in.

First, a scientist formulates a new claim – what’s called a hypothesis. For example, is having some genetic mutations in BRCA genes related to a higher risk of breast cancer? Then they gather data relevant to the hypothesis and decide, based on the data, whether that initial claim was correct or not.

It’s intuitive to think that this decision is cleanly dichotomous – that the researcher decides the hypothesis is either true or false. But of course, just because you decide something doesn’t mean you’re right.

If the claim is really false but the researcher decides, based on the evidence, it’s true – a false positive – they commit what’s called a Type 1 error. If the claim is really true but the researcher fails to see that – a false-negative conclusion – then they commit a Type 2 error.

Moreover, in the real world, it gets a little messier. It’s really hard to decide about the truth or falsity of a claim just based on what’s observed.

For that reason, most scientists employ what is called the null hypothesis significance testing framework. Here’s how it works: A researcher first states a “null hypothesis,” something that’s contrary to what they want to prove. For instance, in our example the null hypothesis is that BRCA genetic mutations are not associated with increased breast cancer occurrence.

The scientist still gathers data and makes a decision, but the decision is not about whether the null is true. Instead, a researcher decides whether there’s enough evidence to reject the null hypothesis or not.

man in white coat in lab looking at tablet
Careful statistical analysis along with a well-formulated null hypothesis lend confidence to a study finding.
Jackyenjoyphotography/Moment via Getty Images

What rejecting the null does and doesn’t mean

Understanding this distinction is crucial. Rejecting the null is equivalent in practice to acting as though it is false – in the example, rejecting the null means claiming that those with some BRCA gene mutations do have a higher risk of breast cancer. Along with other evidence, such as the size of the increased risk, this outcome can justify recommending early breast cancer screening for people with the identified BRCA mutations.

But failing to reject the null hypothesis doesn’t imply that it’s true – in this case, it doesn’t mean there is no association between the BRCA mutations and breast cancer. Rather, such a result is inconclusive; there’s not enough evidence to claim there is an association. A negative result – inadequate evidence to say the null is false – does not necessarily invite the researcher to believe the null is true.

This is because null hypothesis significance testing is set up to control for Type 1 error (false positive) at a level defined in advance by the researcher but at the cost of having less control over Type 2 error (false negative).

A researcher’s chances of correctly rejecting the null if there is increased risk can depend on how much data they have, how complex the design of the study is and, most importantly, how large the effect actually is. It’s much easier to reject the null if BRCA mutations truly increase cancer risk many times than it is if the risk is only slightly elevated. A researcher can end up with a result that is not statistically significant but cannot rule out the possibility of an increased risk that is too small for the study to detect.
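The trade-off described above can be sketched with a toy Monte Carlo simulation. This is an illustrative sketch, not the BRCA analysis: it uses a coin-flip “study” with an assumed sample size, a normal-approximation rejection rule and a made-up effect size.

```python
import random

def run_study(true_prob, n=100, rng=None):
    """Simulate one study of n coin flips testing the null hypothesis that
    the probability of heads is 0.5. Reject the null (a 'positive' result)
    when the head count falls outside the central ~95% range expected
    under the null (normal approximation: mean n/2, sd sqrt(n)/2)."""
    rng = rng or random.Random()
    heads = sum(rng.random() < true_prob for _ in range(n))
    return abs(heads - n / 2) > 1.96 * (n ** 0.5) / 2

rng = random.Random(0)
trials = 2000
# Null actually true: any rejection is a false positive (Type 1 error),
# and the rejection rate sits near the 5% level chosen in advance.
type1_rate = sum(run_study(0.5, rng=rng) for _ in range(trials)) / trials
# Null actually false (true probability 0.6): rejections are correct, and
# their frequency is the study's power against that effect size.
power = sum(run_study(0.6, rng=rng) for _ in range(trials)) / trials
```

Shrinking the true probability toward 0.5, or shrinking `n`, drags `power` down while leaving `type1_rate` pinned near 5 percent, which is exactly why a non-significant result cannot rule out a small effect.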

Which results are more often publicized

Once they have their result and the researchers want to disseminate their work, they typically do so through peer-reviewed publication. Journal publishers consider a researcher’s write-up of their study, send it out for other scientists to review, and then decide whether to publish it.

In this process, the publishers tend to favor studies that rejected their null hypothesis over those that failed to reject it. This is called positive publication bias.

It is natural for publishers to prefer studies that support new claims since they objectively carry more information than studies that failed to reject their null hypothesis. Journals want to publish something new and noteworthy.

Many sources flag this phenomenon as “bad science,” but is it really? Remember, the framework used to make decisions about scientific claims is intentionally only capable of either rejecting the null hypothesis – in other words, supporting the claim – or alternatively declaring inconclusive results.

The framework isn’t designed to be able to prove the null hypothesis. That said, researchers can reverse the design of a scientific investigation so that a previous claim becomes the null hypothesis in a new study with fresh data.

For instance, rather than a null hypothesis that there is no association between BRCA mutations and breast cancer, the null hypothesis becomes that the increased breast cancer risk from BRCA mutations is equal to or greater than some value the researcher settles on before gathering fresh data.

Rejecting the null this time would mean the increased risk is smaller than that set value, thus supporting the claim consistent with what had previously been the null hypothesis on prior data. In the example, rejecting the null means the effect of BRCA genes is small enough to be practically negligible in terms of developing breast cancer.

It’s critical for a researcher to structure their study so that what they’re interested in proving is aligned with the rejection of the null. Publishers are naturally less inclined to consider studies that failed to reject their null hypothesis, not because they do not want to publish studies that support negative statements but because null hypothesis significance testing does not actually support negative statements. Failure to reject the null just means your results are inconclusive – and may perhaps seem less newsworthy.

library shelves with research journals
Research journals want to publish results that will have an impact.
luoman/iStock via Getty Images Plus

What positive publication bias does

So what does the practice of preferring to publish studies that reject their null hypothesis do?

While we can’t know for certain, we can see how this plays out under different circumstances. You can explore the scenarios in this app I made.

If scientists are acting in good faith, using null hypothesis significance testing appropriately, it turns out that positive publication bias on the part of scientific journal publishers will increase the proportion of true discoveries in their pages much more than it will increase the proportion of false positives.

If editors did not exercise any positive publication bias, journals would be almost entirely full of studies with inconclusive results.
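The good-faith scenario can be reproduced with a short back-of-the-envelope simulation. The base rate of true hypotheses, the power and the study count below are assumptions chosen for illustration, not figures from the app mentioned above.

```python
import random

rng = random.Random(1)
N = 100_000         # hypothetical good-faith studies (assumed)
base_rate = 0.10    # fraction of tested hypotheses that are actually true (assumed)
alpha = 0.05        # Type 1 error rate controlled in advance
power = 0.80        # chance of rejecting the null when the effect is real (assumed)

published_all = []     # a journal with no publication bias prints everything
published_biased = []  # a positively biased journal prints rejections only
for _ in range(N):
    effect_real = rng.random() < base_rate
    rejected = rng.random() < (power if effect_real else alpha)
    if rejected:
        outcome = "true_discovery" if effect_real else "false_positive"
    else:
        outcome = "inconclusive"
    published_all.append(outcome)
    if rejected:
        published_biased.append(outcome)

def share(pages, kind):
    """Fraction of a journal's pages carrying a given kind of result."""
    return pages.count(kind) / len(pages)
```

Under these assumptions the unbiased journal is roughly 87% inconclusive results, while the biased journal runs at roughly two true discoveries for every false positive.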

Of course, if scientists are not acting in good faith and are just interested in getting published while ignoring proper use of statistical tests, that can lead to false-positive rates being as high or higher than the rate of true discoveries. But this possibility is true even without positive publication bias.

The Conversation

Mark Louie Ramos does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Absence of evidence is not evidence of absence – and that affects what scientific journals choose to publish – https://theconversation.com/absence-of-evidence-is-not-evidence-of-absence-and-that-affects-what-scientific-journals-choose-to-publish-264854

Sir Richard Branson’s wife Joan Templeman dies aged 80

Source: Radio New Zealand

Joan Templeman, the wife of British billionaire Sir Richard Branson, has died aged 80.

The Virgin Records founder and airline tycoon shared the news on his Instagram page, writing that he was “heartbroken” to announce that Ms Templeman, his partner of almost 50 years, had died.

“She was the most wonderful mum and grandmum our kids and grandkids could have ever wished for,” Sir Richard, 75, wrote.

– Published by EveningReport.nz and AsiaPacificReport.nz, see: MIL OSI in partnership with Radio New Zealand

Pentagon investigation of Sen. Mark Kelly revives Cold War persecution of Americans with supposedly disloyal views

Source: The Conversation – USA – By Gregory A. Daddis, Professor and Melbern G. Glasscock Endowed Chair in American History, Texas A&M University

Arizona Sen. Mark Kelly speaks at a town hall meeting hosted by the South Carolina Democratic Party in Columbia, S.C., on Sept. 12, 2025. Bill Clark/CQ-Roll Call, Inc via Getty Images

In an unprecedented step, the Department of Defense announced online on Nov. 24, 2025, that it was reviewing statements by U.S. Sen. Mark Kelly, a Democrat, who is a retired Navy captain, decorated combat veteran and former NASA astronaut.

Kelly and five other members of Congress with military or intelligence backgrounds told members of the armed forces “You can refuse illegal orders” in a video released on Nov. 18, reiterating oaths that members of the military and the intelligence community swear to uphold and defend the Constitution. The legislators said they acted in response to concerns expressed by troops currently serving on active duty.

President Donald Trump called the video “seditious behavior, punishable by death.”

Retired senior officers like Kelly can be recalled to duty at any time, which would make it possible for the Pentagon to put Kelly on trial under the Uniform Code of Military Justice, although the Defense Department announcement did not specify possible charges. Defense Secretary Pete Hegseth wrote online that “Kelly’s conduct brings discredit upon the armed forces and will be addressed appropriately.”

This threat to punish Kelly is just the latest move by the Trump administration against perceived enemies at home. By branding critics and opponents as disloyal, traitorous or worse, Trump and his supporters are resurrecting a playbook that hearkens back to Sen. Joseph McCarthy’s crusade against people he portrayed as domestic threats to the U.S. in the 1950s.

As a historian who studies national security and the Cold War era, I know that McCarthyism wrought devastating social and cultural harm across our nation. In my view, repeating what I believe constitutes social and political fratricide could be just as harmful today, perhaps even more so.

Targeting homegrown enemies

In the late 1940s and early 1950s, many Americans believed the United States was a nation under siege. Despite their victory in World War II, Americans saw a dangerous world confronting them.

The communist-run Soviet Union held Eastern Europe in an iron grip. In 1949, Mao Zedong’s communist troops triumphed in the bloody Chinese civil war. One year later, the Korean peninsula descended into full-scale conflict, raising the prospect of World War III – a frightening possibility in the atomic era.

Anti-communist zealots in the U.S., most notably Wisconsin Republican Sen. McCarthy, argued that treasonous Americans were weakening the nation at home. During a February 1950 speech in Wheeling, West Virginia, McCarthy asserted that “the traitorous actions of those who have been treated so well by this nation” were undermining the United States during its “final, all-out battle” against communism.

When communist forces toppled China’s government, critics such as political activist Freda Utley lambasted President Harry Truman’s administration for what they cast as its timidity, blundering and, worse, “treason in high places.” Conflating foreign and domestic threats, McCarthy claimed without evidence that homegrown enemies “within our borders have been more responsible for the success of communism abroad than Soviet Russia.”

From 1950 through 1954, Sen. Joseph McCarthy, a Wisconsin Republican, used his role as chair of two powerful Senate committees to identify and accuse people he thought were Communist sympathizers. Many of those accused lost their jobs even when there was little or no evidence to support the accusations.

As ostensible proof, the senator pointed to American lives being lost in Korea and argued that it was possible to “fully fight a war abroad and at the same time … dispose of the traitorous filth and the Red vermin which have accumulated at home.”

Political opponents disparaged McCarthy for his “dishonest and cowardly use of fractional fact and innuendo,” but the Wisconsinite knew how to play to the press. Time and again, McCarthy would bombastically lash out against his critics, as he did with columnist Drew Pearson, calling him “an unprincipled liar,” “a fake” and the owner of a “twisted perverted mentality.”

While McCarthy focused on allegedly disloyal government officials and media journalists, other self-pronounced protectors of the nation sought to warn naive members of the public. Defense Department pamphlets like “Know Your Communist Enemy” alerted Americans against being duped by Communist Party members skilled in deception and manipulation.

Virulent anti-communists denounced what they viewed as inherent weaknesses of postwar American society, with a clearly political bent. Republicans asserted that cowardly, effeminate liberals were weakening the nation’s defense by minimizing threats both home and abroad.

Censure and worse

In such an anxiety-ridden environment, “red-baiting” – discrediting political opponents by linking them to communism – spread across the country, leaving a trail of wrecked lives. From teachers to public officials, anyone deemed un-American by McCarthyites faced public censure, loss of employment or even imprisonment.

Under the 1940 Smith Act, which criminalized promoting the overthrow of the U.S. government, hundreds of Americans were prosecuted during the Cold War simply for having been members of the Communist Party of the United States. The act also authorized the “deportation of aliens,” reflecting fears that communist ideas had seeped into nearly all facets of American society.

The 1950 Internal Security Act, widely known as the McCarran Act, further emphasized existential threats from within. “Disloyal aliens,” a term the law left purposefully vague, could have their citizenship revoked. Communist Party members were required to register with the government, a step that made them susceptible to prosecution under the Smith Act.

Immigrants could be detained or deported if the president declared an “internal security emergency.” Advocates called this policy “preventive detention,” while critics derided the act as a “Concentration Camp Law,” in the words of historian Masumi Izumi.

Scapegoating outsiders

The scaremongering wasn’t just about people’s political views: Vulnerable groups, such as gay people, were also targeted. McCarthy warned of links between “communists and queers,” asserting that “sexual perverts” had infested the U.S. government, especially the State Department, and posed “dangerous security risks.” Closeted gay or lesbian employees, the argument went, were vulnerable to blackmail by foreign governments.

Fearmongering also took on a decidedly racist tone. South Carolina Governor George Bell Timmerman, Jr., for instance, argued in 1957 that enforcing “Negro voting rights” would promote the “cause of communism.”

Three years later, a comic book titled “The Red Iceberg” insinuated that communists were exploiting the “tragic plight” of Black families and that the NAACP, a leading U.S. civil rights advocacy group, had been infiltrated by the Kremlin. Conservatives like Arizona Sen. Barry Goldwater criticized the growing practice of using federal power to enforce civil rights, calling it communist-style social engineering.

In an interview on Oct. 13, 2024, then-candidate Donald Trump described Democratic Party rivals as ‘the enemy from within’ and suggested using the armed forces against ‘radical left lunatics’ on Election Day.

A new McCarthyism

While it’s never simple to draw neat historical parallels from past eras to the present, McCarthy-like actions appear to be recurring widely today. During the Red Scare, the focus was on alleged communists. Today, the focus is simply on dissent. Past and present critics of President Donald Trump’s actions and policies are being targeted.

At the national level, Trump has called for using military force against “the enemy from within.” On Sept. 30, 2025, Trump told hundreds of generals and admirals who had been called to Quantico, Virginia, from posts around the world that the National Guard should view America’s “dangerous cities as training grounds.”

The Trump administration is making expansive use of the McCarran Act to crack down on immigrants in U.S. cities. White House adviser Stephen Miller has proposed suspending the constitutionally protected writ of habeas corpus, which entitles prisoners to challenge their detentions in court, in order to deport “illegal aliens,” alleging that the U.S. is “under invasion.”

In my home state of Texas, political fearmongering has taken on an equally McCarthyesque tone, with the Legislature directing the State Board of Education to adopt mandatory instruction on “atrocities attributable to communist regimes.”

Perhaps it is unsurprising, then, that right-wing activist Laura Loomer has unapologetically called for “making McCarthy great again.”

Disagreement is democratic

The history of McCarthyism shows where this kind of action can lead. Charging political opponents with treason and calling the media an “enemy of the people,” all without evidence, undercuts democratic principles.

These actions cast certain groups as different and dehumanize them. Portraying political rivals as existential threats, simply for disagreeing with their fellow citizens or political leaders, promotes forced consensus. This diminishes debate and can lead to bad policies.

Americans live in an insecure world today, but as I see it, demonizing enemies won’t make the United States a safer place. Instead, it only will lead to the kind of harm that was brought to pass by the very worst tendencies of McCarthyism.

The Conversation

Gregory A. Daddis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Pentagon investigation of Sen. Mark Kelly revives Cold War persecution of Americans with supposedly disloyal views – https://theconversation.com/pentagon-investigation-of-sen-mark-kelly-revives-cold-war-persecution-of-americans-with-supposedly-disloyal-views-265964

A database could help revive the Arapaho language before its last speakers are gone

Source: The Conversation – USA – By Andrew Cowell, Professor of Linguistics, University of Colorado Boulder

There are fewer than 100 speakers of the Arapaho language today. Mark Makela/GettyImages

I was hired at the University of Colorado Boulder in 1995 as a language professor. I relocated from Hawaii, where I had learned the Hawaiian language.

When I arrived in Colorado, I decided I needed to learn about the Indigenous language of the Boulder and Denver area, Arapaho. The Arapaho people had occupied the area for many years until they were forced to leave in the 1860s.

I first visited the Northern Arapaho people on the Wind River Reservation in Wyoming in 1999. At that time, there were hundreds of speakers of the Arapaho language.

Today, there are fewer than 100, and all are over the age of 70.

The Arapaho people in Wyoming and Colorado believe their language can still survive, and so do I. That’s why I am working to combine decades of language documentation with new technological approaches in order to help revive the language.

Loss of Native languages

Many Native American languages currently have few Native speakers, and the speakers are typically the oldest members of the community. The languages of the Wichita and Kansa people, for example, are among many that are no longer spoken at all.

Native American languages have been in decline in the face of Euro-American pressure for centuries.

On the Great Plains, this decline accelerated after World War II when Native soldiers came home after seeing prosperity off the reservation.

Arapaho elders tell me that bilingual parents decided to speak only English to their children to improve their chances of success in life. They were certain the tribal languages would come “later.”

But “later” didn’t happen. Boarding schools had already been suppressing the language, and now economic improvements brought cars, radios and televisions to Wind River, further promoting the use of English. Without language exposure in the home, children were not able to acquire good speaking abilities.

A documentary from Rocky Mountain PBS about Native American people who lost their language as children.

Today, however, tribal communities around the country increasingly want to maintain or reacquire their languages. Efforts to do this have been going on for several decades, with some successes, such as the Mohawk language of New York and Canada, Cherokee in Oklahoma and North Carolina, and the Blackfoot language of northern Montana.

In most places, however, the number of Native speakers continues to decline, while learning among younger speakers progresses slowly.

Uses of data for curriculum

My early work focused on documenting the Arapaho language. Past linguists working with Native languages typically focused on traditional storytelling, as well as audio-recorded data. But my interest in anthropology led me to focus on conversation and everyday interaction. I also recorded on video to capture social settings, gestures and sign language. And to better understand the role of the language in daily use, I worked to become a good speaker myself.

I have compiled my documentation into a database that contains over 100,000 sentences of natural Arapaho speech. All of this has been transcribed, translated into English and accompanied by detailed linguistic analysis.

The database is further supported by an online learning site and an online dictionary of around 25,000 entries. They are among the largest such resources for an Indigenous language, though resources do exist for other languages, such as Yurok.


From documentation to curriculum

In response to the Arapaho people’s goal of language revitalization, my own work has shifted from documentation to assisting teachers, students and curriculum developers. The database turns out to have great value in this area.

Adult learners can watch the videos along with the Arapaho transcriptions or English translations, or both, and review the detailed grammatical analysis.

However, it is quite difficult for young learners to immediately benefit from listening to natural discourse. That’s why carefully graded curricula are crucial. Unlike for commonly taught languages such as French or Spanish, materials for most Native American languages are just being developed.

Arapaho can be challenging to learn because its structure is quite different from English. Many small chunks of meaning are combined to produce long, complex words. For example, an English speaker can start with “happy” and produce “un-happi-ness.” Arapaho speakers typically add three, four or even five prefixes, and multiple suffixes as well. A speaker can say the word “niibeetwonwoteekoohunoo,” which has six separate meaningful chunks and translates to English as “I want to go and drive to town.”

There is little value in memorizing such complex words, just as English learners don’t memorize entire sentences. Instead, Arapaho learners need to understand the separate parts, and how they combine.

Previous efforts have succeeded in teaching children to speak basic Arapaho. The challenge now is to keep improving their Arapaho language abilities, using a graded curriculum that continues through all school levels.

The database can identify and label the individual chunks of words, and assign meanings to each chunk. A beginner’s dictionary of 1,300 entries has been created by calculating the overall frequency of base words in the 100,000 sentences, and then selecting only the most common ones.

The list has been broken down further to produce target vocabulary for each grade level. Smaller chunks of prefixes and suffixes are also measured, and sequential grammar-learning goals can be produced based on frequency and complexity.
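The frequency-based selection described above can be sketched in a few lines. This is a generic illustration with an invented English stand-in corpus and a hypothetical function name, not the actual Arapaho database tooling, and it assumes each sentence has already been reduced to its base words.

```python
from collections import Counter

def core_vocabulary(sentences, size):
    """Rank base words by how often they appear across the corpus and keep
    only the most common ones: the idea behind distilling a beginner's
    dictionary from a large database of transcribed sentences."""
    counts = Counter(word for sentence in sentences for word in sentence)
    return [word for word, _ in counts.most_common(size)]

# Invented English stand-in corpus; each inner list plays the role of one
# transcribed sentence already reduced to base words (lemmas).
corpus = [
    ["want", "go", "town"],
    ["go", "town", "now"],
    ["want", "sleep"],
]
```

Run over 100,000 real sentences with `size=1300`, the same procedure would yield a beginner’s dictionary of the most common base words, and re-running it per grade band gives the graded target vocabulary the article describes.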

A draft Arapaho learning sequence has been created, with 44 stages. It is now possible for the first time to produce a full, progressive language curriculum for Arapaho. The next step is to develop more curricular materials and train teachers to use them.

The sequence of 44 stages is now being introduced at Wyoming Indian Elementary School, the first school on the Wind River Reservation to pioneer dual-language classrooms.

Limitations of technology

Technology is not a magic bullet, however. Only Native people can save their languages, by choosing to learn and speak them.

Because today’s artificial intelligence tools are built on large language models, they need billions of words of discourse to be trained effectively in a language. No Indigenous language has nearly that amount of data, so the capacity of AI to address Native language endangerment is limited. Moreover, many Indigenous communities are wary of AI due to concerns over data sovereignty and cultural property rights.

A man in a red gingham shirt holds a colorful quilted blanket.
The author, Andrew Cowell, is recognized for his Arapaho language revitalization at a 2018 ceremony on the Wind River Reservation in Wyoming.
Courtesy of Andrew Cowell.

My own old-fashioned experience as a learner and teacher has proved crucial. I can see where difficulties lie for learners, and how to fine-tune computational measurements and predictions. I’ve learned that success in helping revitalize Native languages depends on researchers building long-term relationships with Native peoples and, ideally, speaking Native languages. Only then can new technologies be applied most productively.

The Conversation

Andrew Cowell currently receives funding from National Science Foundation. Past funding related to the work described here has come from the American Council of Learned Societies and Hans Rausing Endangered Language Documentation Programme.

He has received compensation from elements of the Northern Arapaho Tribe and the Southern Cheyenne and Arapaho Tribe for some of his assistance and consultation.

ref. A database could help revive the Arapaho language before its last speakers are gone – https://theconversation.com/a-database-could-help-revive-the-arapaho-language-before-its-last-speakers-are-gone-269592

Why posture during movement matters more than how you sit or stand

Source: Radio New Zealand

If you’ve ever caught yourself slouching at your desk and immediately jerked your shoulders back and straightened your spine, you know the struggle. Within minutes, you’re slumped over again.

Here’s why: Rigid corrections are impossible to maintain because they treat posture like a static position to hold rather than a dynamic skill to train.

Good posture isn’t something you freeze into place — it’s a fluid, balanced and aligned state that you build with proper breathing, strength and mobility.

Proper breathing can improve your posture, enhance your mobility, and relieve aches and pains.

Unsplash

– Published by EveningReport.nz and AsiaPacificReport.nz, see: MIL OSI in partnership with Radio New Zealand