Why protecting Colorado children from dying of domestic violence is such a hard problem

Source: The Conversation – USA – By Kaitlyn M. Sims, Assistant Professor of Public Policy, University of Denver; Institute for Humane Studies

More than one-third of homicides of women are perpetrated by intimate partners, and there has been a steady increase in domestic violence-related deaths of children. Alvaro Medina Jurado/Getty Images

A record number of Colorado children died in 2024 as a result of domestic violence, despite a statewide reduction in overall homicide.

Of the eight children who died, five were involved in active custody disputes. These deaths took place when families faced high stress but also when legal systems should have been well placed to intervene. Multiple children were killed alongside a sibling or a parent.

As a researcher studying domestic violence, crime and anti-violence policy, I have watched these numbers with a sense of resignation rather than surprise.

Domestic violence homicide is persistent. Local, state and federal governments spend millions of dollars each year to operate hotlines, fund shelters and engage in prevention programs for victims of domestic violence. Yet more than one-third of homicides of women are still perpetrated by intimate partners. And there has been a steady increase nationally in domestic violence-related deaths of children over the past 20 years.

It’s clear that something is different about domestic violence that resists our attempts to reduce overall violent crime. But researchers have struggled to identify exactly what those differences are in ways that can inform effective policy.

To start addressing these deaths, we first need to effectively measure them, a task that is more challenging than one might expect.

Measuring domestic violence

Studying domestic violence is, at best, difficult — not least because data is highly limited.

Researchers often try to ask causal questions about what works to prevent domestic violence. To do this, they use large-scale national datasets, including the Uniform Crime Reporting Program and the National Incident-Based Reporting System. However, these datasets are often incomplete or have inconsistent reporting from responding agencies.

Law enforcement may not recognize a fatality as resulting from domestic violence if abuse was not previously reported. It is particularly challenging to identify whether a death involved dating or sexual partners unless witnesses who knew the victim well cooperate with the investigation.

Additionally, the vast majority of victims of domestic abuse do not contact law enforcement or seek medical care. Often, this is due to fears that police will not believe them or that their abuser will find out. Parents may worry their abuser could take custody of their children, or that calling 911 will instigate child welfare system involvement.

The result is that half of the perpetrators of domestic violence fatalities in Colorado in 2024 did not have a prior domestic violence-related arrest. Only one-fifth had been previously convicted of domestic violence.

Domestic violence affects more than intimate partners

Domestic violence affects more than intimate partners or spouses. It can also affect siblings, roommates and even neighbors, co-workers or bystanders. These are collateral victims – people harmed by domestic violence without directly being part of the abusive relationship.

9News reports on the increase in domestic violence-related deaths in 2024.

Colorado and Wisconsin have expanded their definition of domestic violence fatalities to account for some of these collateral deaths. For years, Colorado has included abusers who died by suicide, or whom law enforcement killed in the line of duty, in statewide counts. But states disagree on how wide to cast the net, making comparisons between states difficult.

These fatality reviews are further hamstrung by the boundary between domestic violence and child abuse.

In Colorado, deaths due to child abuse and neglect are counted in the Domestic Violence Fatality Report only if the death can be traced to violence between intimate partners. Children can therefore get lost in the count when violence between parents or caregivers is hidden behind closed doors.

What we don’t know can hurt us

These data gaps make it harder to understand, predict and prevent domestic violence. Policymakers struggle to gather up-to-date information to craft effective public safety policy, including decisions about how and when to detain alleged abusers before their day in court.

In Colorado, pretrial detention recommendations are made using a rigid scoring rubric. This rubric includes the accused’s prior criminal sentences or time served in jail or prison. However, it does not include information about domestic violence protection orders or prior charges that did not result in conviction.

In general, this is a well-intended policy that upholds the principle of “innocent until proven guilty.” But in domestic violence cases, it creates a catch-22. The vast majority of abusers have never been found guilty in court. This can be due to dropped charges, lack of victim cooperation or unclear evidence. These abusers can have long histories of abusive behavior that aren’t visible to a judge when making pretrial detention decisions.

Designing effective prevention and response

Despite these challenges, policymakers have made substantial steps forward.

In 2022, the national Bipartisan Safer Communities Act closed the so-called “boyfriend loophole,” under which people convicted of domestic violence against a spouse were barred from gun ownership while those who abused dating partners were exempt. This is particularly important given that the majority of firearm mass shootings in the U.S. are domestic violence-related.

States and counties nationally are improving how courts assign pretrial detention and how law enforcement arrests and charges offenders. Mandatory arrest policies require law enforcement to make an arrest when they suspect abuse. No-drop orders prevent abusers from intimidating survivors into dropping charges.

However, these laws have limited effectiveness and introduce new harms, including increasing domestic violence homicides. Colorado’s own mandatory arrest law has been criticized for increasing arrests of victims of domestic violence. This can threaten victims’ own custody of their children and cause further economic precarity, increasing the risk of lethal violence.

Because laws and law enforcement cannot do everything or support every survivor, solutions must come from outside of the criminal-legal system. Community-based services and programs such as emergency housing, counseling and cash assistance help survivors to overcome barriers to safety.

Adams County, Colorado, unveils new Family Justice Center to help domestic violence survivors.

However, access to these programs and services varies. Not all counties – in Colorado or most other states – have emergency domestic violence shelters. Recent federal funding cuts threaten many programs’ continued operations. Even when programs exist, local availability of housing and services can limit service providers’ effectiveness for their adult clients and their children.

Failing to effectively measure, prevent and respond to domestic violence can be a matter of life and death. Given how survivors’ needs vary, policymakers need to recognize that policy solutions and programs are not one-size-fits-all. And tailored, local policy solutions require improved data and better resources.


The Conversation

Kaitlyn M. Sims receives funding from the Wisconsin Department of Children and Families, the Arnold Ventures Foundation, and the Institute for Humane Studies.

ref. Why protecting Colorado children from dying of domestic violence is such a hard problem – https://theconversation.com/why-protecting-colorado-children-from-dying-of-domestic-violence-is-such-a-hard-problem-268836

Ranked choice voting outperforms the winner-take-all system used to elect nearly every US politician

Source: The Conversation – USA – By Ismar Volić, Professor of Mathematics, Director of Institute for Mathematics and Democracy, Wellesley College

Ranked choice voting makes use of more information from the voters than plurality voting. stefanamer/Getty Images

American democracy is straining under countless pressures, many of them rooted in structural problems that go back to the nation’s founding. Chief among them is the “pick one” plurality voting system – also called winner-take-all – used to elect nearly all of the 520,000 government officials in the United States.

In this system, voters select one candidate, and the candidate who receives the highest number of votes wins.

Plurality voting is notorious for producing winners without majority support in races that have more than two candidates. It can also create spoilers, or losing candidates whose presence in a race alters the outcome, as Ralph Nader’s did in the 2000 presidential election. And it can result in vote-splitting, where similar candidates divide support, paving the way for a less popular winner. This happened in the 2016 Republican primaries when Marco Rubio, Ted Cruz and John Kasich split the anti-Donald Trump vote.

Plurality can also encourage dishonest voting. That happens when voters are pressured to abandon their favorite candidate for one they like less but think can win. In the 2024 elections, for example, voters whose preference for president was Jill Stein, the Green Party nominee, might have instead cast their vote for Democrat Kamala Harris.

An increasingly well-known alternative to plurality voting is ranked choice voting. It’s used statewide in Maine and Alaska and in dozens of municipalities, including New York City.

Better performance

Whereas plurality voting allows voters to select only one candidate, ranked choice lets them rank candidates. If a candidate secures a majority of first-place rankings, they are the winner just like they would be under plurality.

But the two systems diverge when there is no majority winner. Plurality simply declares the candidate with the most first-place votes the winner, while ranked choice voting eliminates the candidate with the fewest first-place votes and transfers their votes to the next candidate on each ballot. The process repeats until a candidate has a majority.
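The elimination-and-transfer process described above can be sketched in a few lines of Python. This is a simplified model for illustration only; real tabulation rules also cover ties, exhausted ballots and write-ins:

```python
from collections import Counter

def instant_runoff(ballots):
    """Tabulate a ranked choice (instant-runoff) election.

    Each ballot is a list of candidates in preference order.
    Repeatedly eliminate the candidate with the fewest
    first-place votes until someone holds a majority.
    """
    ballots = [list(b) for b in ballots if b]
    while True:
        tally = Counter(b[0] for b in ballots)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:          # strict majority of remaining ballots
            return leader
        # Eliminate the candidate with the fewest first-place votes
        # (a real system would need a tie-breaking rule here).
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]
        ballots = [b for b in ballots if b]  # drop exhausted ballots

# Example: no majority on the first count, so C is eliminated and
# C's ballot transfers to B, giving B a majority.
ballots = [["A"], ["A"], ["B"], ["B"], ["C", "B"]]
print(instant_runoff(ballots))  # prints "B"
```

Running the example shows the transfer at work: A and B tie at two first-place votes each, but the eliminated candidate's second choice decides the race.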

Ranked choice voting makes use of more information from the voters than plurality, but does it avoid some of the problems plurality suffers from?

We are a team of mathematicians who recently concluded a study aimed at answering this and related questions. We analyzed some 2,000 ranked choice elections from the U.S., Australia and Scotland. We supplemented those real-world results with 60 million simulated elections.

The results were clear: Ranked choice voting performed much better across all the measures we tested, including spoiler, vote-splitting, strength of candidates and strategic voting.

Eugene Peltola Jr. holds the Bible during a ceremonial swearing-in for his wife, U.S. Rep. Mary Peltola, D-Alaska, on Capitol Hill in Washington, D.C., on Sept. 13, 2022.
AP Photo/Jose Luis Magana, File

Empowering voters

Plurality voting produced a spoiler up to 15 times more often than ranked choice voting. And it was 50% more likely to elect an extreme candidate. Plurality, furthermore, was highly susceptible to vote-splitting, while ranked choice voting was nearly impervious to it.

Ranked choice voting picked strong candidates up to 18 times more frequently than plurality voting, where by “strong” we mean candidates who received many first-place votes and also had broad support, even among their noncore supporters. This method also rarely elects a weak or fringe candidate and typically elects a candidate near the electorate’s ideological center.

Ranked choice voting is also more resistant to various forms of strategic behavior such as bullet voting, where voters choose only one candidate despite the ability to rank more, and burying, where voters disingenuously rank an alternative candidate lower in the hopes of defeating them.

Our research also studied the ways in which election systems can influence behavior. In a plurality election, voters are afraid that their ballot could be “wasted” on a candidate who doesn’t have a shot at winning, or that they might contribute to a spoiler. Our study shows that ranked choice voting largely avoids these pitfalls, empowering voters to express their true preferences rather than being strategic.

We found that candidates in ranked choice voting elections do best when they adopt the policies the greatest number of people support, meeting the voters where they are.

In Alaska’s 2022 special U.S. House election, for example, Democrat Mary Peltola positioned herself firmly within Alaska’s center-left base – while still embracing some positions considered conservative outside of Alaska. She won by garnering enough second-place votes from supporters of Republican Nick Begich.

And in the New York mayoral primary in June 2025, Zohran Mamdani won by creating a coalition with another progressive candidate, Brad Lander, and occupying a progressive space representing a range of voters.

The Alaska and New York examples highlight some differences with plurality voting, which often favors appealing to a narrow base without the necessity of reaching out beyond it.

Ballots are prepared to be tabulated for Maine’s 2nd Congressional District House election on Nov. 12, 2018, in Augusta, Maine. The election was the first congressional race in U.S. history to be decided by the ranked-choice voting method.
AP Photo/Robert F. Bukaty

Mending a broken system

A mathematically interesting feature of Alaska’s 2022 special U.S. House election is that Begich beat both Peltola and Republican Sarah Palin in head-to-head contests – meaning that more people ranked Begich above Peltola than the other way around – but lost the ranked choice voting election to Peltola.

Critics seize on such cases as reasons to avoid ranked choice voting. But our work shows that these are statistical outliers, occurring fewer than 1% of the time.

Overall, our research shows that ranked choice voting elects candidates with broader support and greater democratic legitimacy than plurality. It therefore seems sensible that voting reform advocates continue to pursue this method as an alternative to plurality voting.

At a time when Americans are losing faith in democracy, voters cannot afford systems that hand victory to unrepresentative candidates and force them to play tactical games. The math is in, and the evidence is overwhelming: Plurality voting is broken. Ranked choice voting will not solve every democratic ailment, but it is a good step toward mending them.


Ismar Volić receives funding from Schwab Charitable.

Andy Schultz receives funding from Schwab Charitable. He is a registered Democrat.

David McCune receives funding from Schwab Charitable.

ref. Ranked choice voting outperforms the winner-take-all system used to elect nearly every US politician – https://theconversation.com/ranked-choice-voting-outperforms-the-winner-take-all-system-used-to-elect-nearly-every-us-politician-267515

Why do family companies even exist? They know how to ‘win without fighting’

Source: The Conversation – USA (2) – By Vitaliy Skorodziyevskiy, Assistant Professor of Management and Entrepreneurship, University of Louisville

When you hear the phrase “family business,” you might think of the backstabbing Roys of “Succession” or the dysfunctional Duttons of “Yellowstone.” But while TV’s family companies are entertaining, their real-life counterparts may be even more compelling.

Around the world, family businesses produce about two-thirds of all economic output and employ more than half of all workers. And they can be very profitable: The world’s 500 largest family businesses generated a collective US$8.8 trillion in 2024. That’s nearly twice the gross domestic product of Germany.

If you’re not steeped in family business research – and even if you are – their ubiquity might seem a little strange. After all, families can come with drama, conflict and long memories. That might not sound like the formula for an efficient company.

We are researchers who study family businesses, and we wanted to understand why there are so many of them in the first place. In our recent article published in the Journal of Management, we set out to understand this different kind of “why” – not just the purpose of family firms, but why they thrive around the world.

The usual answers don’t really explain it

The standard answer to “Why do family companies exist?” is straightforward: They allow owners to generate income and potentially create a legacy for future generations.

A related question is: “Why do entrepreneurs even want to involve their relatives in their new ventures?” Research suggests entrepreneurs do so because family members care and can help when resources are limited.

But that might not be unique to family businesses. All companies – whether run by a family or corporate executives – balance short-term profit and long-term goals. And all of them want reliable workers who are willing to pitch in.

So those answers don’t explain why family companies, specifically, are so common worldwide.

A different angle: Winning without fighting

For our study, we reviewed decades of research on family firms and concluded that family businesses are uniquely skilled at keeping competitors out of their market space – often without actually competing with them.

How? We think a quote from Sun-Tzu’s “The Art of War” captures the idea:

To fight and conquer in all your battles is not supreme excellence; supreme excellence consists in breaking the enemy’s resistance without fighting.

Family-owned businesses often do exactly this, which is why there are so many of them. Here’s how it works in practice:

Three key differences

Research on family businesses has shown that they differ from other types of companies in three key ways: the types of goals they pursue, the governance structures they establish, and the resources they have. Together, these three characteristics explain how family businesses may use their property rights to get an edge over their competitors.

The first is goals. Unlike other types of enterprises, family businesses prioritize noneconomic goals involving the reputation, legacy and well-being of the family – both now and in the future.

Of course, they still have to worry about making a profit. But their interest in family-centered goals can lead them to choose projects that may yield lower returns but still fulfill their noneconomic goals. These sorts of projects may not be attractive to other types of firms. As a result, family businesses may find themselves operating in spaces where there’s not much competition to start with.

For instance, take Corticeira Amorim, a family-run Portuguese company that dominates the global market for cork stoppers and other cork products. The cork industry is a classic narrow niche: There are only a handful of serious global competitors, and Amorim is widely described as the world’s largest cork processing group, with a sizable share of global wine and Champagne corks.

CEO Antonio Rios de Amorim discusses the history of his family business in this Business Insider video.

The second key factor is governance. Family members who work together often know each other well, care about each other and want the best for both the family and the firm, which may stay in the family’s possession for generations. This fact may reduce operating costs and the cost of contracting.

Why? When they make decisions, they don’t always need to hire a fancy, Harvey Specter-like lawyer from the show “Suits.” They can decide on the next move for the company while having dinner together. This significantly reduces the costs associated with decision-making. In other words, because they rely less on formal contracts and monitoring, family businesses can operate more cheaply.

Finally, family firms use resources like information and money differently. Since many established family businesses have been around for decades, relatives who work together accumulate information that’s hard to acquire and transfer, and might not even be useful elsewhere. Being a family member means not only doing business with relatives but also going through life together, acquiring a unique perspective on the family itself.

As a result, family businesses have lower transaction costs than other companies. Sometimes this shows up in very concrete ways. An uncle may invest money in the business and never ask for it back. Would that happen at a nonfamily business? Probably not. This dedication makes family members a special type of human asset that’s hard to replace.

Put simply, nonfamily businesses are unlikely to hire someone who cares as much about the company’s success as a deeply invested relative does. And because these relationships aren’t for sale on the open market, competitors can’t easily access them. That fact helps family businesses keep competitors at bay while essentially being themselves – which in turn explains why there are so many of them.

Family businesses are so common worldwide that there are several holidays celebrating them, including International Family Business Day on Nov. 25, U.S. National Mom and Pop Business Owners Day on March 29 and the United Nations’ Micro-, Small and Medium-Sized Enterprises Day on June 27. This holiday season, you might consider spreading a little extra cheer with the family-run retailers in your community.


The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Why do family companies even exist? They know how to ‘win without fighting’ – https://theconversation.com/why-do-family-companies-even-exist-they-know-how-to-win-without-fighting-266329

Google plans to power a new data center with fossil fuels, yet release almost no emissions – here’s how its carbon capture tech works

Source: The Conversation – USA (2) – By Ramesh Agarwal, Professor of Engineering, Washington University in St. Louis

Carbon capture and storage projects capture carbon dioxide emissions from industrial facilities and power plants and pipe them underground to geological formations. Nopphinan/iStock/Getty Images Plus

As AI data centers spring up across the country, their energy demand and resulting greenhouse gas emissions are raising concerns. With servers and energy-intensive cooling systems constantly running, these buildings can use anywhere from a few megawatts of power for a small data center to more than 100 megawatts for a hyperscale data center. To put that in perspective, the average large natural gas power plant built in the U.S. generates less than 1,000 megawatts.

When the power for these data centers comes from fossil fuels, they can become major sources of climate-warming emissions in the atmosphere – unless the power plants capture their greenhouse gases first and then lock them away.

Google recently entered into a unique corporate power purchase agreement to support the construction of a natural gas power plant in Illinois designed to do exactly that through carbon capture and storage.

So how does carbon capture and storage, or CCS, work for a project like this?

I am an engineer who wrote a 2024 book about various types of carbon storage. Here’s the short version of what you need to know.

How CCS works

When fossil fuels are burned to generate electricity, they release carbon dioxide, a powerful greenhouse gas that remains in the atmosphere for centuries. As these gases accumulate in the atmosphere, they act like a blanket, holding heat close to the Earth’s surface. Too high of a concentration heats up the Earth too much, setting off climate changes, including worsening heat waves, rising sea levels and intensifying storms.

Carbon capture and storage involves capturing carbon dioxide from power plants, industrial processes or even directly from the air and then transporting it, often through pipelines, to sites where it can be safely injected underground for permanent storage.

A snapshot of some of the ways carbon capture and storage works. The pipelines into the oil layer, in black, and the oil well illustrate enhanced oil recovery.
Congressional Budget Office, U.S. Federal Government

The carbon dioxide might be transported as a supercritical fluid – kept above its critical temperature and pressure, where it has properties of both a liquid and a gas – or dissolved in a liquid. Once injected deep underground, the carbon dioxide can become permanently trapped in the geologic structure, dissolve in brine or become mineralized, turning it to rock.

The goal of carbon storage is to ensure that carbon dioxide can be kept out of the atmosphere for a long time.

Types of underground carbon storage

There are several options for storing carbon dioxide underground.

Depleted oil and natural gas reservoirs have plentiful storage space and the added benefit that most are already mapped and their limits understood. They already held hydrocarbons in place for millions of years.

Carbon dioxide can also be injected into working oil or gas reservoirs to push out more of those fossil fuels while leaving most of the carbon dioxide behind. This method, known as enhanced oil and gas recovery, is the most common one used by carbon capture and storage projects in the U.S. today, and one reason CCS draws complaints from environmental groups.

Volcanic basalt rock and carbonate formations are considered good candidates for safe and long-term geological storage because they contain calcium and magnesium ions that interact with carbon dioxide, turning it into minerals. Iceland pioneered this method using its bedrock of volcanic basalt for carbon storage. Basalt also covers most of the oceanic crust, and scientists have been exploring the potential for sub-seafloor storage reservoirs.

How Iceland uses basalt to turn captured carbon dioxide into solid minerals.

In the U.S., a fourth option likely has the most potential for industrial carbon dioxide storage – deep saline aquifers, which is what Google plans to use. These widely distributed aquifers are porous and permeable sediment formations consisting of sandstone, limestone or dolostone. They’re filled with highly mineralized groundwater that cannot be used directly for drinking water but is very suitable for storing CO2.

Deep saline aquifers also have large storage capacities, ranging from about 1,000 to 20,000 gigatons. In comparison, the nation’s total carbon emissions from fossil fuels in 2024 were about 4.9 gigatons.

As of fall 2025, 21 industrial facilities across the U.S. used carbon capture and storage, including industries producing natural gas, fertilizer and biofuels, according to the Global CCS Institute’s 2025 report. Five of those use deep saline aquifers, and the rest involve enhanced oil or gas recovery. Eight more industrial carbon capture facilities were under construction.

Google’s plan is unique because it involves a power purchase agreement that makes building the power plant with carbon capture and storage possible.

Google’s deep saline aquifer storage plan

Google’s 400-megawatt natural gas power plant, to be built with Broadwing Energy, is designed to capture about 90% of the plant’s carbon dioxide emissions and pipe them underground for permanent storage in a deep saline aquifer in the nearby Mount Simon sandstone formation.

The Mount Simon sandstone formation is a huge saline aquifer that lies underneath most of Illinois, southwestern Indiana, southern Ohio and western Kentucky. It has a layer of highly porous and permeable sandstone that makes it an ideal candidate for carbon dioxide injection. To keep the carbon dioxide in a supercritical state, that layer needs to be at least half a mile (800 meters) deep.
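The half-mile figure follows from carbon dioxide’s critical point, about 7.38 megapascals and 31.1 degrees Celsius. A rough back-of-the-envelope check in Python, assuming a fresh-water hydrostatic gradient and a typical geothermal gradient of roughly 25 C per kilometer (actual basin conditions vary):

```python
# Sanity check: at ~800 m depth, do pressure and temperature
# exceed CO2's critical point, keeping it supercritical?
# Assumed values: fresh-water density and a typical geothermal
# gradient; real brines are denser and gradients vary by basin.

CRITICAL_P_MPA = 7.38   # CO2 critical pressure, megapascals
CRITICAL_T_C = 31.1     # CO2 critical temperature, Celsius

depth_m = 800
rho = 1000.0            # water density, kg/m^3
g = 9.81                # gravitational acceleration, m/s^2

# Atmospheric pressure plus the weight of the water column above
pressure_mpa = 0.1 + rho * g * depth_m / 1e6
# ~15 C average surface temperature plus ~25 C per km of depth
temperature_c = 15 + 25 * depth_m / 1000

print(round(pressure_mpa, 2), round(temperature_c, 1))  # prints "7.95 35.0"
assert pressure_mpa > CRITICAL_P_MPA and temperature_c > CRITICAL_T_C
```

Both numbers clear the critical point, which is why formations much shallower than half a mile are poor candidates for supercritical storage.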

A thick layer of Eau Claire shale sits above the Mount Simon formation, serving as the caprock that helps prevent stored carbon dioxide from escaping. Except for some small regions near the Mississippi River, Eau Claire shale is considerably thick – more than 300 feet (90 meters) – throughout most of the Illinois basin.

The estimated storage capacity of the Mount Simon formation ranges from 27 gigatons to 109 gigatons of carbon dioxide.

The Google project plans to use an existing injection well site that was part of the first large-scale carbon storage demonstration in the Mount Simon formation. Food producer Archer Daniels Midland began injecting carbon dioxide there from nearby corn processing plants in 2012.

Carbon capture and storage has had challenges as the technology developed over the years, including a pipeline rupture in 2020 that forced evacuations in Satartia, Mississippi, and caused several people to lose consciousness. After a recent leak deep underground at the Archer Daniels Midland site in Illinois, the Environmental Protection Agency in 2025 required the company to improve its monitoring. Stored carbon dioxide had migrated into an unapproved area, but no threat to water supplies was reported.

Why does CCS matter?

Data centers are expanding quickly, and utilities will have to build more power capacity to keep up. The artificial intelligence company OpenAI is urging the U.S. to build 100 gigawatts of new capacity every year – doubling its current rate.

Many energy experts, including the International Energy Agency, believe carbon capture and storage will be necessary to slow climate change and keep global temperatures from reaching dangerous levels as energy demand rises.


Ramesh Agarwal does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Google plans to power a new data center with fossil fuels, yet release almost no emissions – here’s how its carbon capture tech works – https://theconversation.com/google-plans-to-power-a-new-data-center-with-fossil-fuels-yet-release-almost-no-emissions-heres-how-its-carbon-capture-tech-works-270425

Sugar starts corroding your teeth within seconds – here’s how to protect your pearly whites from decay

Source: The Conversation – USA (3) – By José Lemos, Professor of Oral Biology, University of Florida

Sugar feeds the colonies of bacteria living in your mouth and surrounding your teeth. Andrii Zastrozhnov/iStock via Getty Images Plus

Between Halloween candy, Thanksgiving pies and holiday cookies, the end of the year is often packed with opportunities to consume sugar. But what happens in your mouth during those first minutes and hours after eating those sweets?

While you’re likely aware that eating too much sugar can cause cavities – that is, damage to your teeth – you might be less familiar with how bacteria use those sugars to build a sticky film called plaque on your teeth as soon as you take that first sweet bite.

We are a team of microbiologists who study how oral bacteria cause tooth decay. Here’s what happens in your mouth the moment sugar passes your lips – and how to protect your teeth.

An acid plunge

Within seconds of your first bite or sip of something sugary, the bacteria that make the human mouth their home start using those dietary sugars to grow and multiply. In the process of converting those sugars into energy, these bacteria produce large quantities of acids. As a result, just a minute or two after consuming high-sugar foods or drinks, the acidity of your mouth rises to levels that can dissolve enamel – that is, the minerals making up the surface of your teeth.

Sweets present a delicious assault on your teeth.
Nazar Abbas Photography/Moment via Getty Images

Luckily, saliva comes to the rescue before these acids can start corroding the surface of your teeth. It washes away excess sugars while also neutralizing the acids in your mouth.

Your mouth is also home to other bacteria that compete with cavity-causing bacteria for resources and space, fighting them off and restoring the acidity of your mouth to levels that aren’t harmful to teeth.

However, frequent consumption of sweets and sugary drinks can overfeed harmful bacteria in a way that neither saliva nor helpful bacteria can overcome.

An assault on enamel

Cavity-causing bacteria also use dietary sugars to make a sticky layer called a biofilm that acts like a fortress attached to the teeth. Biofilms are very hard to remove without mechanical force, such as from routinely brushing your teeth or cleaning at the dentist’s office.

Microbes form vast communities called biofilms.

In addition, biofilms form a physical barrier that restricts what crosses into them, so saliva can no longer do its job of neutralizing acid as well. To make matters worse, while cavity-causing bacteria are able to survive in these acidic conditions, the good bacteria fighting them cannot.

In these protected fortresses, cavity-causing bacteria are able to keep multiplying, keeping the acidity level of the mouth elevated and leading to further loss of tooth minerals until a cavity becomes visible or painful.

How to protect your (sweet) teeth

Before eating your next sugary treat, there are a few measures you can take to help keep the cavity-forming bacteria at bay and your teeth safe.

First, try to reduce the amount of sugar you eat and consume your sugary food or drink during a meal. This way, the increased saliva production that occurs while eating can help wash away sugars and neutralize acids in your mouth.

In addition, avoid snacking on sweets and sugary drinks throughout the day, especially those containing table sugar or high-fructose corn syrup. Continually exposing your mouth to sugar will keep its acidity level higher for longer periods of time.

Finally, remember to brush regularly, especially after meals, to remove as much dental plaque as possible. Daily flossing also helps remove plaque from areas that your toothbrush cannot reach.

The Conversation

José Lemos receives funding from NIH and Vaxcyte.

Jacqueline Abranches receives funding from NIH/NIDCR and Vaxcyte.

ref. Sugar starts corroding your teeth within seconds – here’s how to protect your pearly whites from decay – https://theconversation.com/sugar-starts-corroding-your-teeth-within-seconds-heres-how-to-protect-your-pearly-whites-from-decay-269352

We are hardwired to sing − and it’s good for us, too

Source: The Conversation – USA (3) – By Elinor Harrison, Faculty Affiliate, Philosophy-Neuroscience-Psychology, Washington University in St. Louis

Gospel choir director Clyde Lawrence performs at the New Orleans Jazz & Heritage Festival on April 25, 2025.
AP Photo/Gerald Herbert

On the first Sunday after being named leader of the Catholic Church in May 2025, Pope Leo XIV stood on the balcony of St. Peter’s Basilica in Rome and addressed the tens of thousands of people gathered. Invoking tradition, he led the people in noontime prayer. But rather than reciting it, as his predecessors generally did, he sang.

In chanting the traditional Regina Caeli, the pope inspired what some have called a rebirth of Gregorian chant, a type of monophonic and unaccompanied singing done in Latin that dates back more than a thousand years.

The Vatican has been at the forefront of that push, launching an online initiative to teach Gregorian chant through short educational tutorials called “Let’s Sing with the Pope.” The stated goals of the initiative are to give Catholics worldwide an opportunity to “participate actively in the liturgy” and to “make the rich heritage of Gregorian chant accessible to all.”

These goals resonated with me. As a performing artist and scientist of human movement, I spent the past decade developing therapeutic techniques involving singing and dancing to help people with neurological disorders. Much like the pope’s initiative, these arts-based therapies require active participation, promote connection, and are accessible to anyone. Indeed, not only is singing a deeply ingrained human cultural activity, research increasingly shows how good it is for us.

The same old song and dance

For 15 years, I worked as a professional dancer and singer. In the course of that career, I became convinced that creating art through movement and song was integral to my well-being. Eventually, I decided to shift gears and study the science underpinning my longtime passion by looking at the benefits of dance for people with Parkinson’s disease.

The neurological condition, which affects over 10 million people worldwide, is caused by neuron loss in an area of the brain that is involved in movement and rhythmic processing – the basal ganglia. The disease causes a range of debilitating motor impairments, including walking instability.

A woman sings as part of a chorus for Parkinson’s patients and their care partners at a YMCA in Hanover, Mass., on Feb. 13, 2019.
David L. Ryan/The Boston Globe via Getty Images

Early on in my training, I suggested that people with Parkinson’s could improve the rhythm of their steps if they sang while they walked. Even as we began publishing our initial feasibility studies, people remained skeptical. Wouldn’t it be too hard for people with motor impairment to do two things at once?

But my own experience of singing and dancing simultaneously since I was a child suggested it could be innate. While Broadway performers do this at an extremely high level of artistry, singing and dancing are not limited to professionals. We teach children nursery rhymes with gestures; we spontaneously nod our heads to a favorite song; we sway to the beat while singing at a baseball game. Although people with Parkinson’s typically struggle to do two tasks at once, perhaps singing and moving were such natural activities that they could reinforce each other rather than distract.

A scientific case for song

Humans are, in effect, hardwired to sing and dance, and we likely evolved to do so. In every known culture, evidence exists of music, singing or chanting. The oldest discovered musical instruments are ivory and bone flutes dating back over 40,000 years. Before people played music, they likely sang. The discovery of a 60,000-year-old hyoid bone shaped like a modern human’s suggests our Neanderthal relatives could sing.

In “The Descent of Man,” Charles Darwin speculated that a musical protolanguage, analogous to birdsong, was driven by sexual selection. Whatever the reason, singing and chanting have been integral parts of spiritual, cultural and healing practices around the world for thousands of years. Chanting practices, in which repetitive sounds are used to induce altered states of consciousness and connect with the spiritual realm, are ancient and diverse in their roots.

Though the evolutionary reasons remain disputed, modern science is increasingly validating what many traditions have long held: Singing and chanting can have profound benefits to physical, mental and social health, with both immediate and long-term effects.

Physically, the act of producing sound can strengthen the lungs and diaphragm and increase the amount of oxygen in the blood. Singing can also lower heart rate and blood pressure, reducing the risk of cardiovascular diseases.

Vocalizing can even improve your immune system, as active music participation can increase levels of immunoglobulin A, one of the body’s key antibodies to stave off illness.

Singing also improves mood and reduces stress.

Studies have shown that singing lowers cortisol levels, the primary stress hormone, in healthy adults and people with cancer or neurologic disorders. Singing may also rebalance autonomic nervous system activity by stimulating the vagus nerve and improving the body’s ability to respond to environmental stresses. Perhaps this is why singing has been called “the world’s most accessible stress reliever.”

A woman performs a Gregorian chant on Christmas Eve in 2023 in Spain.
Isaac Buj/Europa Press via Getty Images

Moreover, chanting may make you aware of your inner states while connecting to something larger. Repetitive chanting, as is common in rosary recitation and yogic mantras, can induce a meditative state, inducing mindfulness and altered states of consciousness. Neuroimaging studies show that chanting activates brainwaves associated with suspension of self-oriented and stress-related thoughts.

Singing as community

Singing alone is one thing, but singing with others brings about a host of other benefits, as anyone who has sung in a choir can likely attest.

Group singing provides a mood boost and improves overall well-being. Increased levels of neurotransmitters such as dopamine, serotonin and oxytocin during singing may promote feelings of social connection and bonding.

When people sing in unison, they synchronize not just their breath but also their heart rates. Heart rate variability, a measure of the body’s adaptability to stress, also improves during group singing, whether you’re an expert or a novice.

In my own research, singing has proven useful in yet another way: as a cue for movement. Matching footfalls to one’s own singing is an effective tool for improving walking, more effective than passively listening to music. Seemingly, active vocalization requires a level of engagement, attention and effort that can translate into improved motor patterns. For people with Parkinson’s, for example, this simple activity can help them avoid a fall. We have shown that people with the disease, in spite of neural degeneration, activate brain regions similar to those of healthy controls. And it works even when you sing in your head.

Whether you choose to sing with the pope or not, you don’t need a mellifluous voice like his to raise your voice in song. You can sing in the shower. Join a choir. Chant that “om” at the end of yoga class. Releasing your voice might be easier than you think.

And, besides, it’s good for you.

The Conversation

Elinor Harrison received funding from the National Institutes of Health, the National Endowment for the Arts and the Grammy Museum Foundation. She is affiliated with the International Association of Dance Medicine and Science and the Society for Music Perception and Cognition.

ref. We are hardwired to sing − and it’s good for us, too – https://theconversation.com/we-are-hardwired-to-sing-and-its-good-for-us-too-262861

DNA from soil could soon reveal who lived in ice age caves

Source: The Conversation – UK – By Gerlinde Bigga, Scientific Coordinator of the Leibniz Science Campus "Geogenomic Archaeology Campus Tübingen", University of Tübingen

The team at GACT has been analysing sediments from Hohle Fels cave in Germany.

The last two decades have seen a revolution in scientists’ ability to reconstruct the past. This has been made possible through technological advances in the way DNA is extracted from ancient bones and analysed.

These advances have revealed that Neanderthals and modern humans interbred – something that wasn’t previously thought to have happened. It has allowed researchers to disentangle the various migrations that shaped modern people. It has also allowed teams to sequence the genomes of extinct animals, such as the mammoth, and extinct agents of disease, such as defunct strains of plague.

While much of this work has been carried out by analysing the physical remains of humans or animals, there is another way to obtain ancient DNA from the environment. Researchers can now extract and sequence DNA (determine the order of “letters” in the molecule) directly from cave sediments rather than relying on bones. This is transforming the field, known as palaeogenetics.






Caves can preserve tens of thousands of years of genetic history, providing ideal archives for studying long-term human–ecosystem interactions. The deposits beneath our feet become biological time capsules.

It is something we are exploring here at the Geogenomic Archaeology Campus Tübingen (GACT) in Germany. Analysing DNA from cave sediments allows us to reconstruct who lived in ice age Europe, how ecosystems changed and what role humans played. For example, did modern humans and Neanderthals overlap in the same caves? It’s also possible to obtain genetic material from faeces left in caves. At the moment we are analysing DNA from the droppings of a cave hyena that lived in Europe around 40,000 years ago.

The oldest sediment DNA discovered so far comes from Greenland and is two million years old.

Palaeogenetics has come a long way since the first genome of an extinct animal, the quagga, a close relative of modern zebras, was sequenced in 1984. Over the past two decades, next-generation genetic sequencing machines, laboratory robotics and bioinformatics (the ability to analyse large, complex biological datasets) have turned ancient DNA from a fragile curiosity into a high-throughput scientific tool.

The sediment samples from Hohle Fels are divided up for different analysis methods. Some go to the clean room, some to the geochemical laboratory.

Today, sequencing machines can decode up to a hundred million times more DNA than their early predecessors. Where the first human genome took over a decade to complete, modern laboratories can now sequence hundreds of full human genomes in a single day.

In 2022, the Nobel prize in physiology or medicine was awarded to Svante Pääbo, a leading light in this field. It highlighted the global significance of this research. Ancient DNA now regularly makes headlines, from attempts to recreate mammoth-like elephants, to tracing hundreds of thousands of years of human presence in parts of the world. Crucially, advances in robotics and computing have allowed us to recover DNA from sediments as well as bones.

GACT is a growing research network based in Tübingen, Germany, where three institutions collaborate across disciplines to establish new methods for finding DNA in sediments. Archaeologists, geoscientists, bioinformaticians, microbiologists and ancient-DNA specialists combine their expertise to uncover insights that no single field could achieve alone — a collaboration in which the whole genuinely becomes greater than the sum of its parts.

The network extends well beyond Germany. International partners enable fieldwork in archaeological cave sites and natural caves all over the world. This summer, for example, the team investigated cave sites in Serbia, collecting several hundred sediment samples for ancient DNA and related ecological analyses. Future work is planned in South Africa and the western United States to test the limits of ancient DNA preservation in sediments from different environments and time periods.

Work underway at a cave site in Serbia.

A needle in a haystack

Recovering DNA from sediments sounds simple: take a scoop, extract, sequence. In reality, it is far more complex. The molecules are scarce, degraded and fragmented, and mixed with modern contamination from cave visitors and wildlife. Detecting authentic ice age molecules relies on subtle chemical damage patterns to the DNA itself, ultra-clean laboratories, robotic extraction, and specialised bioinformatics. Every positive identification is a small triumph, revealing patterns invisible to conventional archaeology.

Much of GACT’s work takes place in the caves of the Swabian Jura within Unesco World Heritage sites such as Hohle Fels, home to the world’s oldest musical instruments and figurative art. Neanderthals and Homo sapiens left behind stone artefacts, bones, ivory and sediments that accumulated over tens of millennia. Caves are natural DNA archives, where stable conditions preserve fragile biomolecules, enabling researchers to build up a genetic history of ice age Europe.

One of the most exciting aspects of sediment DNA research is its ability to detect species long gone, even when no bones or artefacts remain. A particular focus lies on humans: who lived in the cave, and when? How did modern humans and Neanderthals use the caves, and, as mentioned, were they there at the same time? Did cave bears and humans compete for shelter and resources? And what might the microbes that lived alongside them reveal about the impact humans had on past ecosystems?

Sediment DNA also traces life outside the cave. Predators dragged prey into sheltered chambers, and humans left waste behind. By following changes in human, animal and microbial DNA over time, researchers can examine ancient extinctions and ecosystem shifts, offering insights relevant to today’s biodiversity crisis.

The work is ambitious: using sedimentary DNA to reconstruct ice age ecosystems and to understand the ecological consequences of human presence. Only two years into GACT, every dataset generates new questions. Every cave layer adds another twist to the story.

With hundreds of samples now being processed, major discoveries lie ahead. Researchers expect soon to detect the first cave bear genomes, the earliest human traces, and complex microbial communities that once thrived in darkness. Will the sediments reveal all their secrets? Time will tell – but the prospects are exhilarating.

The Conversation

Gerlinde Bigga does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. DNA from soil could soon reveal who lived in ice age caves – https://theconversation.com/dna-from-soil-could-soon-reveal-who-lived-in-ice-age-caves-270318

Google is relying on its own chips for its AI system Gemini. Here’s why that’s a seismic change for the industry

Source: The Conversation – UK – By Alaa Mohasseb, Senior Lecturer in Artificial Intelligence and Machine Learning, University of Portsmouth

For many years, the US company Nvidia shaped the foundations of modern artificial intelligence. Its graphics processing units (GPUs) are a specialised type of computer chip originally designed to handle the processing demands of graphics and animation. But they’re also great for the repetitive calculations required by AI systems.

Thus, these chips have powered the rapid rise of large language models – the technology behind AI chatbots – and they have become the familiar engine behind almost every major AI breakthrough.

This hardware sat quietly in the background while most of the attention was focused on algorithms and data. Google’s decision to train Gemini on its own chips, called tensor processing units (TPUs), changes that picture. It invites the industry to look directly at the machines behind the models and to reconsider assumptions that long seemed fixed.

This moment matters because the scale of AI models has begun to expose the limits of general purpose chips. As models grow, the demands placed on processing systems increase to levels that make hidden inefficiencies impossible to ignore.

Google’s reliance on TPUs reveals an industry that is starting to understand that hardware choices are not simply technical preferences but strategic commitments that determine who can lead the next wave of AI development.

Google’s Gemini relies on cloud systems that simplify the challenging task of coordinating devices during large-scale training (improvement) of AI models.

The design of these different chips reflects a fundamental difference in intention. Nvidia’s GPUs are general purpose and flexible enough to run a wide range of tasks. TPUs were created for the narrow mathematical operations at the heart of AI models.

Independent comparisons highlight that TPU v5p pods can outperform high-end Nvidia systems on workloads tuned for Google’s software ecosystem. When the chip architecture, model structure and software stack align so closely, improvements in speed and efficiency become natural rather than forced.

These performance characteristics also reshape how quickly teams can experiment. When hardware works in concert with the models it is designed to train, iteration becomes faster and more scalable. This matters because the ability to test ideas quickly often determines which organisations innovate first.

These technical gains are only one part of the story. Training cutting-edge AI systems is expensive and requires enormous computing resources. Organisations that rely only on GPUs face high costs and increasing competition for supply. By developing and depending on its own hardware, Google gains more control over pricing, availability and long-term strategy.

Analysts have noted that this internal approach positions Google with lower operational costs while reducing dependence on external suppliers for chips. A particularly notable development came from Meta as it explored a multi-billion dollar agreement to use TPU capacity.

When one of the largest consumers of GPUs evaluates a shift toward custom accelerators, it signals more than curiosity. It suggests growing recognition that relying on a single supplier may no longer be the safest or most efficient strategy in an industry where hardware availability shapes competitiveness.

These moves also raise questions about how cloud providers will position themselves. If TPUs become more widely available through Google’s cloud services, the rest of the market may gain access to hardware that was once considered proprietary. The ripple effects could reshape the economics of AI training far beyond Google’s internal research.

What this means for Nvidia

Financial markets reacted quickly to the news. Nvidia’s stock fell as investors weighed the potential for cloud providers to split their hardware needs across more than one supplier. Even if TPUs do not replace GPUs entirely, their presence introduces competition that may influence pricing and development timelines.

The existence of credible alternatives pressures Nvidia to move faster, refine its offerings and appeal to customers who now see more than one viable path forward.

Even so, Nvidia retains a strong position. Many organisations depend heavily on CUDA (a computing platform and programming model developed by Nvidia) and the large ecosystem of tools and workflows built around it.

Moving away from that environment requires significant engineering effort and may not be feasible for many teams. GPUs continue to offer unmatched flexibility for diverse workloads and will remain essential in many contexts.

However, the conversation around hardware has begun to shift. Companies building cutting-edge AI models are increasingly interested in specialised chips tuned to their exact needs. As models grow larger and more complex, organisations want greater control over the systems that support them. The idea that one chip family can meet every requirement is becoming harder to justify.

Google’s commitment to TPUs for Gemini illustrates this shift clearly. It shows that custom chips can train world-class AI models and that hardware purpose-built for AI is becoming central to future progress.

It also makes visible the growing diversification of AI infrastructure. Nvidia remains dominant, but it now shares the field with alternatives that are increasingly capable of shaping the direction of AI development.

The foundations of AI are becoming more varied and more competitive. Performance gains will come not only from new model architectures but from the hardware designed to support them.

Google’s TPU strategy marks the beginning of a new phase in which the path forward will be defined by a wider range of chips and by the organisations willing to rethink the assumptions that once held the industry together.

The Conversation

Alaa Mohasseb does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Google is relying on its own chips for its AI system Gemini. Here’s why that’s a seismic change for the industry – https://theconversation.com/google-is-relying-on-its-own-chips-for-its-ai-system-gemini-heres-why-thats-a-seismic-change-for-the-industry-270818

Before trips to Mars, we need better protection from cosmic rays

Source: The Conversation – UK – By Zahida Sultanova, Post Doctoral Research Fellow, School of Biological Sciences, University of East Anglia

Frame Stock Footage/Shutterstock.com

The first step on the Moon was one of humanity’s most exciting accomplishments. Now scientists are planning return trips – and dreaming of Mars beyond.

Next year, Nasa’s Artemis II mission will send four astronauts to fly around the Moon to test the spacecraft before future landings. The following year, two astronauts are expected to explore the surface of the Moon for a week as part of Nasa’s Artemis III mission.

And finally, the trip to Mars is planned for the 2030s. But there’s an invisible threat standing in the way: cosmic rays.

When we look at the night sky, we see stars and nearby planets. If we’re lucky enough to live somewhere without light pollution, we might catch meteors sliding across the sky. But cosmic rays – consisting of protons, helium nuclei, heavy ions and electrons – remain hidden. They stream in from exploding stars (galactic cosmic rays) and our very own sun (solar particle events).

They don’t discriminate. These particles carry so much energy and move so fast that they can knock electrons off atoms and disrupt molecular structures of any material. That way, they can damage everything in their path, machines and humans alike.

The Earth’s magnetic field and atmosphere shield us from most of this danger. But outside Earth’s protection, space travellers will be routinely exposed. In deep space, cosmic rays can break DNA strands, disrupt proteins and damage other cellular components, increasing the risk of serious diseases such as cancer.

The research challenge is straightforward: measure how cosmic rays affect living organisms, then design strategies to reduce their damage.

Ideally, scientists would study these effects by sending tissues, organoids (artificially made organ-like structures) or lab animals (such as mice) directly into space. That does happen, but it’s expensive and difficult. A more practical approach is to simulate cosmic radiation on Earth using particle accelerators.

Cosmic ray simulators in the US and Germany expose tissues, plants and animals to different components of cosmic rays in sequence. A new international accelerator facility being built in Germany will reach even higher energies, matching levels found in space that have never been tested on living organisms.

But these simulations aren’t fully realistic. Many experiments deliver the entire mission dose in a single treatment. This is like using a tsunami to study the effects of rain.

In real space, cosmic rays arrive as a mixture of high-energy particles hitting simultaneously, not one type at a time. My colleagues and I have suggested building a multi-branch accelerator that could fire several tuneable particle beams at once, recreating the mixed radiation of deep space under controlled laboratory conditions. For now, though, this kind of facility exists only as a proposal.

Beyond better testing, we need better protection. Physical shields seem like the obvious first defence. Hydrogen-rich materials such as polyethylene and water-absorbing hydrogels can slow charged particles. Although they are used, or planned to be used, as spacecraft materials, their benefits are limited.

Galactic cosmic rays in particular, the ones that arrive from distant exploding stars, are so energetic that they can penetrate physical shielding. They can even generate secondary radiation that increases exposure. So, effective protection using physical shields alone remains a major challenge.

Nature’s armour

That’s why scientists are exploring biological strategies. One approach is to use antioxidants. These molecules can protect DNA from harmful chemicals that are produced when cosmic rays hit living cells.

Supplementing with CDDO-EA, a synthetic antioxidant, reduces cognitive damage caused by simulated cosmic radiation in female mice. In the study, mice exposed to simulated cosmic radiation learned a simple task more slowly compared to unexposed mice. However, mice that received the synthetic antioxidant performed normally despite being exposed to simulated cosmic radiation.

Another approach involves learning from organisms with extraordinary abilities. Hibernating animals become more resistant to radiation during hibernation. The mechanisms by which hibernation protects against radiation are not yet fully understood. Still, inducing hibernation-like conditions in non-hibernating animals is possible and can make them more radioresistant.

Tardigrades – microscopic creatures also known as water bears – are also extremely radioresistant, especially when dehydrated. Although we can’t hibernate or dehydrate astronauts, the strategies these organisms use to protect cellular components might help us preserve other organisms during long space journeys.

Microbes, seeds, simple food sources and even animals that could later become our companions might be kept in a protected state for a while. Under calmer conditions, they could then be brought back to full activity. Therefore, understanding and harnessing these protective mechanisms could prove crucial for future space journeys.

A third strategy focuses on supporting organisms’ own stress responses. Stressors on Earth, such as starvation or heat, have driven organisms to evolve cellular defences that protect DNA and other cellular components. In a recent preprint (a paper that is yet to be peer reviewed), my colleague and I suggest that activating these mechanisms through specific diets or drugs may offer additional protection in space.

Physical shields alone won’t be enough. But with biological strategies, more experiments in space and on Earth, and the construction of new dedicated accelerator complexes, humanity is getting closer to making routine space travel a reality. At the current pace, we are probably decades away from fully solving cosmic-ray protection. Greater investment in space radiation research could shorten that timeline.

The ultimate goal is to journey beyond Earth’s protective bubble without the constant threat of invisible, high-energy particles damaging our bodies and our spacecraft.

The Conversation

Dr. Zahida Sultanova works for the University of East Anglia and is funded by the Leverhulme Trust. She is a member of European Society of Evolutionary Biology (ESEB) and Ecology and Evolutionary Biology Society of Turkey (EkoEvo).

ref. Before trips to Mars, we need better protection from cosmic rays – https://theconversation.com/before-trips-to-mars-we-need-better-protection-from-cosmic-rays-268934

The Beatles’ movie Help! featured crude racial stereotypes – but it shouldn’t be hidden away

Source: The Conversation – UK – By Philip Murphy, Director of History & Policy at the Institute of Historical Research and Professor of British and Commonwealth History, School of Advanced Study, University of London

I sometimes think that my teenage fascination with the Beatles is what drew me to becoming a professional historian. Piecing together what I could find out about them in the years before the internet was a sort of gateway to the history of Britain and the world in the decade in which I was born – Carnaby Street, the Vietnam War, Richard Nixon and counter-cultural figures like Timothy Leary.

The original Beatles Anthology, released in 1995 as three albums and a documentary series, was undoubtedly a treat for Beatles fans. Recently an updated version of the Anthology arrived in the shops, including a fourth CD of unreleased tracks.

It is always thrilling to hear iconic tracks at an earlier stage of their gestation, and this is essentially what the newly released additional material offers. However, the true obsessives among us have heard much of the Anthology material on bootleg collections. What remains as hard to come by as ever are some of the films.

At a time when a fan had to be grateful for what they were given, I can still remember the excitement of learning that the BBC was planning to show all the band’s films over Christmas 1979. If we are indeed in the “barrel-scraping” phase of Beatles commemoration, as one critic dubbed the new Anthology, it is surprising how difficult it now is to access some of those films.

In the last ten years some work has been done to make the films more accessible. The band’s first movie, A Hard Day’s Night (1964), directed by Richard Lester, was re-released in cinemas in 2014 to mark its 50th anniversary. It is currently available on the BFI’s streaming service.

Their swan-song film, Let it Be (1970), was remastered for Disney+ in 2024 by Peter Jackson. Jackson had previously spliced together hours of unused footage from director Michael Lindsay-Hogg’s original recording sessions to create the 2021 documentary Get Back.

Some films, though, are yet to be remastered. The band’s well-regarded 1968 animated movie Yellow Submarine is still awaiting an authorised re-release, although no doubt this will come. Their dismal 1967 BBC Christmas special Magical Mystery Tour was notoriously dead on arrival, and if there are commercial reasons for resuscitating it, there certainly aren’t any artistic ones.

But what about Help! (1965), the band’s second outing with Richard Lester? The film’s madcap plot involves the group being chased around the globe by the comically inept members of a religious cult keen to recover a sacred ring from the finger of the band’s drummer, Ringo Starr.

It is a far more accomplished piece of filmmaking than Magical Mystery Tour. Indeed, it was in many ways ground-breaking for pioneering the music video format and rock musicals. But the last DVD release of the movie was a 2007 two-disc set.

Read more: Anthology 4 shows there’s still more to discover about the Beatles


The reason why the film has missed out on more recent celebratory repackaging isn’t difficult to surmise: it features three familiar European character actors – Leo McKern, Eleanor Bron and John Bluthal – in “brown-face” as Indian cult members, whose quest for the ring is to enable a human sacrifice. No mention is made of any of this in the documentary series that accompanied the Anthology recordings, which has also been re-released.

As the 1960s progressed, these sorts of crude orientalist stereotypes faced parody and criticism. But apologists for the movie would struggle to demonstrate that Help! is doing anything other than reinforcing those attitudes. Some contemporary critics have simply labelled it as racist.

I wonder if an attempt could be made to salvage Help! by re-releasing it with its own documentary package exploring the historical context of the film. If the choice is between contextualisation and simply allowing the movie to languish in the far corners of eBay to avoid offending global consumers, then I think we should go for the former.

While it would be difficult to make Help! genuinely palatable to contemporary viewers, it could be used as the starting point for a fascinating exploration of the ways in which Britain in the Swinging Sixties viewed its colonial past.

Some of the era’s more radical writers and filmmakers, like Edward Bond and Tony Richardson, were keen to question the values of their parents’ generation. But as Help! demonstrates, imperial assumptions of white racial superiority continued to permeate popular culture, not least in the area of comedy.

If the Beatles of 1965 had passively accepted the plot of Help!, the Beatles of the late 60s would almost certainly have baulked at it. George Harrison famously embraced Indian music, culture and religion. John Lennon would subsequently feature in another Richard Lester film, How I Won the War (1967), which mocked British jingoism and its rigid class system.

A year later, Lennon faced racist abuse from the British press and fans when he left his wife for the Japanese artist Yoko Ono. And the track Commonwealth, which surfaced in the 2021 documentary Get Back, is an improvisation satirising Enoch Powell’s Rivers of Blood speech, which was held responsible for inspiring a spate of racist attacks.

The Beatles were on a journey, and Help! deserves to be seen and discussed as part of that journey, rather than being hidden away.


The Conversation

Philip Murphy has received funding from the AHRC and is a member of the European Movement UK.

ref. The Beatles’ movie Help! featured crude racial stereotypes – but it shouldn’t be hidden away – https://theconversation.com/the-beatles-movie-help-featured-crude-racial-stereotypes-but-it-shouldnt-be-hidden-away-270526