How Dracula became a red-hot lover

Source: The Conversation – USA (2) – By Stanley Stepanic, Assistant Professor of Slavic Languages and Literatures, University of Virginia

In Luc Besson’s ‘Dracula,’ the titular character is a hopeless romantic.
Vertical

The Lord of Vampires. The King of the Undead. The Ultimate Lover. All refer to the immortal Count Dracula, who originally appeared in Bram Stoker’s 1897 novel.

Yet the character’s fame has sprung more from his 200-plus cinematic resurrections, beginning with “Dracula’s Death” in 1921 and continuing, most recently, with Luc Besson’s “Dracula,” which premiered in the U.S. in February 2026.

Besson’s rendition has received particular attention for its focus on personal passion. Originally titled “Dracula: A Love Tale,” the film features a protagonist who is not simply a monster, but a lover. The New York Times called the movie “extravagantly silly” and described actor Caleb Landry Jones’ performance of the classic monster as “deliciously operatic: less villain, more virtuoso in love.”

Meanwhile, in London, Dracula as lover also features as a theme in Cynthia Erivo’s new West End production, in which she plays the Count and 22 other characters. A smaller, recent production out of Washington, D.C., titled “Dracula: A Comedy of Terrors” presents the Count similarly, though with a hilariously deviant LGBTQ+ bite.

In other words, Dracula has come a long way from his days as a lecherous, old creep, a shift that can be attributed, in part, to evolving attitudes on love, gender and sexuality.

‘Even his breath was rank’

When Stoker first published “Dracula,” the character appeared at the end of a long line of literary vampires, from Lord Ruthven in John Polidori’s “The Vampyre” (1819) to Sir Francis Varney in “Varney the Vampire” (1845-1847).

A bald, skinny, elderly vampire with hollow eyes.
In the 1922 German film ‘Nosferatu: A Symphony of Horror,’ the vampire, Count Orlok, appears like his repulsive and predatory literary predecessors.
Frederic Lewis/Hulton Archive via Getty Images

These vampires were all decrepit, revolting and predatory old men, and Stoker’s Count Dracula was no different. In the novel, one character notes Dracula’s “coarse” hands, the “extraordinary pallor” of his skin and his “extremely pointed” ears; his hair grew “scantily” around his “lofty domed forehead.” Even his “breath was rank.”

Another character describes Dracula as possessing “not a good face,” adding that it was “hard, and cruel.”

The first surviving feature-length film adaptation of “Dracula” was the 1922 German film “Nosferatu: A Symphony of Horror,” which cribs the plot and characters from Stoker’s novel. In it, Count Orlok – essentially a bootleg version of Dracula – looks ratlike, emaciated and pallid.

Seduction game

Little about Stoker’s Dracula or Count Orlok screamed “lover,” though there’s arguably an implicit sexuality in the way they stalk and attack their victims.

Instead, Dracula gained his “lover” label from later appearances on screen.

The earliest example appears in the 1944 film “House of Frankenstein,” where Rita (Anne Gwynne) is initially unnerved by Dracula’s presence. Later, however, she finds herself “no longer afraid” after he places a ring on her index finger, one that magically adjusts to fit it perfectly.

At the end of this scene, as she longingly looks into his eyes, he announces he will come for her the next day, as if it were all a budding tryst.

Count Dracula is more handsome Lothario than old lech in ‘House of Frankenstein.’

The evolution of Dracula’s character mirrored changes in more general perceptions of gender, sexuality and violence that occurred after World War II, when popular culture started to chip away at the centrality of the nuclear family. As books, films and TV shows explored themes like lust, infidelity, same-sex relationships and divorce, images of vampires became more complex.

In the 1958 film “Dracula,” for example – titled “Horror of Dracula” in the U.S. – Dracula (Christopher Lee) is a predator who breaks into the homes of married women.

Yet there’s also a hint of romance. In one scene, he assaults Mina Holmwood (Melissa Stribling). But Mina eventually appears to give in, and they share a brief, passionate kiss. The British Board of Film Classification even censored the scene, seeing it as a step too far in a film already replete with sexual overtones.

Director Terence Fisher later recalled telling Stribling to depict her character as though she “had one whale of a sexual night, the one of your whole sexual experience. Give me that in your face!”

Lover or monster?

By the 1970s, sexuality had become an even more pronounced theme in vampire-related media, mirroring broader cultural changes in views of human sexuality.

Comic books such as “Vampirella” presented the vampire as a hypersexualized, feminine, erotic symbol of power, while films such as “The Vampire Lovers” explored themes like lesbianism, though not in a way that was entirely explicit.

In the film “Count Dracula’s Great Love” (1973), Dracula falls head over heels for a young girl named Karen, who ends up rejecting his advances. Near the end of the film, the lovesick vampire bemoans, “For the first time, love brings a finish to the life of Dracula,” before driving a stake into his heart with his own hands.

Shortly thereafter, a made-for-TV “Dracula” featured the Count’s search for his dead wife.

A woman with black hair and a red dress passionately kisses a man with long black hair.
Winona Ryder and Gary Oldman share a kiss in a scene from the 1992 film ‘Bram Stoker’s Dracula.’
Columbia Pictures/Getty Images

The “search for a dead lover” would become a central theme in future films. For example, in Francis Ford Coppola’s “Bram Stoker’s Dracula” (1992), viewers learn that Dracula leaves Transylvania for England to pursue a reincarnation of his dead wife.

This yearning was a borrowed concept. In the Gothic soap opera “Dark Shadows” (1966-1971), the character Barnabas Collins (Jonathan Frid) tries to replicate his romance with his long-dead lover, Josette, by attempting to supernaturally control the living body of a girl named Maggie Evans (Kathryn Leigh Scott) so that she mimics Josette.

The concept of a vampire pining for a lost love – especially one from a lost era – marked a significant evolution in vampire media.

In the 1970s comic book series “The Tomb of Dracula,” the Count has a human wife named Domini; through magical means, he’s even able to conceive a child with her. Thanks to this romance, he can now “understand things such as peace and rest and love.”

Despite Dracula-as-lover now being such a well-worn trope, the ever-adaptable Count is also ready for his traditional scare duties, most recently in Robert Eggers’ “Nosferatu” (2024). Whether he’s a lover, a monster or both, Dracula represents the idea of the vampire as a mirror of human experience. Romance can sometimes teeter between love and pain. Passion can sometimes be scary. So when you next see him on stage or screen, don’t be surprised if his fervent love also comes with a sharp bite.

The Conversation

Stanley Stepanic does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How Dracula became a red-hot lover – https://theconversation.com/how-dracula-became-a-red-hot-lover-275789

Menstrual pads and tampons can contain toxic substances – here’s what to know about this emerging health issue

Source: The Conversation – USA (3) – By Jenni Shearston, Assistant Professor of Integrative Physiology, University of Colorado Boulder

Studies have found small amounts of toxic heavy metals and other potentially harmful substances in some menstrual pads and tampons. zoranm/E+ via Getty Images

About half of the global population menstruates at some point in their lives. Disposable products, such as tampons and pads, are some of the most popular products used around the globe to manage menstrual flow.

Unfortunately, studies have shown that many personal care products, including shampoo, lotion, nail polish and menstrual products, contain hazardous chemicals. Items used in or near the vagina are of particular concern because they are in contact with vaginal mucous membranes – the moist tissue lining the inside of the vagina that secretes mucus. These tissues can absorb some chemicals very efficiently.

People use menstrual products 24 hours a day for multiple days monthly, over the course of many years. Tampons, which are used internally, are surrounded by the permeable vaginal mucous membrane for up to eight hours at a time.

I am an environmental epidemiologist, and I study chemical exposure, its sources and its health effects. As a person who menstruates, I also must make my own decisions about menstrual products and navigate the challenge of finding accurate information on women’s health, which receives less research attention and funding than men’s health.

In 2024, I co-authored the first paper that detected metals in tampons, including toxic metals like lead and arsenic. My colleagues and I also wrote a review paper that surveyed the scientific literature and found about two dozen studies measuring chemicals in menstrual products.

The various chemicals that these studies detected were typically at concentrations low enough to make their health impact unclear. However, they included chemicals known to disrupt the endocrine system, which makes and controls hormones that are essential for bodies to function.

The next question after detection of toxic heavy metals in tampons is whether these substances can be absorbed into the body.

How contaminants get into menstrual products

The first modern tampon in the U.S. was patented in 1931. Nearly a century later, tampons still are made primarily from cotton, rayon or a blend of the two.

Chemicals may get into tampons and other menstrual products in a number of ways. Some chemicals, like heavy metals, are present in soil, either naturally or due to pollution, and may be absorbed by cotton plants.

Other chemicals, such as zinc, may be intentionally added to menstrual products to prevent the growth of harmful bacteria. Still others, such as phthalates – synthetic chemicals used to manufacture plastics – may leach into menstrual products from plastic packaging or be added as part of a fragrance.

Research suggests that these chemicals are present in a large proportion of menstrual products – we found lead in all 30 tampons we tested. What we don’t yet know is whether these chemicals can get into people’s bodies at concentrations high enough to cause health effects, whether in the reproductive system or elsewhere in the body.

Limited federal regulations

The U.S. Food and Drug Administration regulates tampons, menstrual cups and scented menstrual pads as Class II medical devices, which carry moderate risk. Unscented menstrual pads are Class I medical devices, which are considered low risk. These categories are based on the risk the device may present to a consumer who uses it in the intended way.

FDA guidance for Class II devices offers only a few general guidelines with respect to chemicals. For menstrual tampons and pads, it recommends – but does not require – that products should not contain two specific dioxin compounds or “any pesticide and herbicide residues.” Dioxins are a chemical by-product of the bleaching process to whiten cotton, and they are associated with cancer and endocrine disruption. Using non-chlorine bleaching methods can reduce dioxin formation.

The most stringent regulation of tampons in the U.S. occurred after an illness called toxic shock syndrome became a public concern in the 1970s and 1980s. Menstrual toxic shock syndrome occurs when the bacterium Staphylococcus aureus grows in the vagina on inserted menstrual products and releases a toxin called TSST-1. This substance can be absorbed through the vaginal mucosa and cause a variety of symptoms, including fever, low blood pressure, shock and even death.

During this epidemic, in which at least 52 cases were recorded and seven people died over a period of eight months, tampons were associated with the syndrome – especially a highly absorbent tampon called Rely, which was pulled from the market.

In response, the FDA created a task force that recommended standardizing tampon absorbencies and advised consumers to use the lowest absorbency for their flow. This is why tampons in the U.S. now come in a range of absorbencies, from light through regular to super and ultra, so that users can choose the level they need while minimizing the risk of toxic shock.

Living in a ‘soup of chemicals’

Just because a chemical is present in a menstrual product doesn’t mean it can get into the body. However, chemicals like lead and arsenic are known threats to human health. So it’s important to study whether harmful chemicals present in menstrual products could contribute to health problems.

Humans in the modern world live in what expert toxicologist Linda Birnbaum, former director of the National Institute of Environmental Health Sciences, calls a “soup of chemicals.” Simply being present on Earth means being exposed to many chemicals, at different concentrations, all at once. This makes it difficult to unravel the relationship between a single chemical exposure and health.

Nonetheless, science has shown that chemical exposure from at least one menstrual product – vaginal douches – does affect health. Vaginal douching is the process of washing or cleaning the inside of the vagina with water or other fluids.

The American College of Obstetricians and Gynecologists recommends avoiding this process, which can harm healthy bacteria in the vagina, increasing the risk of vaginal infections and other diseases.

In addition, a 2015 study found that women who use vaginal douches have higher concentrations of a chemical called monoethyl phthalate in their urine. Exposure to this substance is associated with reproductive health problems, such as reduced fertility and increased risks during pregnancy.

Can these chemicals be absorbed?

Scientists are working now to determine what concentrations of metals and other chemicals can leach out of tampons and other menstrual products. One 2025 study estimated that volatile organic compounds, a group of chemicals that vaporize quickly, can be absorbed through the vaginal mucosa. Volatile organic compounds may be added to menstrual products as part of fragrances, adhesives or other product components.

My team and I are now shifting our focus to the relationship between menstrual product use, various chemicals, and menstrual pain and bleeding severity. We want to see whether some chemicals will be elevated in menstrual blood, whether these chemical levels are higher in people who use tampons, and whether the chemicals are associated with greater menstrual pain and bleeding.

States are starting to act on this issue. For example, in 2024, Vermont became the first U.S. state to ban multiple chemicals from disposable menstrual products. California bans PFAS, a widely used group of highly persistent chemicals, from menstrual products. New York adopted a law in December 2025 barring multiple toxic chemicals from menstrual products.

California also enacted a law in October 2025 that requires manufacturers of disposable tampons and pads to measure concentrations of arsenic, cadmium, lead and zinc in their products, and to share those measurements with the state, which can publish them. More information like this will help support informed choices for millions of consumers who rely on menstrual products every month.

The Conversation

Jenni Shearston receives funding from the United States National Institutes of Health.

ref. Menstrual pads and tampons can contain toxic substances – here’s what to know about this emerging health issue – https://theconversation.com/menstrual-pads-and-tampons-can-contain-toxic-substances-heres-what-to-know-about-this-emerging-health-issue-268470

Colorado has high levels of radon, which can cause lung cancer – here’s how to lower your risk

Source: The Conversation – USA (3) – By Jan Lowery, Professor of Epidemiology, Colorado School of Public Health, University of Colorado Anschutz Medical Campus

Radon exposure is the leading cause of lung cancer for people who have never used tobacco. Francesco Scatena/iStock via Getty Images

In Colorado, as of 2025, about 500 people a year die from lung cancer as the result of radon gas exposure. Nationally, the number of lung cancer deaths attributed to radon is about 21,000 per year.

Radon is present nearly everywhere outdoors, yet typically at levels that are not harmful. It becomes dangerous when it gets trapped and accumulates inside homes, schools and other buildings.

Radon is a naturally occurring radioactive gas that is produced by the breakdown of uranium, a heavy metal present in the soil. People cannot smell it or see it, which makes radon particularly dangerous. When radon gas forms in the soil, it rises and finds its way into homes old and new through cracked foundations, gaps around sump pumps and drains, and crawl spaces.

Many people are unaware of the radon levels in their home. In Colorado, it is estimated that only 50% of homes have been tested. Thus, many Coloradans may be exposed to elevated radon levels and not know it.

Though tobacco use is the most significant risk factor for lung cancer, accounting for approximately 86% of all lung cancer cases, radon is the leading cause of lung cancer among people who have never used tobacco. Radon also has a compounding effect with tobacco that further increases lung cancer risk among tobacco users. About 7 in 1,000 nontobacco users with prolonged exposure to elevated radon levels may develop lung cancer in their lifetime.
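
To put that risk in concrete terms, here is a back-of-the-envelope illustration based only on the 7-in-1,000 figure above, not a separate estimate: in a community of 10,000 people who have never used tobacco, all with prolonged exposure to elevated radon,

7/1,000 × 10,000 = 70 expected lifetime cases of lung cancer.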

Exposure to radon is preventable. As a cancer epidemiologist, I aim to help all Colorado residents be aware of their home’s radon level and take appropriate actions to mitigate exposure and reduce their and their family’s risk of lung cancer.

Radon in your home

Because of Colorado’s unique geology, including mountainous regions that consist heavily of granite rock that contains uranium, radon levels are higher in Colorado than in other states.

Colorado is among the top 10 states with the highest radon levels across the country. About 50% of Colorado homes tested for radon have levels higher than the recommended threshold set by the Environmental Protection Agency, which is 4 picocuries per liter (pCi/L). The average level of radon in Colorado homes is 6.4 pCi/L, which is equivalent to having 200 chest X-rays each year. Radon levels differ across the 64 counties in Colorado based on their geography and makeup of the soil.

If a home is not adequately vented, radon can build up indoors. When radon decays, it releases radioactive particles that, once inhaled, can damage lung cells. More specifically, these particles can break chemical bonds in the cell’s DNA that, if not repaired, can lead to cancer. Prolonged exposure to high levels of radon, over several years, can cause lung cancer. Similar to tobacco use, it is the cumulative exposure to radon that increases risk for cancer.

Fortunately, there are ways to prevent radon from entering and accumulating inside our homes. Radon mitigation systems use fans and pipes to pull radon gas from below the foundation of the home and vent it outside. These systems can reduce radon levels inside the home by up to 99%.

Know your risk: Testing and mitigating

Testing your home for radon is simple and relatively inexpensive. Test kits are placed in the lowest living area of your house, apartment, condominium or townhome and left for a period of time. The EPA recommends testing for all residential units below the third floor.

There are short-term tests, which take from two to 90 days, and long-term tests, which take 90 days or more. Long-term tests are more accurate for estimating annual average radon levels. Once complete, tests can be mailed directly to a lab for processing.

A step-by-step instructional video on how to test your home for radon from the El Paso County (Colorado) Public Health Department.

Test kits typically cost less than US$50 or may be obtained for free from many sources, including the University of Colorado Anschutz Cancer Center. As of February 2026, the cancer center has distributed more than 1,600 test kits to people in 55 Colorado counties. Nearly 40% of the tests distributed thus far show radon levels above the EPA threshold.

The EPA recommends testing over multiple months, including colder months when windows and doors to the outside are typically closed and radon can become trapped indoors. Testing over several months provides a better understanding of the average annual radon level in the home.

Reduce your risk: Radon mitigation

People whose homes have radon levels at or above 4 pCi/L are advised to take mitigation measures. This may involve sealing cracks in basement walls and foundations and installing a fan and vent pipe to pull radon gas from underneath the home and vent it outside. Mitigation can cost between $1,000 and $3,000, depending on home structure and location.

There are resources available for people who need radon mitigation and can’t afford it. Colorado’s state health department has a low-income radon mitigation assistance program that can pay for radon mitigation for people who are eligible based on income requirements.

Radon may be invisible, but its impact on human health is unmistakably real – and largely preventable. By taking action today – testing your home, sharing this knowledge and seeking help when needed – you are investing in a healthier future for yourself and your community.

The Conversation

Jan Lowery does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Colorado has high levels of radon, which can cause lung cancer – here’s how to lower your risk – https://theconversation.com/colorado-has-high-levels-of-radon-which-can-cause-lung-cancer-heres-how-to-lower-your-risk-273666

I’m a philosopher who tries to see the best in others – but I know there are limits

Source: The Conversation – USA (3) – By Mark Schroeder, Professor of Philosophy, USC Dornsife College of Letters, Arts and Sciences

Interpreting someone’s thoughts or actions can mean balancing their agency against the good. Kateryna Kovarzh/iStock via Getty Images

Understanding one another can be hard. There is a big difference between someone snapping at you out of contempt, and calling you out for a mistake because they believe in you and know you can do better. One of these cases calls for anger, but the other for humility or even embarrassment. Or maybe they are only snapping because they’re “hangry” – they might just need a Snickers bar.

And that’s just with people we know. What about strangers, people across the political divide, or even those with very different backgrounds and cultures than your own?

My field, philosophy, offers a tried-and-true answer to what we need to do to understand people and texts whose backgrounds and cultural assumptions are very different from our own. We need to be charitable.

Charity in this sense isn’t a matter of giving money to those who need it more. Instead, it’s seeing others in a favorable light – of seeing the best in them. In my work, I think of this as seeing other people as protagonists: characters who “do their best” with the predicament in which they find themselves. Interpreting someone charitably doesn’t require agreeing with them. But it does require doing our best to find merit in their point of view.

Of course, people and ideas don’t have unlimited merit. We can err by failing to see the merit of someone’s point of view – or we can err by finding merit that isn’t really there.

But the idea of charity is that it’s worse to make the first kind of error because it prevents us from getting along and learning from one another. By seeing the best in someone else and in their ideas, we can learn productively from engaging with them. Protagonists are people we can learn from and cooperate with.

Taking them seriously

It doesn’t take a genius to observe that we are all better at seeing the best in the people we agree with – and worse with those across the political divide. Political discussions on social media are often dominated by competing attributions of more and more insidious motives to people on the other side. We see them not as protagonists, but as antagonists.

By seeing the worst in someone else’s ideas, we let ourselves off easy. We dismiss them when instead we need to be taking them seriously.

So why, if charity requires seeing the best in others, are we so often tempted to see the worst in them?

A better understanding of charity provides the answer. Seeing the best and the worst in others are not opposite ways of interpreting someone, but simply two sides of the same coin. Here’s why:

A dark-haired man and woman stand as they seem to argue in a dining room, with the man clutching his temples.
Part of charity is sifting out the signal from the noise.
Maskot/Getty Images

Interpretation trade-offs

Interpreting someone isn’t all about figuring out their motives. Sometimes it’s about sorting out what is signal and what is noise. If I snap at you, you could spend a lot of time fixating on whether to be angry or embarrassed. But sometimes the right move is just to pass me a Snickers bar and move on. Our moods and actions are influenced by hunger, hormones, alcohol and lack of sleep, just to name a few. Overinterpreting a snap after I missed breakfast treats as signal what is really just noise.

Overlooking a thing or two when I am hangry can be the best way to see the best in me. When you interpret my snap as merely the result of missing a meal, you don’t really see it as coming from me, the protagonist, but as the result of my predicament. You will judge me, not by whether I am hangry, but by how I overcome that. Your interpretation sees me in a more positive light, by taking away some of my agency.

By “agency,” I mean the extent to which someone gets credit for what they do. You have greater agency over something that you do on purpose, and less if it was a foreseen but accepted side effect of your plan. You have less agency if it was an accident, but more if the accident was negligent; less agency if you just snapped because you’re hangry, but more if you know you get hangry and chose to skip lunch anyway.

A perfect agent wouldn’t be affected by hormones and hunger. They would simply make rational choices that advance their goals. But humans aren’t like that. We are imperfectly embodied agents, at best. So interpreting one another well sometimes requires seeing the good in one another, at the cost of agency. In other words, it has to balance agency against the good, as I have argued in my recent work.

But you can’t find the best in someone by just ignoring more and more until all the bad things are trimmed away and only something good is left. Your interpretation has to fit with the facts of what they do and say.

And sometimes the trade-offs between agency and the good go the other way – we interpret each other in ways that attribute more agency but less good. If passing me a Snickers bar seems to calm me down, you might try it again the next time I snap. But one day you realize that you have started carrying extra Snickers bars everywhere you go in case you run into me, and a different interpretation presents itself: Maybe instead of being a decent but mood-challenged friend, I have just been using you for your candy bars.

A young bearded man in a yellow shirt grins as he holds up a chocolate bar and sits with his feet on an office table.
Truly angry, just hangry, or taking advantage of your chocolate supply?
Deagreez/iStock via Getty Images Plus

This creates tipping points for charitable interpretation. When we cross the tipping point, you switch from seeing someone as an imperfectly embodied protagonist to seeing them as an antagonist.

Charity without a cost

All of this is a way of arguing that it is sometimes right to see the worst in others. Sometimes other people really are the worst, and understanding them requires understanding their agency, not what is good about them. Protagonists and antagonists are just two sides of the same coin: The very same interpretive process can lead us in either direction.

Unfortunately, this means there is no simple test for when you are doing well enough at seeing the best in others. In particular, there is no test that we can agree about across our political differences. Interpreting someone charitably requires looking hard enough for good in them, but part of what we disagree with one another about is precisely what is good. So we are bound to disagree with one another about who is being sufficiently charitable.

But as a personal aspiration, a little more charity can go a long way. We can be generous not just with money, but in how we interpret others. And unlike giving money away, we don’t lose anything when we try harder to see the best in someone else.

The Conversation

Mark Schroeder does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. I’m a philosopher who tries to see the best in others – but I know there are limits – https://theconversation.com/im-a-philosopher-who-tries-to-see-the-best-in-others-but-i-know-there-are-limits-273446

Trump administration axed nutrition education program that saved more money than it cost, even as government encourages healthier eating

Source: The Conversation – USA (2) – By Diane Cress, Associate Professor of Nutrition and Food Science, Wayne State University

If the government had found a way to save US$10 for every dollar it spent helping low-income people get healthier, wouldn’t it make sense for it to keep doing that?

Well, that’s exactly what the U.S. government did when it piloted the SNAP-Ed program in 1977. This U.S. Department of Agriculture program persisted for nearly 50 years until the Trump administration shuttered it in 2025.

SNAP-Ed served as the nutrition education arm of the Supplemental Nutrition Assistance Program, which helps more than 40 million Americans buy groceries.

SNAP-Ed complemented SNAP by teaching people who get those benefits how best to use that government assistance. It paid for nutrition educators to teach lessons at schools, community centers and university extension offices. The educators led grocery store tours, taught label reading and budget comparisons, and gave cooking classes. And they offered a mix of printed and online resources to support good nutrition in the home.

While the federal government fully funded the program, the states, along with Washington, D.C., and Puerto Rico, administered and implemented SNAP-Ed through local community programs, often partnering with nonprofits. It cost only one penny for every SNAP dollar spent, and it worked.

But as of Oct. 1, 2025, SNAP-Ed ceased to exist due to spending cuts that were part of the big tax reform and budget package President Donald Trump signed into law three months earlier.

Dealing with the aftermath

To see why focusing on teaching food preparation skills is so critical, imagine discovering a flat tire. Do you need someone to tell you to fix it or someone to show you how? Nutrition works the same way.

We’ve all left the doctor’s office with instructions to “eat better,” which is essentially useless without the tools to do so. SNAP-Ed taught people how to identify healthy food patterns, keep food safe and navigate a complex food environment.

It also taught low-income Americans how to improve their budgeting and planning for meals that balance cost and nutrition. It’s nearly impossible to meet your basic nutritional needs if you are relying on SNAP dollars alone to fill your grocery cart. Skills are required.

States are getting creative in finding ways to preserve aspects of the SNAP-Ed program. In Georgia, alternative funding sources might keep programs running for about a year. In Wyoming, a more regional, less localized model has allowed some programs previously funded by SNAP-Ed to continue.

In my own state, Michigan State University Extension, which served as Michigan’s statewide implementing partner for SNAP-Ed, lost over $10 million in federal support when SNAP-Ed was defunded. The extension’s staff is working to keep its curricula, lesson plans, recipes and other training materials available online to the public in an effort to sustain its work.

Educating 1.2 million people

Because SNAP-Ed funding has been eliminated, the programs it supported are disappearing or shrinking. As a result, not every SNAP dollar may be spent as wisely as before.

In 2025, SNAP spending was over $100 billion, while SNAP-Ed operated on a $536 million budget, educating over 1.2 million people on how best to spend their SNAP dollars and improve their health.

SNAP-Ed’s benefits persist today, but without continued training and support its impact will diminish, decades of trust built in communities will be lost, and the health of communities no longer served will suffer.

But for now, at least, SNAP-Ed’s online resources remain freely available.

The SNAP-Ed program explained.

Reducing diabetes risks

As a dietitian and a professor, I often conduct community-based participatory research aimed at improving health in low-income populations, especially those at risk for developing Type 2 diabetes.

In a pilot study my research team helped conduct in Detroit in 2018, we paired the Centers for Disease Control and Prevention’s National Diabetes Prevention Program with Cooking Matters, a course funded by SNAP-Ed that taught meal planning, hands-on meal prep and food resource management.

We wanted to see whether SNAP-Ed skills training would amplify the benefits of the National Diabetes Prevention Program in a low-income community.

It did.

All 23 participants in this Detroit pilot lost weight and lowered their hemoglobin A1c, a key marker of diabetes risk.

All but one participant moved from prediabetic to nondiabetic blood sugar levels, effectively reversing their prediabetes.

The National Diabetes Prevention Program often has trouble retaining study participants in low-income communities where Type 2 diabetes risk and health care costs are significant problems.

Not only did our findings show how SNAP-Ed was boosting health in several at-risk communities, but they also provided evidence of the program’s economic benefits.

To estimate how much money the government saved through SNAP-Ed, the USDA compiled data from multiple studies like ours, finding that every dollar spent on community health education ultimately saved the government $10.64 in Medicaid spending.
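
As a rough illustration – my own extrapolation, not a USDA estimate – if that ratio held across SNAP-Ed’s entire 2025 budget, the implied savings would be:

$536 million × $10.64 saved per $1 spent ≈ $5.7 billion in avoided Medicaid spending.

The actual savings would depend on how closely typical SNAP-Ed programming matched the programs in the studies the USDA reviewed.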

If a drugmaker invented a pill that cut diabetes risk by 40% and reduced a key diabetes marker like HbA1c by nearly one percentage point, I have no doubt that it would be hailed as a miracle.

Our study achieved exactly these outcomes through inexpensive, skills-based education. And yet the Trump administration ended the education program that funded this kind of work.

Conflicting with the administration’s own goals

The Make America Healthy Again movement has embraced both Trump and a core principle: Healthy habits prevent chronic disease. It doesn’t make sense to me, in light of that movement, for the Trump administration to stop funding SNAP-Ed.

The program has helped reduce the prevalence of many chronic diseases, and this could have been expected to yield up to $1 trillion in health care savings by 2030.

As the popular proverb goes: “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” SNAP-Ed taught over 1.2 million people how to fish every year, all for a little more than the latest estimates of what it’s going to cost to build the White House ballroom.

The Conversation

Diane Cress previously received funding from Gleaners Community Food Bank.

ref. Trump administration axed nutrition education program that saved more money than it cost, even as government encourages healthier eating – https://theconversation.com/trump-administration-axed-nutrition-education-program-that-saved-more-money-than-it-cost-even-as-government-encourages-healthier-eating-272002

Sixth year of drought in Texas and Oklahoma leaves ranchers facing wildfires and bracing for another tough year

Source: The Conversation – USA (2) – By Joel Lisonbee, Senior Associate Scientist, Cooperative Institute for Research in the Environmental Sciences, University of Colorado Boulder

Cattle auctions aren’t often all-night affairs. But in Texas Lake Country in June 2022, ranchers facing dwindling water supplies and dried out pastures amid a worsening drought sold off more than 4,000 animals in an auction that lasted nearly 24 hours – about 200 cows an hour.

It was the height of a drought that has gripped the Southern Plains for the past six years – a drought that is still holding on in much of the region in 2026.

The drought cost the agriculture industry across Kansas, Oklahoma and Texas an estimated US$23.6 billion in lost crops, higher feed costs and cattle sell-offs from 2020 through 2024 alone. The drying rangeland has also fueled wildfires, including several in Texas in early 2026.

Historically, droughts of this magnitude happen in the Southern Plains about once a decade, but the severe droughts of this century have been lasting longer, leaving water supplies, native rangelands and farms with little time to recover before the next one hits.

Many cattle producers and rangelands were still recovering from a severe 2010-2015 drought when a flash drought hit western Texas in spring 2020, marking the beginning of the current multibillion-dollar, multiyear and multistate drought. Ample spring rainfall in 2025 and severe flooding in central Texas that year weren’t enough to end the drought, and a powerful winter storm in late January 2026 missed the driest parts of the region.

A map shows heavy precipitation across a large part of the country, but it mostly missed the areas facing the worst drought in the Southern Plains.
Precipitation from a severe winter storm in late January 2026, shown in blue and measured in inches, largely missed the areas with the worst drought conditions, indicated by red contour lines.
UC Merced, NDMC

In a recent study with colleagues at the Southern Regional Climate Center and the National Integrated Drought Information System, we assessed the causes and damage from the ongoing drought in the Southern Plains.

We found three key reasons for the enduring drought and its damage: rising temperatures and a La Niña climate pattern; water supply shortages; and lingering economic impacts from the previous drought.

Weather and climate helped drive the drought

The Southern Plains is known to be a hot spot for rapid drought development, and the ongoing drought that started in 2020 is no exception.

Documented “flash droughts” – defined as periods of rapid drought onset or intensification of existing droughts – occurred at least five times in the region from 2020 to 2025. As global temperatures rise, research warns that flash droughts will become more frequent and more severe.

Maps show how the current drought progressed and moved around the region. It was at its height in 2022.
The U.S. Drought Monitor’s monthly updates from January 2020 through January 2026 show how drought moved around in the Southern Plains over those years but never let go. Darker colors reflect the intensity of drought in each location.
Joel Lisonbee; compiled from U.S. Drought Monitor

For the southern part of the Southern Plains, winter precipitation is closely linked to the El Niño–Southern Oscillation, a climate pattern that affects weather around the world. Five of the past six years exhibited a La Niña pattern, which typically means the region sees winters that are warmer and drier than normal.

La Niña was likely the primary driver – although not the only driver – of the drought for Texas and southwest Oklahoma, and one of the reasons drought conditions have continued into 2026.

The Southern Plains have a long history with severe droughts. The Dust Bowl of the early 1930s may be the best-known example. But a history with drought doesn’t make it any easier to manage when crops and water supplies dry up.

Deeply rooted water shortages

The heat and dryness since 2020 have left many of the region’s rivers, reservoirs and even groundwater reserves well below average.

San Antonio’s reservoirs all reached record-low levels in 2024 and 2025, as did the Edwards Aquifer, which provides water for roughly 2.5 million people. They were still low as 2026 began. Surface water and groundwater resources across central and western Texas have been depleted to the point that even a few big storms can’t replenish them.

A few major rivers flow into the Southern Plains from other drought-affected regions. Consider the Rio Grande, which begins in Colorado and winds through New Mexico and along Texas’ southern border: Not only has the Lower Rio Grande Valley in southern Texas missed out on needed precipitation this winter, but so have the Rio Grande headwaters in southern Colorado.

Colorado is facing a snow drought in winter 2026, as is much of the western U.S. If it continues, there will be less snowmelt come summer to feed rivers such as the Rio Grande or to fill reservoirs. In early February, the Elephant Butte, Amistad and Falcon reservoirs, along the Rio Grande, were only 11%, 34% and 20% full, respectively.

Lingering economic impacts

Like water supplies, the economy doesn’t just recover when the rains return.

One of the reasons the current drought has been so costly is that parts of the region had not fully recovered from the 2010-2015 drought when the latest one began in 2020. With only a five-year break between droughts, the landscape behaved like someone with an already weakened immune system who caught a cold.

Severe droughts over time in the Southern Plains
The percentage of land in different levels of drought or wetness for each month based on the nine-month Standardized Precipitation Index leading up to the selected date. Reds indicate drier conditions; blues indicate wetter conditions.
National Integrated Drought Information System, NOAA Drought.gov

During the 2010-2015 drought, cattle producers in Texas sold off about 20% of the statewide herd as water became scarce and rangeland dried up. Rebuilding a herd after a drought is a slow process. Pasture recovery can take a year or more, and a newborn heifer will take two years to mature and produce her own first calf.

Cattle herds had still not returned to pre-2010 levels when the 2022 drought peak forced another mass sell-off. From 2020 through 2024, Texas’ herd declined from 13.1 million to 12 million; Oklahoma’s from 5.3 million to 4.7 million; and Kansas’ from 6.5 million to 6.15 million.
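
In relative terms – simple percentages computed from the herd numbers above – those sell-offs shrank the herds by roughly:

Texas: (13.1 − 12.0) / 13.1 ≈ 8.4%; Oklahoma: (5.3 − 4.7) / 5.3 ≈ 11.3%; Kansas: (6.5 − 6.15) / 6.5 ≈ 5.4%.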

Looking beyond livestock, a large percentage of the Southern Plains’ crops failed in 2022, the peak year of the drought. In Texas, 25% of the corn crop was planted but never harvested, and 45% of the soybean crop was similarly abandoned. A normal season would have yielded a $2.4 billion cotton crop in Texas, but 74% of that crop was abandoned, slashing its value to roughly $640 million.
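
A quick check shows the cotton figures are internally consistent: if 74% of an expected $2.4 billion crop was abandoned, about 26% of its value remained.

$2.4 billion × (1 − 0.74) ≈ $0.62 billion, in line with the roughly $640 million cited above.

The small gap likely reflects rounding in the reported figures.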

Ending the Southern Plains drought

Is the end in sight? With La Niña fading in early 2026 and its opposite, El Niño, potentially on the horizon, there’s a chance for wetter conditions that could reduce the drought in the fall and winter months of 2026.

But the Southern Plains still have to get through spring and summer first. Ending a drought like this requires consistent precipitation over several months, and drought conditions are likely to get worse before they get better.

This article, originally published Feb. 9, 2026, has been updated to include news of new wildfires in Texas.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Sixth year of drought in Texas and Oklahoma leaves ranchers facing wildfires and bracing for another tough year – https://theconversation.com/sixth-year-of-drought-in-texas-and-oklahoma-leaves-ranchers-facing-wildfires-and-bracing-for-another-tough-year-275219

Why the ‘Streets of Minneapolis’ have echoed with public support – unlike the campus of Kent State in 1970

Source: The Conversation – USA – By Gregory P. Magarian, Thomas and Karole Green Professor of Law, Washington University in St. Louis

Ohio National Guardsmen on the Kent State University campus prepare to disperse student protesters on May 4, 1970. Troops later opened fire on students, killing four. Howard Ruffner/Getty Images

The president announces an aggressive, controversial policy. Large groups of protesters take to the streets. Government agents open fire and kill protesters.

All of these events, familiar from Minneapolis in 2026, also played out at Ohio’s Kent State University in 1970. In my academic writing about the First Amendment, I have described Kent State as a key moment when the government silenced free speech.

In Minneapolis, free speech has weathered the crisis better, as seen in the protests themselves, the public’s responses – and even the protest songs the two events inspired.

Protests and shootings, then and now

In 1970, President Richard Nixon announced he had expanded the Vietnam War by bombing Cambodia. Student anti-war protests, already fervent, intensified.

In Ohio, Gov. James Rhodes deployed the National Guard to quell protests at Kent State University. Monday, May 4, saw a large midday protest on the main campus commons. Students exercised their First Amendment rights by chanting and shouting at the Guard troops, who dispersed protesters with tear gas before regrouping on a nearby hill.

A video compilation of the deadly events at Kent State University on May 4, 1970.

With the nearest remaining protesters 20 yards from the Guard troops and most more than 60 yards away, 28 guardsmen inexplicably fired on the students, killing four and wounding nine others.

After the killings, the government sought to shift blame to the slain students. Nixon stated: “When dissent turns to violence, it invites tragedy.”

Minneapolis in 2026 presents vivid parallels.

As part of a sweeping campaign to deport undocumented immigrants, President Donald Trump in early January 2026 deployed armed U.S. Immigration and Customs Enforcement and Customs and Border Protection agents to Minneapolis.

Many residents protested, exercising their First Amendment rights by using smartphones and whistles to record and call out what they saw as ICE and CBP abuses. On Jan. 7, 2026, an ICE agent shot and killed activist Renee Good in her car. On Jan. 24, two CBP agents shot and killed protester Alex Pretti on the street.

The government sought to blame Good and Pretti for their own killings.

Different public reactions

After Kent State, amid bitter conservative opposition to student protesters, most Americans blamed the fallen students for their deaths. When students in New York City protested the Kent State shootings, construction workers attacked and beat the students in what became known as the “hard hat riot.” Afterward, Nixon hosted construction union leaders at the White House, where they gave him an honorary hard hat.

A huge crowd of protesters carrying anti-ICE signs.
Protesters march through the streets of downtown Minneapolis on Jan. 25, 2026, one day after federal agents shot dead U.S. citizen Alex Pretti.
Roberto Schmidt/AFP via Getty Images

In contrast, most Americans believe the Trump administration has used excessive force in Minneapolis. Majorities both oppose the federal agents’ actions against protesters and approve of protesting and recording the agents.

The public response to Minneapolis has made a difference. The Trump administration has announced an end to its immigration crackdown in the Twin Cities. Trump has backed off attacks on Good and Pretti. Congressional opposition to ICE funding has grown. Overall public support for Trump and his policies has fallen.

Free speech in protests, recordings and songs

What has caused people to view the killings in Minneapolis so differently from Kent State? One big factor, I believe, is how free speech has shaped the public response.

The Minneapolis protests themselves have sent the public a more focused message than what emerged from the student protests against the Vietnam War.

Anti-war protests in 1970 targeted military action on the other side of the world. Organizers had to plan and coordinate through in-person meetings and word of mouth. Student protesters needed the institutional news media to convey their views to the public.

In contrast, the anti-ICE protests in Minneapolis target government action at the protesters’ doorsteps. Organizers can use local networks and social media to plan, coordinate and communicate directly with the public. The protests have succeeded in deepening public opposition to ICE.

In addition, the American people have witnessed the Minneapolis shootings.

Kent State produced a famous photograph of a surviving student’s anguish but only hazy, chaotic video of the shootings.

In contrast, widely circulated video evidence showed the Minneapolis killings in horrifying detail. Within days of each shooting, news organizations had compiled detailed visual timelines, often based on recordings by protesters and observers, that sharply contradicted government accounts of what happened to Good and Pretti.

Finally, consider two popular protest songs that emerged from Kent State and Minneapolis: Crosby, Stills, Nash & Young’s “Ohio” and Bruce Springsteen’s “Streets of Minneapolis.”

Bruce Springsteen sings ‘Streets of Minneapolis.’

Crosby, Stills, Nash & Young recorded, pressed and released “Ohio” with remarkable speed for 1970. The vinyl single reached record stores and radio stations on June 4, a month after the Kent State shootings. The song peaked at No. 14 on the Billboard chart two months later.

Neil Young’s lyrics described the Kent State events in mythic terms, warning of “tin soldiers” and telling young Americans: “We’re finally on our own.” Young did not describe the shootings in detail. The song does not name Kent State, the National Guard or the fallen students. Instead, it presents the events as symbolic of a broader generational conflict over the Vietnam War.

Springsteen released “Streets of Minneapolis” on Jan. 28, 2026 – just four days after CBP agents killed Pretti. Two days later, the song topped streaming charts worldwide.

The internet and social media let Springsteen document Minneapolis, almost in real time, for a mass audience. Springsteen’s lyrics balance symbolism with specificity, naming not just “King Trump” but also victims Pretti and Good, key Trump officials Stephen Miller and Kristi Noem, main Minneapolis artery Nicollet Avenue, and the protesters’ “whistles and phones,” before fading on a chant of “ICE out!”

Critics offer compelling arguments that 21st-century mass communication degrades social relationships, elections and culture. In Minneapolis, disinformation has muddied crucial facts about the protests and killings.

At the same time, Minneapolis has shown how networked communication can promote free speech. Through focused protests, recordings of government action, and viral popular culture, today’s public can get fuller, clearer information to help critically assess government actions.

The Conversation

Gregory P. Magarian does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the ‘Streets of Minneapolis’ have echoed with public support – unlike the campus of Kent State in 1970 – https://theconversation.com/why-the-streets-of-minneapolis-have-echoed-with-public-support-unlike-the-campus-of-kent-state-in-1970-274917

Last nuclear weapons limits expired – pushing world toward new arms race

Source: The Conversation – USA – By Matthew Bunn, Professor of the Practice of Energy, National Security and Foreign Policy, Harvard Kennedy School

Russian ballistic missiles roll in Red Square during a Victory Day military parade. AP Photo/Alexander Zemlianichenko

For the first time in more than half a century, there are no binding restraints on the buildup of the largest nuclear forces on Earth. The New START treaty expired on Feb. 5, 2026, ending the last agreed limits on U.S. and Russian nuclear forces.

New START limited the number of strategic nuclear weapons the United States and Russia could deploy to 1,550 each. It also limited the missiles and bombers those warheads were loaded on, required on-site inspections and data exchanges, barred interference with satellite monitoring, and established a joint commission to discuss disputes. It did not limit the number of nuclear weapons each side could hold in reserve.

With China rapidly building up its nuclear forces, intense rivalry among the United States, China and Russia, and evolving technologies – from precision conventional weapons to artificial intelligence – complicating nuclear balances, there is real potential for an unpredictable three-way nuclear arms competition.

Such a competition could increase the danger of nuclear conflict, which I believe is higher than it has been in decades.

The security of agreed restraint

While the particular numbers of warheads and delivery vehicles an accord specifies may not make an immense difference, nuclear agreements offer important advantages in four key areas:

  • Predictability, limiting the pressures to build up nuclear arsenals that come from worst-case analysis of what adversaries might build and the destabilization that unexpected new weapons can bring.

  • Transparency, through elements such as data exchanges, on-site inspections and limits on interfering with satellite monitoring, which give each side a better ability to understand what is going on with the others’ nuclear forces.

  • Reduced first-strike incentives, from banning or limiting particularly dangerous types of weapons.

  • Improved relations, through the mere fact that the other side is willing to limit the nuclear forces arrayed against you, which undermines the belief that they are implacably bent on your utter destruction. This reduces the intensity of hostility that can drive crises and escalation.

The expiration of the New START treaty upends decades of international nuclear stability.

After 1962’s Cuban missile crisis, President John F. Kennedy realized that relying on nuclear deterrence without any agreed nuclear restraints or risk-reduction measures is just too dangerous. He moved quickly to negotiate the Limited Test Ban Treaty in 1963 and put in place a U.S.-Soviet hotline for crisis communication.

He also launched a series of initiatives that led to reductions in defense spending on both sides, cuts in production of nuclear materials for weapons, and even troop pullbacks in Europe. Every subsequent U.S. president has pursued nuclear arms control accords.

Moreover, the countries that have promised not to get nuclear weapons under the Nuclear Nonproliferation Treaty want to see the nuclear-armed nations living up to their treaty obligation to negotiate in good faith toward nuclear disarmament. As pressure builds for countries to get their own nuclear weapons, maintaining the nonproliferation regime and getting the non-nuclear countries’ votes for stronger nuclear safeguards or export controls is likely to require the nuclear-armed nations to accept at least some constraints of their own.

Critics of arms control point out that Russia has violated many past accords – and the Trump administration has accused both Russia and China of carrying out illicit nuclear tests, though it has not offered solid evidence in public so far. But despite these very real issues, key elements of these agreements were implemented, and they “left the United States safer,” as Secretary of State Marco Rubio has noted. More than four-fifths of the nuclear weapons that used to exist in the world have been dismantled.

New limits or buildup?

A missile breaks the surface of the ocean
The U.S. is developing a new type of cruise missile that can carry a nuclear warhead and, like this Tomahawk, can be launched from submerged submarines.
U.S. Navy via Getty Images

So, what’s next? President Donald Trump ignored Russian President Vladimir Putin’s proposal that both sides stay within the limits of New START while they explored options for new steps. But Trump said he wants to negotiate a “better” deal on fewer nuclear weapons – a deal that would not only limit U.S. and Russian strategic forces but also China’s much smaller but rapidly growing nuclear forces and Russia’s large force of nonstrategic nuclear weapons – that is, ones for battlefield or regional use.

So far, though, no negotiations on follow-on accords are underway, and the administration has not offered to negotiate about any of the U.S. weapons systems that worry Russia and China.

Moreover, there is strong pressure in Washington to build up U.S. nuclear forces rather than reduce them, to deter both Russia and China – while also dealing with the smaller but still dangerous North Korean nuclear force. The United States has many hundreds of nuclear weapons in storage that could be brought out and put on existing missiles, along with empty missile tubes on submarines that could again be filled with missiles. And the U.S. is developing new weapons, such as a nuclear-armed, sea-launched cruise missile.

Constraints and challenges

In my view, the more than 1,500 strategic nuclear weapons the United States already has deployed – with a major modernization underway – provide a sufficient deterrent to aggression. And if the United States begins to build up, Russia will respond in kind, and China may go even further. Once a multisided buildup is underway, its momentum will be more difficult to reverse.

Fortunately, the United States, Russia and China all have strong national interests in avoiding an unrestrained nuclear race, which would leave all of them poorer and no more secure. While the United States has quite a few nuclear weapons in storage, its nuclear modernization is struggling with enormous delays and cost overruns, and its industrial base is simply not prepared for a major nuclear expansion.

Putin is building a war economy that can churn out a lot of weapons – but he knows his economy is a 10th the size of the U.S.’s, and he wants to focus on rebuilding the conventional forces being chewed up in his war on Ukraine, making nuclear competition a bad idea. China has an economy to match the U.S.’s and an unrivaled manufacturing capacity, but it, too, would be worse off if its buildup provokes a U.S. buildup in response and a collapse of nuclear restraints.

Despite these common interests, finding a path to new accords among at least three parties, rather than two, will not be easy. Coalitions in each capital will have to win arguments that an accord is in their nation’s interest at the same time. The parties will have to address in some way the non-nuclear technologies that affect nuclear balances, and technologies such as cyber weapons and artificial intelligence would be hard to count or verify.

U.S. political polarization might make it very difficult to get a two-thirds vote in the Senate to ratify a treaty – though there are many other possible approaches, from reciprocal political commitments to executive agreements.

Famously unpredictable, Trump might still reverse course and agree to some version of Putin’s proposal for a “strategic pause” in which neither the United States nor Russia would build up its nuclear capabilities for the time being, while talks on next steps were underway. That would have the advantage of offering time to explore the options before new nuclear buildups got locked in.

And that would give him more chance of reaching his oft-stated goal of being the one to bring home a deal to reduce nuclear weapons and the dangers they pose.

The Conversation

Matthew Bunn is a member of the Board of Directors of the Arms Control Association; is a member of the Committee on International Security and Arms Control of the National Academy of Sciences; has consulted for several U.S. national laboratories; and has served on the Academic Alliance of U.S. Strategic Command.

ref. Last nuclear weapons limits expired – pushing world toward new arms race – https://theconversation.com/last-nuclear-weapons-limits-expired-pushing-world-toward-new-arms-race-275749

The greatest risk of AI in higher education isn’t cheating – it’s the erosion of learning itself

Source: The Conversation – USA (2) – By Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

Will AI hollow out the pipeline of students, researchers and faculty that is the basis of today’s universities? Hill Street Studios/DigitalVision via Getty Images

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it?

These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag “at-risk” students, optimize course scheduling or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we’ve been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences.

As these technologies become better at producing knowledge work – designing classes, writing papers, suggesting experiments and summarizing difficult texts – they don’t just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship upon which these institutions are built, and on which they depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life:

AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising and institutional risk assessment. These are considered “nonautonomous” systems because they automate tasks, but a person is “in the loop” and using these systems as tools.

These technologies can pose a risk to students’ privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are “risk scores” generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed?

These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalized feedback tools and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners and on-demand explainers. Faculty use them to generate rubrics, draft lectures and design syllabuses. Researchers use them to summarize papers, comment on drafts, design experiments and generate code.

This is where the “cheating” conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

A college student in discussion in a classroom
If students rely on generative AI to produce work for their classes, and feedback is also generated by AI, how does that affect the relationship between student and professor?
Eric Lee for The Washington Post via Getty Images

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you’re interacting with a human and when you’re interacting with an automated agent. A student reviewing material for a test should be able to tell whether they are talking with their teaching assistant or with a robot, and a student reading feedback on a term paper needs to know whether their instructor wrote it. Anything less than complete transparency in such cases alienates everyone involved and shifts the focus of academic interactions from learning to the technology of learning. University of Pittsburgh researchers have shown that these dynamics provoke uncertainty, anxiety and distrust in students.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility – not only for students, but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that’s not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft and learning to spot one’s own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a researcher “in a box” – an agentic AI system that can perform studies on its own – is becoming increasingly realistic.

A biotech researcher working on a computer in a lab
The growing sophistication and autonomy of technology systems mean that scientific research can increasingly be automated, potentially leaving people with fewer opportunities to build skills by practicing research methods.
NurPhoto/Getty Images

Agentic tools are anticipated to “free up time” for work that draws on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense while more of the day-to-day labor of instruction is handed off to systems optimized for efficiency and scale. Similarly, in research, the trajectory points toward systems that increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the “routine” responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry that is pushing AI into universities, it may seem as if this type of work is “inefficient” and that students will be better off letting a machine handle it. But it is the very nature of that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually through doing the work of drafting, revising, failing, trying again, grappling with confusion and revising weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated?

One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgment and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimizing it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities and communities are formed in the process. In this version, the university is meant to serve as no less than an ecosystem that reliably forms human expertise and judgment.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

The Conversation

The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as the data ethics advisor to MindGuard, a startup focused on AI integration into companies’ workflow.

Jacob Burley receives funding from The Applied Ethics Center at UMass Boston.

ref. The greatest risk of AI in higher education isn’t cheating – it’s the erosion of learning itself – https://theconversation.com/the-greatest-risk-of-ai-in-higher-education-isnt-cheating-its-the-erosion-of-learning-itself-270243

Do animals have a future on Hollywood sets?

Source: The Conversation – USA (2) – By Cynthia Chris, Professor of Media Studies, City University of New York

Bear trainer Doug Seus plays with Bart the Bear, who’s appeared in over 20 TV shows and films. Jean-Louis Atlan/Sygma via Getty Images

There is a long and storied history of nonhuman actors, from Luke, the dog of silent star Roscoe “Fatty” Arbuckle, to the collies cast in the role of Lassie in film and on television. Bart the Bear racked up over 20 film and TV credits in the 1980s and 1990s, while countless horses have supported period dramas that now saturate streaming services.

But business has not been as good as it used to be for the animal trainers who specialize in renting creatures of all kinds to film and TV productions.

According to The Hollywood Reporter, it’s a trend that’s been building for at least 25 years, driven largely by a mix of activism and technological advances – a shift I’ve observed in my studies of animals on screen.

Fewer roles to go around

Hollywood’s adoption of visual effects – also referred to as computer-generated imagery, or CGI – has had an outsized role in putting many animal actors out of work. Ever since “Jurassic Park” (1993) dared to commingle CGI dinosaurs with human actors, more and more digital animals have appeared alongside humans on screen.

Other factors have accelerated the trend.

The COVID-19 pandemic, the 2023 Hollywood actors and writers strikes and a recent dip in the number of new TV series being greenlit have meant fewer productions and fewer roles to go around, whether they’re written for humans or animals.

But even before these recent events, there were calls for Hollywood to radically reduce its dependence on animal actors.

In 2012, The Hollywood Reporter – the same trade magazine that recently lamented a downturn in animal rentals – published an exposé cataloging incidents in which animals died, were injured or were put at grievous risk on sets. These productions nonetheless went on to carry the famous “No Animals Were Harmed” credit awarded by the American Humane Association, despite the fact that, well, animals were harmed. American Humane maintained that the incidents were tragic but not the result of negligence.

In 2016, PETA released the results of undercover investigations documenting substandard living conditions and untreated medical conditions at Birds & Animals Unlimited, which operates animal training facilities for film and television. In 2024, the organization detailed neglect of animals in the care of Atlanta Film Animals. Both companies denied the allegations.

There are, of course, any number of ways to minimize or avoid using actual animals in film and TV altogether.

“Rise of the Planet of the Apes” (2011) and its sequels have used motion capture, with humans performing the movements of characters later rendered as chimpanzees, gorillas, bonobos and orangutans.

For Ang Lee’s 2012 production “Life of Pi,” visual effects artists created thousands of virtual animals, while director Darren Aronofsky opted for completely digital animals, supplemented by some practical props, in 2014’s “Noah.”

Bucking high-tech trends, the 2025 horror film “Primate” went old school without reverting to real animals, deploying a movement artist in a costume and prosthetics to play a murderously rabid chimp.

The 2025 horror flick ‘Primate’ doesn’t deploy CGI or an animal actor, but instead uses a costumed human to portray the maniacal ape.

Can CGI numb viewers to animal violence?

What do digital animals, these bestial avatars, make possible?

Undoubtedly, there are trainers who care deeply for their charges and uphold best practices in animal husbandry. But it stands to reason that the fewer captive animals, the better. And recent advances in AI have made visual effects and CGI even more realistic and easier to produce.

However, substituting flesh-and-blood animals with those made of pixels seems to have created a canvas for unfettered abuse. Consider the brutal violence of the “Planet of the Apes” reboots, which include hand-to-hand combat, branding and a torturous crucifixion scene.

In the past, the fact that the animals on set were real sometimes curbed filmmakers’ most savage impulses; violence was implied or took place off-screen in family fare like “The Yearling” (1946) and “Old Yeller” (1957). At the same time, camera tricks and props have been used to create scenes of animal cruelty in many films, from “American Psycho” (2000) to “John Wick” (2014).

While the effects of violent media on viewers are notoriously hard to study, some evidence suggests that audiences can become desensitized to the real-world consequences of unhealthy and violent content. It’s easy to see how this desensitization could extend to watching cruelty toward animals on screen.

Viewers can still sniff out the virtual

A hybrid approach to portraying animals on screen seems to have taken hold, using what one scholar has called – in a reference to on-screen dogs – “composite canine performances.”

The team behind the 2025 version of “Superman,” for example, sought to create a realistic dog, right down to each scruffy patch of fur. But they needed it to defy gravity and other laws of physics. So they incorporated just enough live animal in preproduction to animate a mostly CGI creature, with director James Gunn’s own dog serving as the “model,” or “reference,” for the superdog, Krypto.

Director James Gunn’s dog was used to model the mostly CGI Krypto in 2025’s ‘Superman.’

This technique recalls the methods of Disney animators who were stumped by the challenge of creating the characters for “Bambi” (1942). So they studied animal anatomy, photographed deer in the wild and sketched animals brought into the studio in order to better capture their movements on paper.

But when it comes to live-action films grounded in everyday life, there’s still work on set for real animals. For one, it’s still usually cheaper to deploy the real thing. Moreover, most of the virtual animals on screen simply don’t look realistic enough to allow for the full suspension of disbelief that makes cinema magic.

That’s why in the 2025 adaptation of Helen Macdonald’s memoir, “H Is for Hawk,” filmmakers reportedly employed five goshawks to portray Mabel, the bird adopted by Helen (Claire Foy). And it’s why Academy Award nominee “Marty Supreme” featured an entire menagerie of live animals, including a horse, a camel, an armadillo, a dog, a rabbit and even a ping-pong-playing sea lion. Yes, the sea lion in the scene was real, but the ball wasn’t.

Future opportunities for trainers and their charges appear to rest on just how good visual effects can get. For some animal activists – not to mention the animals that have no say in their work – that day can’t come soon enough.

Moviegoers and animal advocates, meanwhile, might hope for a middle ground: a future in which only ethically treated animals continue to get to appear on the screen.

The Conversation

Cynthia Chris does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Do animals have a future on Hollywood sets? – https://theconversation.com/do-animals-have-a-future-on-hollywood-sets-273877