Colorado has high levels of radon, which can cause lung cancer – here’s how to lower your risk

Source: The Conversation – USA (3) – By Jan Lowery, Professor of Epidemiology, Colorado School of Public Health, University of Colorado Anschutz Medical Campus

Radon exposure is the leading cause of lung cancer for people who have never used tobacco. Francesco Scatena/iStock via Getty Images

In Colorado, as of 2025, about 500 people a year die from lung cancer as the result of radon gas exposure. Nationally, the number of lung cancer deaths attributed to radon is about 21,000 per year.

Radon is present nearly everywhere outdoors, yet typically at levels that are not harmful. It becomes dangerous when it gets trapped and accumulates inside homes, schools and other buildings.

Radon is a naturally occurring radioactive gas that is produced by the breakdown of uranium, a heavy metal present in the soil. People cannot smell it or see it, which makes radon particularly dangerous. When radon gas forms in the soil, it rises and finds its way into homes old and new through cracked foundations, gaps around sump pumps and drains, and crawl spaces.

Many people are unaware of the radon levels in their home. In Colorado, it is estimated that only 50% of homes have been tested. Thus, many Coloradans may be exposed to elevated radon levels and not know it.

Though tobacco use is the most significant risk factor for lung cancer, accounting for approximately 86% of all lung cancer cases, radon is the leading cause of lung cancer among people who have never used tobacco. Radon also has a compounding effect with tobacco that further increases lung cancer risk among tobacco users. About 7 in 1,000 nontobacco users with prolonged exposure to elevated radon levels may develop lung cancer in their lifetime.

Exposure to radon is preventable. As a cancer epidemiologist, I aim to help all Colorado residents be aware of their home’s radon level and take appropriate actions to mitigate exposure and reduce their and their family’s risk of lung cancer.

Radon in your home

Because of Colorado’s unique geology, including mountainous regions composed heavily of uranium-bearing granite, radon levels in Colorado are higher than in most other states.

Colorado is among the top 10 states with the highest radon levels across the country. About 50% of Colorado homes tested for radon have levels higher than the recommended threshold set by the Environmental Protection Agency, which is 4 picocuries per liter (pCi/L). The average level of radon in Colorado homes is 6.4 pCi/L, which is equivalent to having 200 chest X-rays each year. Radon levels differ across the 64 counties in Colorado based on their geography and makeup of the soil.

If a home is not adequately vented, radon can build up indoors. When radon decays, it releases radioactive particles that, once inhaled, can damage lung cells. More specifically, these particles can break chemical bonds in the cell’s DNA that, if not repaired, can lead to cancer. Prolonged exposure to high levels of radon, over several years, can cause lung cancer. Similar to tobacco use, it is the cumulative exposure to radon that increases risk for cancer.

Fortunately, there are ways to prevent radon from entering and accumulating inside our homes. Radon mitigation systems use fans and pipes to pull radon gas from below the foundation of the home and vent it outside. These systems can reduce radon levels inside the home by up to 99%.

Know your risk: Testing and mitigating

Testing your home for radon is simple and relatively inexpensive. Test kits are placed in the lowest living area of your house, apartment, condominium or townhome and left for a period of time. The EPA recommends testing for all residential units below the third floor.

There are short-term tests, which take from two to 90 days, and long-term tests, which take 90 days or more. Long-term tests are more accurate for estimating annual average radon levels. Once complete, tests can be mailed directly to a lab for processing.

A step-by-step instructional video on how to test your home for radon from the El Paso County (Colorado) Public Health Department.

Test kits typically cost less than US$50 or may be obtained for free from many sources, including the University of Colorado Anschutz Cancer Center. As of February 2026, the cancer center has distributed more than 1,600 test kits to people in 55 Colorado counties. Nearly 40% of the tests distributed thus far show radon levels above the EPA threshold.

The EPA recommends testing over multiple months, including colder months when windows and doors to the outside are typically closed and radon can become trapped indoors. Testing over several months provides a better understanding of the average annual radon level in the home.

Reduce your risk: Radon mitigation

The EPA recommends mitigation for homes with radon levels at or above 4 pCi/L. This may involve sealing cracks in basement walls and foundations and installing a fan and vent pipe to pull radon gas from underneath the home and vent it outside. Mitigation typically costs between $1,000 and $3,000, depending on the home’s structure and location.

There are resources available for people who need radon mitigation and can’t afford it. Colorado’s state health department has a low-income radon mitigation assistance program that can pay for radon mitigation for people who are eligible based on income requirements.

Radon may be invisible, but its impact on human health is unmistakably real – and largely preventable. By taking action today – testing your home, sharing this knowledge and seeking help when needed – you are investing in a healthier future for yourself and your community.

The Conversation

Jan Lowery does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Colorado has high levels of radon, which can cause lung cancer – here’s how to lower your risk – https://theconversation.com/colorado-has-high-levels-of-radon-which-can-cause-lung-cancer-heres-how-to-lower-your-risk-273666

I’m a philosopher who tries to see the best in others – but I know there are limits

Source: The Conversation – USA (3) – By Mark Schroeder, Professor of Philosophy, USC Dornsife College of Letters, Arts and Sciences

Interpreting someone’s thoughts or actions can mean balancing their agency against the good. Kateryna Kovarzh/iStock via Getty Images

Understanding one another can be hard. There is a big difference between someone snapping at you out of contempt, and calling you out for a mistake because they believe in you and know you can do better. One of these cases calls for anger, but the other for humility or even embarrassment. Or maybe they are only snapping because they’re “hangry” – they might just need a Snickers bar.

And that’s just with people we know. What about strangers, people across the political divide, or those whose backgrounds and cultures differ greatly from our own?

My field, philosophy, offers a tried-and-true answer to the question of how to understand people and texts shaped by backgrounds and cultural assumptions very different from our own. We need to be charitable.

Charity in this sense isn’t a matter of giving money to those who need it more. Instead, it’s seeing others in a favorable light – of seeing the best in them. In my work, I think of this as seeing other people as protagonists: characters who “do their best” with the predicament in which they find themselves. Interpreting someone charitably doesn’t require agreeing with them. But it does require doing our best to find merit in their point of view.

Of course, people and ideas don’t have unlimited merit. We can err by failing to see the merit of someone’s point of view – or we can err by finding merit that isn’t really there.

But the idea of charity is that it’s worse to make the first kind of error because it prevents us from getting along and learning from one another. By seeing the best in someone else and in their ideas, we can learn productively from engaging with them. Protagonists are people we can learn from and cooperate with.

Taking them seriously

It doesn’t take a genius to observe that we are all better at seeing the best in the people we agree with – and worse with those across the political divide. Political discussions on social media are often dominated by competing attributions of more and more insidious motives to people on the other side. We see them not as protagonists, but as antagonists.

By seeing the worst in someone else’s ideas, we let ourselves off easy. We dismiss them when instead we need to be taking them seriously.

So why, if charity requires seeing the best in others, are we so often tempted to see the worst in them?

A better understanding of charity provides the answer. Seeing the best and the worst in others are not opposite ways of interpreting someone, but simply two sides of the same coin. Here’s why:

Part of charity is sifting out the signal from the noise.
Maskot/Getty Images

Interpretation trade-offs

Interpreting someone isn’t all about figuring out their motives. Sometimes it’s about sorting out what is signal and what is noise. If I snap at you, you could spend a lot of time fixating on whether to be angry or embarrassed. But sometimes the right move is just to pass me a Snickers bar and move on. Our moods and actions are influenced by hunger, hormones, alcohol and lack of sleep, to name a few. Overinterpreting a snap after I missed breakfast treats what is really just noise as signal.

Overlooking a thing or two when I am hangry can be the best way to see the best in me. When you interpret my snap as merely the result of missing a meal, you don’t really see it as coming from me, the protagonist, but as the result of my predicament. You will judge me not by whether I am hangry, but by how I overcome it. Your interpretation sees me in a more positive light, by taking away some of my agency.

By “agency,” I mean the extent to which someone gets credit for what they do. You have greater agency over something that you do on purpose, and less if it was a foreseen but accepted side effect of your plan. You have less agency if it was an accident, but more if the accident was negligent; less agency if you just snapped because you’re hangry, but more if you know you get hangry and chose to skip lunch anyway.

A perfect agent wouldn’t be affected by hormones and hunger. They would simply make rational choices that advance their goals. But humans aren’t like that. We are imperfectly embodied agents, at best. So interpreting one another well sometimes requires seeing the good in one another, at the cost of agency. In other words, it has to balance agency against the good, as I have argued in my recent work.

But you can’t find the best in someone by just ignoring more and more until all the bad things are trimmed away and only something good is left. Your interpretation has to fit with the facts of what they do and say.

And sometimes the trade-offs between agency and the good go the other way – we interpret each other in ways that attribute more agency but less good. If passing me a Snickers bar seems to calm me down, you might try it again the next time I snap. But one day you realize that you have started carrying extra Snickers bars everywhere you go in case you run into me, and a different interpretation presents itself: Maybe instead of being a decent but mood-challenged friend, I have just been using you for your candy bars.

Truly angry, just hangry, or taking advantage of your chocolate supply?
Deagreez/iStock via Getty Images Plus

This creates tipping points for charitable interpretation. When we cross the tipping point, you switch from seeing someone as an imperfectly embodied protagonist to seeing them as an antagonist.

Charity without a cost

All of this is a way of arguing that it is sometimes right to see the worst in others. Sometimes other people really are the worst, and understanding them requires understanding their agency, not what is good about them. Protagonists and antagonists are just two sides of the same coin: The very same interpretive process can lead us in either direction.

Unfortunately, this means there is no simple test for when you are doing well enough at seeing the best in others. In particular, there is no test that we can agree about across our political differences. Interpreting someone charitably requires looking hard enough for good in them, but part of what we disagree with one another about is precisely what is good. So we are bound to disagree with one another about who is being sufficiently charitable.

But as a personal aspiration, a little more charity can go a long way. We can be generous not just with money, but in how we interpret others. But unlike giving money away, we don’t lose anything when we try harder to see the best in someone else.

The Conversation

Mark Schroeder does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. I’m a philosopher who tries to see the best in others – but I know there are limits – https://theconversation.com/im-a-philosopher-who-tries-to-see-the-best-in-others-but-i-know-there-are-limits-273446

Trump administration axed nutrition education program that saved more money than it cost, even as government encourages healthier eating

Source: The Conversation – USA (2) – By Diane Cress, Associate Professor of Nutrition and Food Science, Wayne State University

If the government had found a way to save US$10 for every dollar it spent helping low-income people get healthier, wouldn’t it make sense for it to keep doing that?

Well, that’s exactly what the U.S. government did when it piloted the SNAP-Ed program in 1977. This U.S. Department of Agriculture program persisted for nearly 50 years until the Trump administration shuttered it in 2025.

SNAP-Ed served as the nutrition education arm of the Supplemental Nutrition Assistance Program, which helps more than 40 million Americans buy groceries.

SNAP-Ed complemented SNAP by teaching people who get those benefits how best to use that government assistance. It paid for nutrition educators to teach lessons at schools, community centers and university extension offices. The educators led grocery store tours, taught label reading and budget comparisons, and ran cooking classes. And they offered a mix of printed and online resources to support good nutrition in the home.

While the federal government fully funded the program, the states, along with Washington, D.C., and Puerto Rico, administered and implemented SNAP-Ed through local community programs, often partnering with nonprofits. It cost only one penny for every SNAP dollar spent, and it worked.

But as of Oct. 1, 2025, SNAP-Ed ceased to exist due to spending cuts that were part of the big tax reform and budget package President Donald Trump signed into law three months earlier.

Dealing with the aftermath

To see why focusing on teaching food preparation skills is so critical, imagine discovering a flat tire. Do you need someone to tell you to fix it or someone to show you how? Nutrition works the same way.

We’ve all left the doctor’s office with instructions to “eat better,” which is essentially useless without the tools to do so. SNAP-Ed taught people how to identify healthy food patterns, keep food safe and navigate a complex food environment.

It also taught low-income Americans how to improve their budgeting and planning for meals that balance cost and nutrition. It’s nearly impossible to meet your basic nutritional needs if you are relying on SNAP dollars alone to fill your grocery cart. Skills are required.

States are getting creative in finding ways to preserve aspects of the SNAP-Ed program. In Georgia, alternative funding sources might keep programs running for about a year. In Wyoming, shifting from a local to a regional model has allowed some programs previously funded by SNAP-Ed to continue.

In my own state, Michigan State University Extension, which served as Michigan’s statewide implementing partner for SNAP-Ed, lost over $10 million in federal support when SNAP-Ed was defunded. The extension’s staff is working to keep its curricula, lesson plans, recipes and other training materials available online to the public in an effort to sustain its work.

Educating 1.2 million people

Because SNAP-Ed funding has been eliminated, the programs it supported are disappearing or shrinking. As a result, every SNAP dollar may not be spent as wisely as before.

In 2025, SNAP spending was over $100 billion, while SNAP-Ed operated on a $536 million budget, educating over 1.2 million people on how best to spend their SNAP dollars and improve their health.

SNAP-Ed’s benefits persist today, but without continued training and support its impact will diminish, decades of trust built in communities will be lost, and the health of communities no longer served will suffer.

But for now, at least, SNAP-Ed’s online resources remain freely available.

The SNAP-Ed program explained.

Reducing diabetes risks

As a dietitian and a professor, I often conduct community-based participatory research aimed at improving health in low-income populations, especially those at risk for developing Type 2 diabetes.

In a pilot study my research team helped conduct in Detroit in 2018, we paired the Centers for Disease Control and Prevention’s National Diabetes Prevention Program with Cooking Matters, a course funded by SNAP-Ed that taught meal planning, hands-on meal prep and food resource management.

We wanted to see whether SNAP-Ed skills training would amplify the benefits of the National Diabetes Prevention Program in a low-income community.

It did.

All 23 participants in this Detroit pilot lost weight and lowered their hemoglobin A1c, a key marker of diabetes risk.

All but one participant moved from prediabetic to nondiabetic sugar levels, effectively reversing prediabetes.

The National Diabetes Prevention Program often has trouble retaining study participants in low-income communities where Type 2 diabetes risk and health care costs are significant problems.

Not only did our findings show how SNAP-Ed was boosting health in several at-risk communities, but they also provided evidence of the economic benefits of the program.

To estimate how much money the government saved through SNAP-Ed, the USDA compiled data from multiple studies like ours, finding that every dollar spent in community health education ultimately saved $10.64 in Medicaid spending by the government.

If a drugmaker invented a pill that cut diabetes risk by 40% and reduced a key diabetes marker like HbA1c by nearly one percentage point, I have no doubt that it would be hailed as a miracle.

Our study achieved exactly these outcomes through inexpensive, skills-based education. And yet the Trump administration ended the education program that funded this kind of work.

Conflicting with the administration’s own goals

The Make America Healthy Again movement has embraced both Trump and a core principle: Healthy habits prevent chronic disease. It doesn’t make sense to me, in light of that principle, for the Trump administration to stop funding SNAP-Ed.

The program has helped reduce the prevalence of many chronic diseases, and this could have been expected to yield up to $1 trillion in health care savings by 2030.

As the popular proverb goes: “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” SNAP-Ed taught over 1.2 million people how to fish every year, all for a little more than the latest estimates of what it’s going to cost to build the White House ballroom.

The Conversation

Diane Cress previously received funding from Gleaners Community Food Bank.

ref. Trump administration axed nutrition education program that saved more money than it cost, even as government encourages healthier eating – https://theconversation.com/trump-administration-axed-nutrition-education-program-that-saved-more-money-than-it-cost-even-as-government-encourages-healthier-eating-272002

Sixth year of drought in Texas and Oklahoma leaves ranchers facing wildfires and bracing for another tough year

Source: The Conversation – USA (2) – By Joel Lisonbee, Senior Associate Scientist, Cooperative Institute for Research in the Environmental Sciences, University of Colorado Boulder

Cattle auctions aren’t often all-night affairs. But in Texas Lake Country in June 2022, ranchers facing dwindling water supplies and dried out pastures amid a worsening drought sold off more than 4,000 animals in an auction that lasted nearly 24 hours – about 200 cows an hour.

It was the height of a drought that has gripped the Southern Plains for the past six years – a drought that is still holding on in much of the region in 2026.

The drought cost the agriculture industry across Kansas, Oklahoma and Texas an estimated US$23.6 billion from 2020 through 2024 alone, in lost crops, higher feed costs and forced cattle sell-offs. As rangeland dried out, the drought has also fueled wildfires, including several in Texas in early 2026.

Historically, droughts of this magnitude happen in the Southern Plains about once a decade, but the severe droughts of this century have been lasting longer, leaving water supplies, native rangelands and farms with little time to recover before the next one hits.

Many cattle producers and rangelands were still recovering from a severe 2010-2015 drought when a flash drought hit western Texas in spring 2020, marking the beginning of the current multibillion-dollar, multiyear and multistate drought. Ample spring rainfall in 2025 and severe flooding in central Texas that year weren’t enough to end the drought, and a powerful winter storm in late January 2026 missed the driest parts of the region.

Precipitation from a severe winter storm in late January 2026, shown in blue and measured in inches, largely missed the areas with the worst drought conditions, indicated by red contour lines.
UC Merced, NDMC

In a recent study with colleagues at the Southern Regional Climate Center and the National Integrated Drought Information System, we assessed the causes and damage from the ongoing drought in the Southern Plains.

We found three key reasons for the enduring drought and its damage: rising temperatures and a La Niña climate pattern; water supply shortages; and lingering economic impacts from the previous drought.

Weather and climate helped drive the drought

The Southern Plains is known to be a hot spot for rapid drought development, and the ongoing drought that started in 2020 is no exception.

Documented “flash droughts” – defined as periods of rapid drought onset or intensification of existing droughts – occurred at least five times in the region from 2020 to 2025. As global temperatures rise and climates warm, research warns that the frequency and severity of flash drought events will increase.

The U.S. Drought Monitor’s monthly updates from January 2020 through January 2026 show how drought moved around in the Southern Plains over those years but never let go. Darker colors reflect the intensity of drought in each location.
Joel Lisonbee; compiled from U.S. Drought Monitor

For the southern part of the Southern Plains, winter precipitation is closely linked to the El Niño–Southern Oscillation, a climate pattern that affects weather around the world. Five of the past six years exhibited a La Niña pattern, which typically means the region sees winters that are warmer and drier than normal.

La Niña was likely the primary driver – although not the only driver – of the drought for Texas and southwest Oklahoma, and one of the reasons drought conditions have continued into 2026.

The Southern Plains have a long history with severe droughts. The Dust Bowl of the early 1930s may be the best-known example. But a history with drought doesn’t make it any easier to manage when crops and water supplies dry up.

Deeply rooted water shortages

The heat and dryness since 2020 have left many of the region’s rivers, reservoirs and even groundwater reserves well below average.

San Antonio’s reservoirs all reached record-low levels in 2024 and 2025, as did the Edwards Aquifer, which provides water for roughly 2.5 million people. They were still low as 2026 began. Surface water and groundwater resources across central and western Texas have been depleted to the point that even a few big storms can’t replenish them.

A few major rivers flow into the Southern Plains from other drought-affected regions. Consider the Rio Grande, which begins in Colorado and winds through New Mexico and along Texas’ southern border: Not only has the Lower Rio Grande Valley in southern Texas missed out on needed precipitation this winter, but so have the Rio Grande headwaters in southern Colorado.

Colorado is facing a snow drought in winter 2026, as is much of the western U.S. If it continues, there will be less snowmelt come summer to feed rivers, such as the Rio Grande, or fill reservoirs. In early February, the Elephant Butte, Amistad and Falcon reservoirs, along the Rio Grande, were only 11%, 34%, and 20% full, respectively.

Lingering economic impacts

Like water supplies, the economy doesn’t just recover when the rains return.

One of the reasons the current drought has been so costly is that parts of the region had not fully recovered from the 2010-2015 drought when the latest one began in 2020. With only a five-year break between droughts, the landscape behaved like someone with an already weakened immune system who caught a cold.

Severe droughts over time in the Southern Plains
The percentage of land in different levels of drought or wetness for each month based on the nine-month Standardized Precipitation Index leading up to the selected date. Reds indicate drier conditions; blues indicate wetter conditions.
National Integrated Drought Information System, NOAA Drought.gov

During the 2010-2015 drought, cattle producers in Texas sold off about 20% of the statewide herd as water became scarce and rangeland dried up. Rebuilding a herd after a drought is a slow process. Pasture recovery can take a year or more, and a newborn heifer will take two years to mature and produce her own first calf.

Cattle herds had still not returned to pre-2010 levels when the 2022 drought peak forced another mass sell-off. From 2020 through 2024, Texas’ herd size declined from 13.1 million to 12 million; Oklahoma’s declined from 5.3 million to 4.7 million; and Kansas’ declined from 6.5 million to 6.15 million.

Looking beyond livestock, a large percentage of the Southern Plains’ crops failed in 2022, the peak year of the drought. In Texas, 25% of the corn crop was planted but never harvested, and 45% of the soybean crop was similarly abandoned. A normal season would have yielded a $2.4 billion cotton crop in Texas, but 74% of that crop was abandoned, slashing its value to roughly $640 million.

Ending the Southern Plains drought

Is the end in sight? With La Niña fading in early 2026 and its opposite, El Niño, potentially on the horizon, there’s a chance for wetter conditions that could reduce the drought in the fall and winter months of 2026.

But the Southern Plains still have to get through spring and summer first. Ending a drought like this requires consistent precipitation over several months, and drought conditions are likely to get worse before they get better.

This article, originally published Feb. 9, 2026, has been updated with new wildfires in Texas.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Sixth year of drought in Texas and Oklahoma leaves ranchers facing wildfires and bracing for another tough year – https://theconversation.com/sixth-year-of-drought-in-texas-and-oklahoma-leaves-ranchers-facing-wildfires-and-bracing-for-another-tough-year-275219

Why the ‘Streets of Minneapolis’ have echoed with public support – unlike the campus of Kent State in 1970

Source: The Conversation – USA – By Gregory P. Magarian, Thomas and Karole Green Professor of Law, Washington University in St. Louis

Ohio National Guardsmen on the Kent State University campus prepare to disperse student protesters on May 4, 1970. Troops later opened fire on students, killing four. Howard Ruffner/Getty Images

The president announces an aggressive, controversial policy. Large groups of protesters take to the streets. Government agents open fire and kill protesters.

All of these events, familiar from Minneapolis in 2026, also played out at Ohio’s Kent State University in 1970. In my academic writing about the First Amendment, I have described Kent State as a key moment when the government silenced free speech.

In Minneapolis, free speech has weathered the crisis better, as seen in the protests themselves, the public’s responses – and even the protest songs the two events inspired.

Protests and shootings, then and now

In 1970, President Richard Nixon announced he had expanded the Vietnam War by bombing Cambodia. Student anti-war protests, already fervent, intensified.

In Ohio, Gov. James Rhodes deployed the National Guard to quell protests at Kent State University. Monday, May 4, saw a large midday protest on the main campus commons. Students exercised their First Amendment rights by chanting and shouting at the Guard troops, who dispersed protesters with tear gas before regrouping on a nearby hill.

A video compilation of the deadly events at Kent State University on May 4, 1970.

With the nearest remaining protesters 20 yards from the Guard troops and most more than 60 yards away, 28 guardsmen inexplicably fired on the students, killing four and wounding nine others.

After the killings, the government sought to shift blame to the slain students. Nixon stated: “When dissent turns to violence, it invites tragedy.”

Minneapolis in 2026 presents vivid parallels.

As part of a sweeping campaign to deport undocumented immigrants, President Donald Trump in early January 2026 deployed armed U.S. Immigration and Customs Enforcement and Customs and Border Protection agents to Minneapolis.

Many residents protested, exercising their First Amendment rights by using smartphones and whistles to record and call out what they saw as ICE and CBP abuses. On Jan. 7, 2026, an ICE agent shot and killed activist Renee Good in her car. On Jan. 24, two CBP agents shot and killed protester Alex Pretti on the street.

The government sought to blame Good and Pretti for their own killings.

Different public reactions

After Kent State, amid bitter conservative opposition to student protesters, most Americans blamed the fallen students for their deaths. When students in New York City protested the Kent State shootings, construction workers attacked and beat the students in what became known as the “hard hat riot.” Afterward, Nixon hosted construction union leaders at the White House, where they gave him an honorary hard hat.

Protesters march through the streets of downtown Minneapolis on Jan. 25, 2026, one day after federal agents shot dead U.S. citizen Alex Pretti.
Roberto Schmidt/AFP via Getty Images

In contrast, most Americans believe the Trump administration has used excessive force in Minneapolis. Majorities both oppose the federal agents’ actions against protesters and approve of protesting and recording the agents.

The public response to Minneapolis has made a difference. The Trump administration has announced an end to its immigration crackdown in the Twin Cities. Trump has backed off attacks on Good and Pretti. Congressional opposition to ICE funding has grown. Overall public support for Trump and his policies has fallen.

Free speech in protests, recordings and songs

What has caused people to view the killings in Minneapolis so differently from Kent State? One big factor, I believe, is how free speech has shaped the public response.

The Minneapolis protests themselves have sent the public a more focused message than what emerged from the student protests against the Vietnam War.

Anti-war protests in 1970 targeted military action on the other side of the world. Organizers had to plan and coordinate through in-person meetings and word of mouth. Student protesters needed the institutional news media to convey their views to the public.

In contrast, the anti-ICE protests in Minneapolis target government action at the protesters’ doorsteps. Organizers can use local networks and social media to plan, coordinate and communicate directly with the public. The protests have succeeded in deepening public opposition to ICE.

In addition, the American people have witnessed the Minneapolis shootings.

Kent State produced a famous photograph of a surviving student’s anguish but only hazy, chaotic video of the shootings.

In contrast, widely circulated video evidence showed the Minneapolis killings in horrifying detail. Within days of each shooting, news organizations had compiled detailed visual timelines, often based on recordings by protesters and observers, that sharply contradicted government accounts of what happened to Good and Pretti.

Finally, consider two popular protest songs that emerged from Kent State and Minneapolis: Crosby, Stills, Nash & Young’s “Ohio” and Bruce Springsteen’s “Streets of Minneapolis.”

Bruce Springsteen sings ‘Streets of Minneapolis.’

Crosby, Stills, Nash & Young recorded, pressed and released “Ohio” with remarkable speed for 1970. The vinyl single reached record stores and radio stations on June 4, a month after the Kent State shootings. The song peaked at No. 14 on the Billboard chart two months later.

Neil Young’s lyrics described the Kent State events in mythic terms, warning of “tin soldiers” and telling young Americans: “We’re finally on our own.” Young did not describe the shootings in detail. The song does not name Kent State, the National Guard or the fallen students. Instead, it presents the events as symbolic of a broader generational conflict over the Vietnam War.

Springsteen released “Streets of Minneapolis” on Jan. 28, 2026 – just four days after CBP agents killed Pretti. Two days later, the song topped streaming charts worldwide.

The internet and social media let Springsteen document Minneapolis, almost in real time, for a mass audience. Springsteen’s lyrics balance symbolism with specificity, naming not just “King Trump” but also victims Pretti and Good, key Trump officials Stephen Miller and Kristi Noem, main Minneapolis artery Nicollet Avenue, and the protesters’ “whistles and phones,” before fading on a chant of “ICE out!”

Critics offer compelling arguments that 21st-century mass communication degrades social relationships, elections and culture. In Minneapolis, disinformation has muddied crucial facts about the protests and killings.

At the same time, Minneapolis has shown how networked communication can promote free speech. Through focused protests, recordings of government action, and viral popular culture, today’s public can get fuller, clearer information to help critically assess government actions.

The Conversation

Gregory P. Magarian does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the ‘Streets of Minneapolis’ have echoed with public support – unlike the campus of Kent State in 1970 – https://theconversation.com/why-the-streets-of-minneapolis-have-echoed-with-public-support-unlike-the-campus-of-kent-state-in-1970-274917

Last nuclear weapons limits expired – pushing world toward new arms race

Source: The Conversation – USA – By Matthew Bunn, Professor of the Practice of Energy, National Security and Foreign Policy, Harvard Kennedy School

Russian ballistic missiles roll in Red Square during a Victory Day military parade. AP Photo/Alexander Zemlianichenko

For the first time in more than half a century, there are no binding restraints on the buildup of the largest nuclear forces on Earth. The New START treaty expired on Feb. 5, 2026, ending the last agreed limits on U.S. and Russian nuclear forces.

New START limited the number of strategic nuclear weapons the United States and Russia could deploy to 1,550 each. It also limited the missiles and bombers those warheads were loaded on, required on-site inspections and data exchanges, barred interference with satellite monitoring, and established a joint commission to discuss disputes. It did not limit the number of nuclear weapons each side could hold in reserve.

With China rapidly building up its nuclear forces, intense rivalry between the United States, China and Russia, and evolving technologies – from precision conventional weapons to artificial intelligence complicating nuclear balances – there is a real potential of an unpredictable three-way nuclear arms competition.

Such a competition could increase the danger of nuclear conflict, which I believe is higher than it has been in decades.

The security of agreed restraint

While the particular numbers of warheads and delivery vehicles an accord specifies may not make an immense difference, nuclear agreements offer important advantages in four key areas:

  • Predictability, limiting the pressures to build up nuclear arsenals that come from worst-case analysis of what adversaries might build and the destabilization that unexpected new weapons can bring.

  • Transparency, through elements such as data exchanges, on-site inspections and limits on interfering with satellite monitoring, which give each side a better ability to understand what is going on with the others’ nuclear forces.

  • Reduced first-strike incentives, from banning or limiting particularly dangerous types of weapons.

  • Improved relations, through the mere fact that the other side is willing to limit the nuclear forces arrayed against you, which undermines the belief that they are implacably bent on your utter destruction. This reduces the intensity of hostility that can drive crises and escalation.

The expiration of the New START treaty upends decades of international nuclear stability.

After 1962’s Cuban missile crisis, President John F. Kennedy realized that relying on nuclear deterrence without any agreed nuclear restraints or risk-reduction measures was simply too dangerous. He moved quickly to negotiate the Limited Test Ban Treaty in 1963 and put in place a U.S.-Soviet hotline for crisis communication.

He also launched a series of initiatives that led to reductions in defense spending on both sides, cuts in production of nuclear materials for weapons, and even troop pullbacks in Europe. Every subsequent U.S. president has pursued nuclear arms control accords.

Moreover, the countries that have promised not to get nuclear weapons under the Nuclear Nonproliferation Treaty want to see the nuclear-armed nations living up to their treaty obligation to negotiate in good faith toward nuclear disarmament. As pressure builds for countries to get their own nuclear weapons, maintaining the nonproliferation regime and getting the non-nuclear countries’ votes for stronger nuclear safeguards or export controls is likely to require the nuclear-armed nations to accept at least some constraints of their own.

Critics of arms control point out that Russia has violated many past accords – and the Trump administration has accused both Russia and China of carrying out illicit nuclear tests, though it has not offered solid evidence in public so far. But despite these very real issues, key elements of these agreements were implemented, and they “left the United States safer,” as Secretary of State Marco Rubio has noted. More than four-fifths of the nuclear weapons that used to exist in the world have been dismantled.

New limits or buildup?

The U.S. is developing a new type of cruise missile that can carry a nuclear warhead and, like this Tomahawk, can be launched from submerged submarines.
U.S. Navy via Getty Images

So, what’s next? President Donald Trump ignored Russian President Vladimir Putin’s proposal that both sides stay within the limits of New START while they explored options for new steps. But Trump said he wants to negotiate a “better” deal on fewer nuclear weapons – a deal that would not only limit U.S. and Russian strategic forces but also China’s much smaller but rapidly growing nuclear forces and Russia’s large force of nonstrategic nuclear weapons – that is, ones for battlefield or regional use.

So far, though, no negotiations on follow-on accords are underway, and the administration has not offered to negotiate about any of the U.S. weapons systems that worry Russia and China.

Moreover, there is strong pressure in Washington to build up U.S. nuclear forces rather than reduce them, to deter both Russia and China – while also dealing with the smaller but still dangerous North Korean nuclear force. The United States has many hundreds of nuclear weapons in storage that could be brought out and put on existing missiles, along with empty missile tubes on submarines that could again be filled with missiles. And the U.S. is developing new weapons, such as a nuclear-armed, sea-launched cruise missile.

Constraints and challenges

In my view, the more than 1,500 strategic nuclear weapons the United States already has deployed – with a major modernization underway – provide a sufficient deterrent to aggression. And if the United States begins to build up, Russia will respond in kind, and China may go even further. Once a multisided buildup is underway, its momentum will be more difficult to reverse.

Fortunately, the United States, Russia and China all have strong national interests in avoiding an unrestrained nuclear race, which would leave all of them poorer and no more secure. While the United States has quite a few nuclear weapons in storage, its nuclear modernization is struggling with enormous delays and cost overruns, and its industrial base is simply not prepared for a major nuclear expansion.

Putin is building a war economy that can churn out a lot of weapons – but he knows his economy is a 10th the size of the U.S.’s, and he wants to focus on rebuilding the conventional forces being chewed up in his war on Ukraine, making nuclear competition a bad idea. China has an economy to match the U.S.’s and an unrivaled manufacturing capacity, but it, too, would be worse off if its buildup provokes a U.S. buildup in response and a collapse of nuclear restraints.

Despite these common interests, finding a path to new accords among at least three parties, rather than two, will not be easy. Coalitions in each capital will have to win arguments that an accord is in their nation’s interest at the same time. The parties will have to address in some way the non-nuclear technologies that affect nuclear balances, and technologies such as cyber weapons and artificial intelligence would be hard to count or verify.

U.S. political polarization might make it very difficult to get a two-thirds vote in the Senate to ratify a treaty – though there are many other possible approaches, from reciprocal political commitments to executive agreements.

Famously unpredictable, Trump might still reverse course and agree to some version of Putin’s proposal for a “strategic pause” in which neither the United States nor Russia would build up its nuclear capabilities for the time being, while talks on next steps were underway. That would have the advantage of offering time to explore the options before new nuclear buildups got locked in.

And that would give him more chance of reaching his oft-stated goal of being the one to bring home a deal to reduce nuclear weapons and the dangers they pose.

The Conversation

Matthew Bunn is a member of the Board of Directors of the Arms Control Association; is a member of the Committee on International Security and Arms Control of the National Academy of Sciences; has consulted for several U.S. national laboratories; and has served on the Academic Alliance of U.S. Strategic Command.

ref. Last nuclear weapons limits expired – pushing world toward new arms race – https://theconversation.com/last-nuclear-weapons-limits-expired-pushing-world-toward-new-arms-race-275749

The greatest risk of AI in higher education isn’t cheating – it’s the erosion of learning itself

Source: The Conversation – USA (2) – By Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

Will AI hollow out the pipeline of students, researchers and faculty that is the basis of today’s universities? Hill Street Studios/DigitalVision via Getty Images

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it?

These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag “at-risk” students, optimize course scheduling or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we’ve been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences.

As these technologies become better at producing knowledge work – designing classes, writing papers, suggesting experiments and summarizing difficult texts – they don’t just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship upon which these institutions are built, and on which they depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life:

AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising and institutional risk assessment. These are considered “nonautonomous” systems because they automate tasks, but a person is “in the loop” and using these systems as tools.

These technologies can pose a risk to students’ privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are “risk scores” generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed?

These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalized feedback tools and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners and on-demand explainers. Faculty use them to generate rubrics, draft lectures and design syllabuses. Researchers use them to summarize papers, comment on drafts, design experiments and generate code.

This is where the “cheating” conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

If students rely on generative AI to produce work for their classes, and feedback is also generated by AI, how does that affect the relationship between student and professor?
Eric Lee for The Washington Post via Getty Images

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you’re interacting with a human and when you’re interacting with an automated agent. A student reviewing material for a test should be able to tell if they are talking with their teaching assistant or with a robot. A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than complete transparency in such cases will be alienating to everyone involved and will shift the focus of academic interactions from learning to the technology of learning. University of Pittsburgh researchers have shown that these dynamics produce feelings of uncertainty, anxiety and distrust among students. These are problematic outcomes.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility – not only for students, but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that’s not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft and learning to spot one’s own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a researcher “in a box” – an agentic AI system that can perform studies on its own – is becoming increasingly realistic.

The growing sophistication and autonomy of technology systems mean that scientific research can increasingly be automated, potentially leaving people with fewer opportunities to gain skills by practicing research methods.
NurPhoto/Getty Images

Agentic tools are anticipated to “free up time” for work that focuses on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense, but more of the day-to-day labor of instruction is handed off to systems optimized for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the “routine” responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry that is pushing AI into universities, it may seem as if this type of work is “inefficient” and that students will be better off letting a machine handle it. But it is the very nature of that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually through doing the work of drafting, revising, failing, trying again, grappling with confusion and revising weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated?

One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgment and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimizing it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities and communities are formed in the process. In this version, the university is meant to serve as no less than an ecosystem that reliably forms human expertise and judgment.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

The Conversation

The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as the data ethics advisor to MindGuard, a startup focused on AI integration into companies’ workflow.

Jacob Burley receives funding from The Applied Ethics Center at UMass Boston.

ref. The greatest risk of AI in higher education isn’t cheating – it’s the erosion of learning itself – https://theconversation.com/the-greatest-risk-of-ai-in-higher-education-isnt-cheating-its-the-erosion-of-learning-itself-270243

Do animals have a future on Hollywood sets?

Source: The Conversation – USA (2) – By Cynthia Chris, Professor of Media Studies, City University of New York

Bear trainer Doug Seus plays with Bart the Bear, who’s appeared in over 20 TV shows and films. Jean-Louis Atlan/Sygma via Getty Images

There is a long and storied history of nonhuman actors, from Luke, the dog of silent star Roscoe “Fatty” Arbuckle, to the collies cast in the role of Lassie in film and on television. Bart the Bear racked up over 20 film and TV credits in the 1980s and 1990s, while countless horses have supported period dramas that now saturate streaming services.

But business has not been as good as it used to be for the animal trainers who specialize in renting creatures of all kinds to film and TV productions.

According to The Hollywood Reporter, it’s a trend that’s been building for at least 25 years, and it’s largely due to a mix of activism and technological advances, which I’ve observed in my studies of animals on screen.

Fewer roles to go around

Hollywood’s adoption of visual effects – also referred to as computer-generated imagery, or CGI – has had an outsized role in putting many animal actors out of work. Ever since “Jurassic Park” (1993) dared to commingle CGI dinosaurs with human actors, more and more digital animals have appeared alongside humans on screen.

Other factors have accelerated the trend.

The COVID-19 pandemic, the 2023 Hollywood actors and writers strikes and a recent dip in the number of new TV series being greenlit have meant fewer productions and fewer roles to go around, whether they’re written for humans or animals.

But even before these recent events, there were calls for Hollywood to radically reduce its dependence on animal actors.

In 2012, The Hollywood Reporter – the same trade magazine that recently lamented a downturn in animal rentals – published an exposé cataloging incidents in which animals died, were injured or were put at grievous risk on sets. These productions nonetheless went on to carry the famous “No Animals Were Harmed” credit awarded by the American Humane Association, despite the fact that, well, animals were harmed. American Humane maintained that the incidents were tragic but not the result of negligence.

In 2016, PETA released the results of undercover investigations documenting substandard living conditions and untreated medical conditions at Birds & Animals Unlimited, which operates animal training facilities for film and television. In 2024, the organization detailed neglect of animals in the care of Atlanta Film Animals. Both companies denied the allegations.

There are, of course, any number of ways to minimize or avoid using actual animals in film and TV altogether.

“Rise of the Planet of the Apes” and its sequels have used motion capture, with humans performing the movements of characters later rendered as chimpanzees, gorillas, bonobos and orangutans.

For Ang Lee’s 2012 production “Life of Pi,” visual effects artists created thousands of virtual animals, while director Darren Aronofsky opted for completely digital animals, supplemented by some practical props, in 2014’s “Noah.”

Bucking high-tech trends, the 2025 horror film “Primate” went old school without resorting to real animals, deploying a movement artist in a costume and prosthetics to play a murderously rabid chimp.

The 2025 horror flick ‘Primate’ doesn’t deploy CGI or an animal actor, but instead uses a costumed human to portray the maniacal ape.

Can CGI numb viewers to animal violence?

What do digital animals, these bestial avatars, make possible?

Undoubtedly, there are trainers who care deeply for their charges and uphold best practices in animal husbandry. But it stands to reason that the fewer captive animals, the better, and recent advances in AI have made visual effects and CGI even more realistic and easier to model.

However, substituting flesh-and-blood animals with those made of pixels seems to have created a canvas for unfettered abuse. Consider the brutal violence of the “Planet of the Apes” reboots, which include hand-to-hand combat, branding and a torturous crucifixion scene.

In the past, the fact that the animals on set were real sometimes curbed filmmakers’ most savage impulses; violence was implied or took place off-screen in family fare like “The Yearling” (1946) and “Old Yeller” (1957). At the same time, camera tricks and props have been used to create scenes of animal cruelty in many films, from “American Psycho” (2000) to “John Wick” (2014).

While the effects of violent media on viewers are notoriously hard to study, some evidence suggests that audiences can become desensitized to the real-world consequences of unhealthy and violent content. It’s easy to see how this desensitization could extend to watching cruelty toward animals on screen.

Viewers can still sniff out the virtual

A hybrid approach to portraying animals on screen seems to have taken hold, using what one scholar has called – in a reference to on-screen dogs – “composite canine performances.”

The team behind the 2025 version of “Superman,” for example, sought to create a realistic dog, right down to each scruffy patch of fur. But they needed it to defy gravity and other laws of physics. So they incorporated just enough live animal in preproduction to animate a mostly CGI creature, with director James Gunn’s own dog serving as the “model,” or “reference,” for the superdog, Krypto.

Director James Gunn’s dog was used to model the mostly CGI Krypto in 2025’s ‘Superman.’

This technique recalls the methods of Disney animators who were stumped by the challenge of creating the characters for “Bambi” (1942). So they studied animal anatomy, photographed deer in the wild and sketched animals brought into the studio in order to better capture their movements on paper.

But when it comes to live-action films grounded in everyday life, there’s still work on set for real animals. For one, it’s still usually cheaper to deploy the real thing. Moreover, most of the virtual animals on screen simply don’t look realistic enough to allow for the full suspension of disbelief that makes cinema magic.

That’s why in the 2025 adaptation of Helen Macdonald’s memoir, “H Is for Hawk,” filmmakers reportedly employed five goshawks to portray Mabel, the bird adopted by Helen (Claire Foy). And it’s why Academy Award nominee “Marty Supreme” featured an entire menagerie of live animals, including a horse, a camel, an armadillo, a dog, a rabbit and even a ping-pong-playing sea lion. Yes, the sea lion in the scene was real, but the ball wasn’t.

Future opportunities for trainers and their charges appear to rest on just how good visual effects can get. For some animal activists – not to mention the animals that have no say in their work – that day can’t come soon enough.

Moviegoers and animal advocates, meanwhile, might hope for a middle ground: a future in which only ethically treated animals appear on screen.

The Conversation

Cynthia Chris does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Do animals have a future on Hollywood sets? – https://theconversation.com/do-animals-have-a-future-on-hollywood-sets-273877

‘Learning to be humble meant taming my need to stand out from the group’ – a humility scholar explains how he became more grounded

Source: The Conversation – USA (3) – By Barret Michalec, Research Associate Professor of Nursing and Health Innovation, Arizona State University

A need to be seen as the biggest fish may stem from pride and insecurity. ballyscanlon via Getty Images

“Humble” is not a word my colleagues would use to describe me, especially early in my career.

In fact, when word got around that I was researching humility, I suspect more than a few choked on their coffee.

And even though I have spent over a decade exploring the concept as an attribute and as a practice, it wasn’t until I recently reflected on my own professional challenges that I truly understood how to embrace humility.

I want to share my journey, but first it is important to understand what humility is – and isn’t. It’s been extolled as a virtue for centuries, but it’s often mischaracterized.

In today’s culture, it can be mistaken as a humblebrag, which disguises a boast as modesty – for example, “I really hate talking about myself, but people keep asking how I managed to run a marathon while working full time.” Or it can resemble impostor phenomenon, the persistent experience of feeling intellectually or professionally fraudulent despite clear evidence of competence or success.

But research shows that humble people hold accurate views of their own abilities and achievements. They openly acknowledge their mistakes and limitations and are receptive to new ideas. Overall, they recognize their places within a larger whole and genuinely appreciate the value of others.

Humility doesn’t always earn praise. Sometimes the humble may be seen as meek, subservient or self-abasing.

For instance, many people praised former New Zealand Prime Minister Jacinda Ardern’s empathetic, self-effacing leadership during the COVID-19 pandemic, marked by openness and deference to experts. But some critics dismissed it as weak or soft. These negative views show the various ways people “see” humility.

Generally, though, when humility is understood as grounded self-awareness rather than self-erasure, it’s viewed as something worth cultivating and practicing. We see openness, curiosity, acknowledgment of others and a lack of ego in fictional characters like Ted Lasso, hero of the same-titled Apple TV series; Samwise Gamgee in the “Lord of the Rings” books; and Jean-Luc Picard, commander of the USS Enterprise in “Star Trek: The Next Generation.”

Humility is also evident in public figures, such as former President Jimmy Carter, children’s television host Fred Rogers, and Nelson Mandela, the anti-apartheid leader who served as the first Black president of South Africa.

An elderly man in a dark suit stands in front of a church congregation, raising a hand in greeting.
Former U.S. President Jimmy Carter speaks to the congregation at Maranatha Baptist Church before teaching Sunday school in his hometown of Plains, Ga., on April 28, 2019, at age 94. After leaving the White House in 1981, Carter taught Sunday school at the church on a regular basis.
Paul Hennessy/NurPhoto via Getty Images

I’m a sociologist with a focus on medical education and health care providers. At Arizona State University’s Edson College of Nursing and Health Innovation, I explore issues including causes of burnout, elements of team-based care and opportunities for emphasizing the human side of health care. In recent years, my work has focused on humility.

From my research and my own experience, I’ve learned that true humility isn’t self-erasure. It’s a sense of security and confidence that your value doesn’t depend on recognition and that you are just one member of a larger system with a multitude of contributors. By removing the need to dominate, humility fosters openness to collaboration, innovation and an awareness of how the systems around us work.

Still, in a world of Instagram likes and LinkedIn accolades, humility can be the virtue everyone seems to admire but few practice. It’s the one we say we want – until it requires us to confront the parts of ourselves that crave affirmation.

Climbing the professional ladder

I tend to stand out in a crowd. I’m 6-foot-4, with close-cropped hair, a heavy beard and tattoos. I also push myself to stand out professionally.

Starting in graduate school, I was determined to make my voice heard and sought after. I pursued nearly every opportunity, committee and position that came my way. No role was too small for me to accept.

I strived to present my work in top-tier journals and at conferences, and I cold-called prominent scholars to propose working together. And I constantly shared my findings and thoughts on social media.

Like many workplaces, the academic world has a set of defined success metrics, such as publications, citations of your work, grant funding and teaching evaluations from students. School culture and leadership influence what each college or university considers more or less valuable among those measures. To advance and get promoted, particularly to get tenure, it’s important to learn at an early stage what one’s department, college or university truly prioritizes.

I wanted to get tenure but also to be seen as an active citizen of academia – energetic, outspoken and unafraid to push boundaries. When my department chair described me as having my hair on fire, I took it as a compliment. I called it “making positive noise.”

Initially, the system rewarded that noise. I earned tenure at the University of Delaware and received departmental, college and national awards. I also was appointed to serve as associate dean and to direct a new research center. I felt validated, visible and valuable.

The sociology department at the University of Delaware had a typical academic culture that’s often summarized as “publish or perish.” The most important measures of scholars’ work were writing, publishing their work in respected journals and having other researchers cite those studies. Securing external funding from government, private companies or foundations was valued but was not as high a priority as publishing.

Screen shot of author Barret Michalec's 2019-2026 citations from his Google Scholar profile.
For many academic researchers, their number of publications and the frequency with which other scholars cite their articles are important measures of professional success.
Barret Michalec

A new beginning that felt like an end

In 2020 I received a new opportunity at Arizona State University, a much larger school that branded itself as a hub of innovation and entrepreneurship. I was offered the chance to direct the Center for Advancing Interprofessional Practice, Education and Research and to step into the shoes of a leader I deeply admired. I arrived expecting to be a big fish in a bigger pond.

I couldn’t have been more wrong.

I showed up imagining there’d be a bit of buzz around my arrival given my time at the University of Delaware. But reality didn’t match the script: no greeting, office or nameplate marked my place when I arrived.

Early conversations with administrators weren’t about my research or teaching visions – the things that I thought set me apart. Instead, I felt they tended to focus on how much external funding I could raise from foundations and government agencies. My new colleagues often spoke in a shorthand of grant-based acronyms when referring to what projects they were working on, a “language” I was woefully unfamiliar with.

To make matters worse, I arrived during COVID-19, with classes either canceled or taught online and faculty members working mainly from home. The hallway chatter, open doors and spontaneous collaboration that I was accustomed to were absent. I began to feel alienated and disoriented as a scholar.

Even after ASU resumed in-person classes in the fall of 2021, I felt like the silence and distance lingered. No students waited for office hours. I struggled to make connections with my colleagues. I eagerly proposed collaborations at a time when everyone else was still trying to find their footing in this new era of education.

My proposals for new classes and curricular programs hit up against institutional barriers I was unaware of. At one point, a college administrator asked, “How do we get you on other people’s grants?” – a question that I took to imply that they felt my research wasn’t strong enough.

It appeared that my colleagues in Edson College were accustomed to these values and spoke the language. I was a stranger in a strange land. Although I was producing some of my best work, measured in terms of publications and citations, I felt no one seemed interested. I had come from an environment where I felt known and valued to one where I seemed to be a nobody.

I felt as though I needed to staple my resume to my forehead and parade around the hallways asserting, like Ron Burgundy in the movie “Anchorman,” “I’m not quite sure how to put this, but … I’m kind of a big deal. People know me.”

Newsman Ron Burgundy gets a cool reception in a new media market in ‘Anchorman.’

The impact of feeling unseen

For people who have built careers by being highly engaged and visible, suddenly feeling unseen can be devastating. In any profession, a fear that you don’t belong at your workplace can be debilitating and make you question your own value.

I sought advice from peers and college leaders, and even hired a professional coach. Things only worsened. Curricular proposals were stalled or turned down. My center was shuttered in a restructuring, although it was meeting its goals and earning international recognition.

At first, I blamed ASU and Edson College for my feelings of disconnection. I thought the leadership structure and style were dysfunctional; that many colleagues were cold, unfriendly and conformist; and that the college’s stated values were inauthentic.

This series of what I came to call “unacknowledgments” sent me into a personal and professional tailspin. Negativity and self-doubt consumed me, and I truly worried that my career was over. Had I been blackballed? Why did it feel as though no one cared?

When the noise turns inward

I had spent years studying empathy – the ability to understand and feel what someone else is feeling – and how to cultivate it among health care professionals and students in order to support more patient-centered care. To that end, at the University of Delaware I had developed a program designed to foster empathy across health professions. It aimed to help students see one another as collaborators, build shared respect and recognize their collective role on the same health care delivery team.

But when I further analyzed the program’s outcomes from my office at ASU, I realized that empathy wasn’t enough. It could help students feel with others, but it didn’t necessarily help them see themselves, or others, differently.

I realized that what I really wanted the students to develop was humility. This step would require them to recognize their limits, accept that they were fallible, see themselves as part of a larger team and value others’ contributions.

That realization changed my research trajectory – and eventually, my professional life.

Medical personnel in protective gear stand around a surgical patient during a procedure.
Health care often involves teams whose members play varying roles. Here, Dr. Akrum Al-Zubaidi performs a bronchoscopy on patient Orlando Carrasco, with the help of his team, from left, Ana Stefan, R.N., Mike Galloway, respiratory therapist, and anesthesiologist Michael Kessler, M.D., on Aug. 7, 2017, at National Jewish Health in Denver, Colo.
Helen H. Richardson/The Denver Post via Getty Images

Research becomes a mirror

Initially, I approached humility solely as a scholar. I examined the history of the concept and gaps in existing research on it, and I analyzed how humility was connected to uncertainty and the impostor phenomenon. I explored how humility could enhance team-based care and developed a new way to define humility among health care professionals in order to promote more collaboration and patient-centeredness.

As my own professional world began to unravel, and as I dived deeper into the concept of humility through my research, something unexpected happened. I realized that humility wasn’t just an idea to study – it was becoming a mirror that made me rethink my own perspective.

Slowly, I began to see how pride and insecurity were entwined in my reactions to my new setting at ASU. I realized that my need to be noticed, and my insistence that others validate my worth, represented my own kind of arrogance.

Perhaps my ambition had been less about contributing and more about gaining external validation. I had lost the selfless wonder and awe that drive scholarly inquiry and curiosity. And now I had to confront what remained when the spotlight dimmed.

Humility, I began to understand, wasn’t just an abstract concept to explore “out there” among others. I needed to hone it internally by thinking beyond myself. By decentering my ego, I realized that I could nurture and sustain curiosity in its own right.

In short, I needed to practice what I was preaching. It wasn’t an easy lesson. I assume that cultivating humility never is.

To that end, I felt that it was essential to develop a program to help build humility “muscles.” In 2024 I developed HIIT for Humility, an online training package for individuals or groups, modeled after the fitness concept of high-intensity interval training. This program provides evidence-based strategies to help users start building “habits of humility,” such as acknowledgment of others and self-awareness.

Just as physical exercise requires consistency to produce results, so does the cultivation of humility. Leaning into HIIT for Humility workouts gradually eased my sense of alienation and defensiveness. I became more appreciative of others, less quick to judge and better able to listen to others’ perspectives. In doing so, I started to feel more confident and secure.

While I still took pride in my work, I began to see that my contributions were not the only ones that mattered. I also found that I could stretch into unfamiliar but necessary tasks, such as working harder to win federal and foundation grants and seeing the value of my colleagues’ contributions to science.

Why am I here?

Only a few years into this process, I can see that ASU and Edson College have unintentionally taught me humility by signaling, often quietly, which contributions are deemed essential and which forms of success carry the most weight. Navigating stalled proposals, shifting priorities and structural reorganizations has required me to recalibrate my ego, expectations and identity.

Not being seen as a “big fish” and being expected to persist without consistent recognition have required me to understand my work as part of a larger system with differing values and, at times, challenging constraints. Shifting to ASU forced me to rethink my identity as a professor and to reevaluate my sense of purpose from the inside out.

A colleague of mine often asks students who he feels are coasting along, “Why are you here?” Lately, I’ve taken that question personally. What is the point of being a professor – writing papers, submitting grant proposals, teaching courses? Why did I choose this path in the first place?

When I feel unseen, unheard or unappreciated, pondering why I’m here helps ground me. For anyone who is struggling to feel visible or valued at work, I strongly recommend considering this simple question.

Over time, I’ve stopped needing to be the big fish in the pond and measuring my worth in titles and awards. I now see that my responsibility as a scholar, teacher and human being is to stay curious, listen more deeply and make space for others’ voices.

Embracing humility, and consistently using my humility muscles, have helped me realize that I’m here to be part of the creative energy of academia, do the work and cultivate curiosity in my students, my peers and myself.

The Conversation

Barret Michalec does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. ‘Learning to be humble meant taming my need to stand out from the group’ – a humility scholar explains how he became more grounded – https://theconversation.com/learning-to-be-humble-meant-taming-my-need-to-stand-out-from-the-group-a-humility-scholar-explains-how-he-became-more-grounded-273402

Why Michelangelo’s ‘Last Judgment’ endures

Source: The Conversation – USA (3) – By Virginia Raguin, Distinguished Professor of Humanities Emerita, College of the Holy Cross

Michelangelo’s 16th-century fresco ‘The Last Judgment.’ Sistine Chapel collection via Wikimedia Commons

Michelangelo’s fresco of “The Last Judgment,” covering the wall behind the altar of the Sistine Chapel in Vatican City, is being restored. The work, which started on Feb. 1, 2026, is expected to continue for three months.

The Sistine Chapel is one of the great masterpieces of Renaissance art. As the setting where the College of Cardinals of the Catholic Church meets to elect a new pope, it was decorated by the most prestigious painters of the day. In 1480, Pope Sixtus IV commissioned Domenico Ghirlandaio, Sandro Botticelli, Pietro Perugino and Cosimo Rosselli to paint the walls. On the south are six scenes of the “Life of Moses,” and across on the north are six scenes of the “Life of Christ.”

In 1508, Pope Julius II commissioned Michelangelo to paint the ceiling. The theme is the Book of Genesis, the first book of the Bible. The images run from God’s creation of the world through the story of Noah, whom God directed to shelter humans and animals on an ark during the great flood. The ceiling’s most famous scene may be “God Creating Adam,” where Adam reaches out his arm to the outstretched arm of God the Father, but their fingers fail to meet.

At the sides, the artist juxtaposed the male Hebrew prophets and the female Greek and Roman sibyls, who were inspired by the gods to foretell the future. The ceiling was completed in 1512; then in 1536, Michelangelo was asked to create a painting for the wall behind the altar. For this immense work of 590 square feet (about 55 square meters), filled with 391 figures, he labored until 1541. He was then nearly 67 years old.

As an art historian, I have been aware how, from the beginning, Michelangelo’s “The Last Judgment” sparked controversy for its bold and heroic portrayal of the male nude.

Many layers of meaning

Michelangelo liked to consider himself primarily a sculptor, expressing himself in variations of the nude male body. Most famous may be the Old Testament figure of David about to slay Goliath, originally made for the Cathedral of Florence.

The artist’s ceiling for the Sistine Chapel had included 20 nude males as supporting figures above the prophets and sibyls. Originally, Michelangelo’s Christ of “The Last Judgment” was entirely nude. A later painter was hired to provide drapery over the loins of Christ and other figures.

“The Last Judgment” scene also contains multiple references to pagan gods and mythology. The image of Christ is inspired by early Christian images showing Christ beardless and youthful, similar to the pagan god of light, Apollo.

A section of a fresco shows a naked man with donkey’s ears, bound by a coiling snake and surrounded by beastlike figures.
Group of the damned with Minos, judge of the underworld.
Sistine Chapel Collection, Michelangelo via Wikimedia Commons

At the bottom of the composition is the figure of Charon, a personage from Greek mythology who rowed souls over the river Styx to enter the pagan underworld. Minos, the judge of the underworld, is on the extreme right.

Giorgio Vasari, a fellow artist and historian who knew Michelangelo personally, later recounted the criticism by a senior Vatican official, Biagio da Cesena. The official stated that it was disgraceful that nude figures were exposed so shamefully and that the painting seemed more fit for public baths and taverns.

Michelangelo’s response was to place the face of Biagio on Minos, the judge of the underworld, and give him donkey’s ears, symbolizing stupidity.

A painted scene shows a bearded man holding a knife in one hand and a flayed skin with a human face in the other, while another figure sits just behind him.
A detail of a scene connected to the Apostle Bartholomew in ‘The Last Judgment.’
Sistine Chapel Collection via Wikimedia

Michelangelo included a reference to his own life in a detail connected to the Apostle Bartholomew, who is located to the lower right of Christ. The apostle was believed to have met his martyrdom by being flayed alive. In his right hand, he holds a knife and, in his left, his flayed skin whose face is a distorted portrait of the artist.

Michelangelo thus placed himself among the blessed in heaven, but also made it into a joke.

Thought-provoking imagery

The Last Judgment is a common theme in Christian art. Michelangelo, however, pushes beyond simple illustration to include pagan myths as well as to challenge the traditional depiction of a calm, bearded judge. He uses dramatic imagery to provoke deeper thought: After all, how does anyone on Earth know what the saints do in heaven?

In these decisions, Michelangelo displayed his sense of self-confidence to introduce new ideas and his goal to engage the viewer in new ways.

A digital reproduction of the painting will be displayed on a screen for visitors to the Sistine Chapel during this period of restoration. Behind the screen, technicians from the Vatican Museums’ Restoration Laboratory will work to restore the masterpiece.

The Conversation

Virginia Raguin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why Michelangelo’s ‘Last Judgment’ endures – https://theconversation.com/why-michelangelos-last-judgment-endures-275323