Overwhelm the public with muzzle-velocity headlines: A strategy rooted in racism and authoritarianism

Source: The Conversation – USA – By Angie Chuang, Associate Professor of Journalism, University of Colorado Boulder

The seemingly unending barrage of stressful news is a strategy with ties to the past. zimmytws/iStock via Getty Images

The headlines documenting President Donald Trump’s plan to send federal troops to San Francisco followed a familiar arc. “Trump claims ‘unquestioned power’ in vow to send troops to San Francisco,” The Guardian reported on Oct. 20, 2025. The next day, the San Francisco Chronicle blared: “S.F. threatens to sue if Trump brings in National Guard.” Then, on Oct. 23, “Trump reverses his decision to send troops to San Francisco,” as ABC News put it, after Trump posted that conversations with the city’s mayor and tech moguls had swayed him.

It was another example of how Trump’s shifting policy positions, racially inflammatory statements and threats frequently fuel a flurry of headlines, reflecting what some psychologists are calling “media saturation overload” or “Trump stress disorder.”

This barrage of information may seem like overcommunication from a hyperactive administration. But it is much more than that.

Scholars have found that the constant, often conflicting and at times false information coming out of the White House, shared via social media posts and the conventional news media, leads members of the public to see truth and fact as relative and makes them more likely to dismiss those who disagree with them as untruthful. The result is widespread doubt about what's real and what isn't.

This citizen paralysis creates what philosopher Hannah Arendt described in “The Origins of Totalitarianism” as a general public “for whom the distinction between fact and fiction … no longer exist.” When lies are truth and truth is derided as lies, Arendt wrote, ordinary people lose their bearings and can be manipulated for totalitarian objectives.

Meanwhile, many journalists have openly acknowledged fatigue with the pace and nature of the Trump administration’s news cycles, amid frequent newsroom layoffs, mergers and closures.

I am a longtime journalist and now scholar of journalism and race, trained to see the methods and aims behind political leaders’ press operations. And as I show in my forthcoming book, the Trump administration’s rhetorical strategies echo the playbooks of authoritarian and white supremacist organizations such as the Third Reich and some factions of the modern alt-right movement. They are intended to narrow the scope of who belongs as an American.

Headlines at ‘muzzle velocity’

The Trump administration’s rhetorical strategies include claiming victim status while often laying blame on immigrants or other scapegoats in ways that I believe betray racist intent. At the same time, the administration has overwhelmed journalists and the public with breaking news.

This strategy was laid out by Steve Bannon, an influential Trump supporter and strategist in his first administration, during a 2019 PBS “Frontline” interview, when he described the media as “the opposition party.”

“They’re dumb and they’re lazy, they can only focus on one thing at a time,” he said. “All we have to do is flood the zone. … Bang, bang, bang. These guys will never – will never be able to recover. But we’ve got to start with muzzle velocity.”

Steve Bannon outlined the strategy of overwhelming people with announcements at what he termed muzzle velocity in a 2019 interview with “Frontline.”

Bannon has long been associated with the alt-right, a movement known for rhetorical tactics that minimize and obfuscate its true aims.

A strategy forged in Trump’s first term

As I detail in my book, “American Otherness in Journalism: News Media Representations of Identity and Belonging,” Trump and his key advisers have been developing, refining and ramping up their news media manipulation for a long time.

An early example is how the administration used these tactics in Trump’s public responses to the fatal violence at the August 2017 Unite the Right rally in Charlottesville, Virginia.

The two-day rally was organized by a white nationalist blogger and attended by members of neo-Nazi, white supremacist and far-right militias protesting the removal of a statue of Confederate Gen. Robert E. Lee from a Charlottesville park. They marched with tiki torches, flew Confederate and Nazi flags and chanted antisemitic and racist slogans.

Amid violent clashes with counterprotesters on the second day, a neo-Nazi sympathizer drove into a crowd, killing a 32-year-old woman and injuring many others.

Emergency workers help people after a car drove into a large group of counterprotesters in the aftermath of a white nationalist rally in Charlottesville, Va., on Aug. 12, 2017, killing one and injuring 19.
AP Photo/Steve Helber

My study of television news coverage of Unite the Right found that the majority of news reports focused on the contradictory and inflammatory statements that Trump made about the antisemitic and racist protesters. His remark at an Aug. 15, 2017, press conference assigning blame to both sides garnered the most news media attention: “I think there is blame on both sides,” he said. “You had some very bad people in that group. You also had some very fine people on both sides.”

Exploiting chaos

The uncertainty surrounding what he meant created a cycle of news stories implying and denying that he sympathizes with white supremacists.

This is-he-or-isn’t-he intrigue spurred a surge of what fits the description of Bannon’s “muzzle-velocity” news headlines: “Trump declares ‘racism is evil’ amid pressure over Charlottesville” followed closely by “Trump defends White-nationalist protesters” and “Why Trump can’t get his story straight on Charlottesville.”

With the focus on Trump’s comments and what he might have really meant, the news media at the time largely missed the long-term threat posed by these white supremacist and other extremist groups.

Echoing a playbook from the past

Scholars have identified the fascist roots of these “post-truth” strategies: strongmen leaders uninterested in establishing leadership through honesty and transparency.

A recent scholarly analysis of Trump’s leadership concludes that the second-term president is overwhelming the public into “organized despair” by pitting races against each other while targeting minority groups as scapegoats, a tactic that hearkens back to 1930s Germany.

A 2019 analysis of Trump’s narrative style describes how he presents himself as a “strongman” fighting invisible forces of censorship and suppression. It also points out that this was part of the appeal of fascist leaders such as Mussolini and Hitler.

Researchers of Nazi propaganda identified key tactics in the German press such as name-calling and lumping together groups seen as opposition – communists, liberals and Jews – until public understanding of those groups blurred into phrases like “enemies of Germany.” The messaging was constant and immersive, carried in local and national newspapers, radio, film and posters.

A key part of Trump’s rhetorical strategy is using race without directly referring to it. For example, Trump has described cities with large nonwhite populations such as Washington, D.C., and Chicago as “out of control” or “dirty,” contrary to actual crime statistics. He’s also questioned Kamala Harris’ racial identity, suggesting she “happened to turn Black.” And referring to Black football players who had been protesting systemic racism by kneeling during the national anthem, Trump said, “Get that son of a bitch off the field right now,” which many observers interpreted as racist because he was insulting people of color for the act of protesting racism.

This racial coding has been used by white supremacist groups to mask their true intent. They also use less overt labels such as “alt-right” or “pro-white” as a “rhetorical bridge” to the mainstream public.

In the case of the NFL protesters, the plausible deniability became an actual denial. Trump perfected this move when, during a 2020 debate with Joe Biden, he said, “Proud Boys – stand back and stand by,” referencing another group accused of thinly veiled racism.

Drowning in headlines

I believe that the endgame for this strategy is authoritarian power that greatly narrows the scope of who truly belongs and has rights in this country as an American.

This media saturation – drowning the public with a thousand Trump-generated headlines – allows his administration to keep dominating and controlling national attention.

But the media-consuming public can use the tools they have to encourage news outlets to better inform the public by identifying the media saturation strategy and reporting on why leaders are using it.

Otherwise, if news consumers let the headline overload do what it’s intended to do, and become overwhelmed and paralyzed, they become pawns in what I consider a ploy to make America less egalitarian and less democratic.

The Conversation

Angie Chuang is affiliated with the Association for Education in Journalism and Mass Communication and the Boulder Faculty Assembly.

ref. Overwhelm the public with muzzle-velocity headlines: A strategy rooted in racism and authoritarianism – https://theconversation.com/overwhelm-the-public-with-muzzle-velocity-headlines-a-strategy-rooted-in-racism-and-authoritarianism-267491

Seashells from centuries ago show that seagrass meadows on Florida’s Nature Coast are thriving

Source: The Conversation – USA (2) – By Michal Kowalewski, Thompson Chair of Invertebrate Paleontology, University of Florida

Seagrass meadows are an essential part of Florida’s coastal ecosystem. Jenny Adler

During a day at the beach, it’s common to see people walking up and down the shore collecting seashells.

As a paleontologist and a marine ecologist, we look at shells a bit differently than the average beachcomber. Most people dig up shells in the sand and see beautiful color patterns or unusual shapes. But we tend to focus on how old these shells are and what they tell us about the habitat they come from.

You may be surprised to learn that the translucent spiral shell you plucked from the sand belonged to a snail that lived long before Columbus sailed to the New World. And that unassuming clamshell you might nonchalantly toss away belonged to a mollusk that filtered seawater when pharaohs ruled Egypt.

In recent decades, scientists have used methods such as radiocarbon dating to assess the age of shells, along with bones and other skeletal remains, scattered around Earth’s surface.

Increasingly, paleontologists and conservation biologists like us are turning to these remains as potential treasure troves of information about what various habitats were like before humans entered the picture. The insights we glean from this approach, known as conservation paleobiology, can result in more effective conservation, restoration and management strategies aimed at the protection or recovery of many essential habitats.

This approach has shown, among other things, that cows reshaped shellfish communities on the California shelf, caribou used the same calving grounds for millennia, and Caribbean sharks were much more diverse in the past.

Over the past decade, we have applied conservation paleobiology to Florida’s Nature Coast, home to an extensive and intricate patchwork of seagrass meadows and sand. Prior to our studies, scientists’ understanding of those meadows was largely uninformed by historical data.

A curious young manatee approached our team of scientific divers at work in Wakulla Springs in May 2020. This charismatic marine mammal inhabits seagrass meadows along Florida coasts, but in the winter and spring it shelters in warm waters of Florida springs and rivers.
Michal Kowalewski

Why seagrass matters

It may not be obvious at first glance why we should be interested in the history of seagrass meadows.

But these meadows are among the most important structural habitats on our planet. Myriad species, including sea turtles and manatees, forage, shelter or reproduce in those habitats, making seagrasses major hot spots of biodiversity.

Beyond these benefits, seagrasses offer extremely valuable services. They oxygenate ocean waters, draw down carbon dioxide and stabilize bottom sediments. And critically for Florida’s coastline, seagrass can dampen wave energy, which helps to protect shorelines and coastal communities from the punishing effects of tropical storms and hurricanes.

By providing all these services, seagrasses fuel a tremendous economic engine that generates global revenue in excess of US$6 trillion annually, according to an analysis published in the journal Nature Reviews Biodiversity in February 2025.

Unfortunately, seagrass meadows are in decline globally, vanishing rapidly due to broad-scale environmental changes and an onslaught of local human impacts. Efforts are underway all over the world to protect seagrasses that still exist and restore those that have been lost.

Found in Citrus County, Florida, in 1989, this exceptional rock slab preserves multiple blades of seagrass, proving that these grasses have been around Florida for at least 40 million years.
Roger W. Portell, Florida Museum of Natural History

Shells in Florida’s seagrass meadows

The challenge inherent to our research is that seagrasses don’t have a hard skeleton, so they are very rarely found in the fossil record.

Fortunately, we found that the shells of mollusks that prefer to dwell in seagrass are a reliable proxy for the grass itself. In general, the quality of ecological data provided by fossil shellfish is outstanding.

When living and dead organisms are alike, we can infer that local ecosystems have not changed notably despite human activities. Conversely, when live and dead mollusk species differ, it usually is a sign that a habitat has been heavily altered by humans.

Location, location, location

In our initial study, our team examined a roughly 40-mile (65-kilometer) swath of nearshore habitats in an area just north of the Suwannee River.

We found that seagrass meadows often span only a few acres, forming a regional patchwork of vegetated and open-sand habitats. We also observed that distinct sets of mollusk species inhabit meadows and open sands today. This was not surprising, as many previous studies have shown that different mollusks live in seagrass and open-sand habitats.

Next, we looked at the shells of dead mollusks found in surface sediments in the area. Using radiocarbon dating, we showed that about half of these shells belonged to mollusks that lived prior to the Industrial Revolution. Many shells dated back to previous millennia.

If these small patches of seagrass meadows were waxing, waning or shifting location over the recent centuries, then we would expect each spot on the seafloor to harbor a mix of dead shells representing species from both habitats. However, we found that the species of dead mollusks in seagrass patches were remarkably similar to those that live there now. The same was the case for the mollusks from open sands.

This suggests that this mosaic of seagrass patches and open-sand bottoms has been remarkably stable for hundreds of years. We do not know why the seagrass consistently thrived for centuries in specific spots within a seemingly uniform environment. But whatever the reason, this habitat is not a mosaic of meadows in constant flux, but rather, a seascape that has remained the same for a long time.

This is an important find for conservation efforts. It means that it may be unwise to assume that we can compensate for seagrass losses by simply planting new meadows in open-sand habitats.

These mollusk shells were collected by divers from Florida seagrass meadows in Tampa Bay in October 2025. Such shells typically provide a record of diverse organisms that inhabited the area over hundreds of years.
Invertebrate Paleontology Division, Florida Museum of Natural History

Broadening the scope

In our newest study, we broadened our scope to compare living mollusks and dead mollusks across multiple estuaries along the Nature Coast, a 93-mile (150-kilometer) stretch of Florida’s Gulf Coast.

As with our first study, this broader study revealed many remarkable similarities between the mollusks that live there now and the mollusks from previous centuries and millennia, documented by shells.

We found that the mollusks that are common today and those that were common in the past represent virtually the same suite of species, and their relative abundance stayed steady, too.

Even more remarkably, both the live mollusks and shells from previous centuries document the same changes in dominant mollusk species between the southern and northern regions of the study area.

Today, mollusks are not the same everywhere along the Nature Coast. This reflects the fact that coastal waters become increasingly nutrient-rich toward the north. Consequently, seagrass is taller and denser moving north, and the suites of mollusk species living there change as well.

The shells of dead mollusks tell the same story. This indicates that not much has changed along this stretch of the Gulf Coast since preindustrial times.

Highlighting what’s working

Knowing that seagrass meadows in this area have maintained their ecological character and integrity for centuries or longer is a powerful argument for their continued protection.

Understandably, most conservation paleobiology studies have focused on threatened species, degraded habitats or imperiled systems, such as reef sharks, oyster beds or freshwater mussels. As a result, these studies generally document population collapse, biodiversity loss, habitat shrinking and overall ecosystem decline.

But we believe it is equally important for investigators in our field to study systems that are believed to be stable and resilient. In this case, the unspoiled status of the Nature Coast seagrass meadows makes them a much-needed benchmark to assess the state of other seagrass systems that have been altered by human activities. This can offer insights into which conservation efforts are working and how best to restore and maintain similar habitats elsewhere.

The Conversation

Michal Kowalewski receives funding from the US National Science Foundation, University of Florida Foundation and the Felburn Foundation, Florida.

Thomas K. Frazer receives funding from the National Oceanographic and Atmospheric Administration, Florida Fish and Wildlife Conservation Commission, Florida Department of Environmental Protection, Florida Department of Transportation, and South Florida Water Management District and The Ocean Conservancy.

ref. Seashells from centuries ago show that seagrass meadows on Florida’s Nature Coast are thriving – https://theconversation.com/seashells-from-centuries-ago-show-that-seagrass-meadows-on-floridas-nature-coast-are-thriving-264170

AI could worsen inequalities in schools – teachers are key to whether it will

Source: The Conversation – USA (2) – By Katie Davis, Professor Information School and Adjunct Associate Professor, College of Education, University of Washington

Meeting about AI: Teachers see some efficiencies with AI but don’t always feel like they have the resources to learn how to best use it for teaching. Joe Lamberti/AP Images

Today’s teachers find themselves thrust into a difficult position with generative AI. New tools are coming online at a blistering pace and being adopted just as quickly, whether they’re personalized tutors and study buddies for students or lesson plan generators and assignment graders for teachers. Schools are traditionally slow to adapt to change, which makes such rapid-fire developments especially destabilizing.

The uncertainties accompanying the artificial intelligence onslaught come amid existing challenges the teaching profession has faced for years. Teachers have been working with increasingly scarce resources – and even scarcer time – while facing mounting expectations not only for their students’ academic performance, but also their social-emotional development. Many teachers are burned out, and they’re leaving the profession in record numbers.

All of this matters because teacher quality is the single most important factor in school influencing student achievement. And the impact of teachers is greatest for students who are most disadvantaged. How teachers end up using, or not using, AI to support their teaching – and their students’ learning – may be the most crucial determinant of whether AI’s use in schools narrows or widens existing equity gaps.

We have been conducting research on how public school teachers feel about generative AI technologies.

The initial results, which are currently under review, reveal deep ambivalence about AI’s growing role in K-12 education. Our work also shows how inadequate training and unclear communications could worsen existing inequalities among schools.

A ‘thought partner’ for busy teachers

As part of a larger project examining AI integration in education, we interviewed 22 teachers in a large public school district in the United States that has been an early and enthusiastic adopter of AI. The district serves a multilingual and socioeconomically diverse student population, with over 160 languages spoken and approximately three-quarters of students eligible for free or reduced-price lunch.

The teachers who participated in our study spanned elementary, middle school and high school grade levels, and represented a variety of subject areas, including science, technology, engineering and mathematics, social studies, special education, and culturally and linguistically diverse education. We asked these teachers to describe how they first encountered generative AI tools, how they currently use them, and the broader shifts they have observed in their schools. Teachers also reflected on both the opportunities and challenges of using AI tools in their classrooms.

Mirroring a recent survey finding that AI has helped teachers save up to six hours per week of work, the teachers in our study pointed to AI’s ability to create more space in the day for themselves and their students. Turning to AI for help writing lesson plans and assessments not only saves time, but it also gives teachers a tool for brainstorming ideas, helping them feel less isolated in their work. One high school teacher with over 11 years’ experience reflected:

“The most significant benefit that AI has brought to my life as a teacher is having work-life balance. It has decreased my stress 80-fold because I am able to have a thought partner. Teachers are really isolated, even though we work with people constantly … When I’m exhausted, it gives me support and help with ideas.”

Why lack of training matters

However, not all teachers felt well-equipped to benefit from AI. Much of what they told us boiled down to a lack of resources and other professional support. An elementary school classroom teacher explained:

“It’s just a lack of time. We don’t really get much planning time, and it would be a new tool to learn, so we would have to take the time personally to learn how to use it and where to find everything.”

Many teachers underscored the need for – and current lack of – professional development offerings to help them understand and integrate AI into their teaching.

Research on previous waves of technological innovations shows that under-resourced schools serving disadvantaged students are typically the least well-equipped to provide teachers with the professional support they need to make the most of new technologies.

Because well-resourced schools are far more likely to offer such support, the introduction of new technologies in schools tends to reinforce existing inequities in the education system.

When it comes to AI, well-resourced schools are best positioned to give teachers time, support and encouragement to “tinker” with AI and discover how and whether it can support their teaching and learning goals.

‘You need a relationship’ to learn

Our research also uncovered the importance of preserving the relational nature of teaching and learning, even – or perhaps especially – in the age of AI. As one middle school social studies teacher observed:

“A machine can give you information, but most students we know are not able to get information from something that’s just printed out for them and put it into their heads. You need a relationship. Some kids can do online school or read a book and teach themselves, but that’s like 2%. Most kids need a social environment to do it.”

Even as schools integrate AI into classwork, teachers still need to learn how to implement the technology to help their students learn.
Jae C. Hong/AP Images

Here again, prior research shows us that teachers in well-resourced schools are better equipped to introduce new technologies in ways that augment rather than undermine the relational dimensions of teaching and learning. And again, teachers are crucial in determining how and whether AI, like all new technologies, is used to support their teaching and student learning.

That’s why we believe the practices established during this current period of rapid AI development and adoption will profoundly influence whether educational inequities are dismantled or deepened.

Grounded in the classroom

Going forward, we see the need for research to examine how generative AI is changing teachers’ practice and relationship to their work. Their input can inform practices that empower teachers as professionals and advance student learning.

This approach requires adequate institutional support at the school and district levels. It also means listening to the real experiences of teachers and students instead of responding to the promised benefits touted by education technology companies.

The Conversation

Katie Davis has received funding from the Spencer Foundation.

Aayushi Dangol does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI could worsen inequalities in schools – teachers are key to whether it will – https://theconversation.com/ai-could-worsen-inequalities-in-schools-teachers-are-key-to-whether-it-will-266140

Anxiety over school admissions isn’t limited to college – parents of young children are also feeling pressure, some more acutely than others

Source: The Conversation – USA (2) – By Bailey A. Brown, Assistant Professor of Sociology, Spelman College

Shifting policies such as school choice give parents more school options than they had a few decades before. iStock/Getty Images Plus

Deciding where to send your child to kindergarten has become one of the most high-stakes moments in many American families’ lives.

A few factors have made selecting an elementary school particularly challenging in recent years. For one, the number of schools parents can pick from has grown over the past few decades, ranging from traditional public and private schools to an expanding roster of magnet and charter programs. There are also new policies in some places, such as New York City, that allow parents to select not just their closest neighborhood public school but schools across and outside of the districts where they live.

As a scholar of sociology and education, I have seen how the expanding range of school options – sometimes called school choice – has spread nationwide and is a particularly prominent factor in New York City.

I spoke with a diverse range of more than 100 New York City parents across income levels and racial and ethnic backgrounds from 2014 to 2019 as part of research for my 2025 book, “Kindergarten Panic: Parental Anxiety and School Choice Inequality.”

All of these parents felt pressure when trying to select a school for their elementary school-age children, and school choice options have only expanded since the COVID-19 pandemic.

Some parents experience this pressure a bit more acutely than others.

Women often see their choice of school as a reflection of whether they are good moms, my interviews show. Parents of color feel pressure to find a racially inclusive school. Other parents worry about finding niche schools that offer dual-language programs, for example, or other specialties.

Children arrive for class at an elementary school in Brooklyn in 2020.
Angela Weiss/AFP via Getty Images

Navigating schools in New York City

Every year, about 65,000 New York City kindergartners are matched to more than 700 public schools.

New York City kindergartners typically attend the public school nearest their home and get priority placement there. This school is often called their zoned school.

Even so, a spot at your local school isn’t guaranteed – students get priority if they apply on time.

While most kindergartners still attend their zoned schools, the share doing so is shrinking: 72% of the city’s kindergartners attended their zoned school in the 2007-08 school year, compared with 60% in 2016-17.

One reason is that since 2003, New York City parents have been able to apply to out-of-zone schools when seats were available. And in 2020, when the COVID-19 pandemic began, all public school applications moved entirely online. This shift made it easy for parents to rank 12 school options they liked, inside and outside their zones.

Still, New York City’s public school system remains one of the most segregated in the country, divided by race and class.

Pressure to be a good mom

Many of the mothers I interviewed from 2015 through 2019 said that getting their child into what they considered a “good” school reflected good mothering.

Mothers took primary responsibility for the school search, whether they had partners or not, and regardless of their social class or racial and ethnic background.

In 2017, I spoke with Janet, a white, married mother who at the time was 41 years old and had an infant and a 3-year-old. Janet worked as a web designer and lived in Queens. She explained that she started a group in 2016 to connect with other mothers, in part to discuss schools.

Though Janet’s children were a few years away from kindergarten, she believed that she had started her research for public schools too late. She spent multiple hours each week looking up information during her limited spare time. She learned that other moms were talking to other parents, researching test results, analyzing school reviews and visiting schools in person.

Janet said she wished she had started looking for schools when her son was 1 or 2 years old, like other mothers she knew. She expressed fear that she was failing as a mother. Eventually, Janet enrolled her son in a nonzoned public school in another Queens neighborhood.

Pressure to find an inclusive school

Regardless of their incomes, Black, Latino and immigrant families I interviewed also felt pressure to evaluate whether the public schools they considered were racially and ethnically inclusive.

Parents worried that racially insensitive policies related to bullying, curriculum and discipline would negatively affect their children.

In 2015, I spoke with Fumi, a Black, immigrant mother of two young children. At the time, Fumi was 37 years old and living in Washington Heights in north Manhattan. She described her uncertain search for a public school.

Fumi thought that New York City’s gifted and talented programs at public schools might be a better option academically than other public schools that don’t offer an advanced track for some students. But the gifted and talented programs often lacked racial diversity, and Fumi did not want her son to be the only Black student in his class.

Still, Fumi had her son take the gifted and talented exam in 2015 and enrolled him in one of these programs for kindergarten.

Once Fumi’s son began attending the gifted and talented school, Fumi worried that the constant bullying he experienced was racially motivated.

Though Fumi remained uneasy about the bullying and lack of diversity, she decided to keep him at the school because of the school’s strong academic quality.

Pressure to find a niche school

Many of the parents I interviewed who earned more than US$50,000 a year wanted to find specialty schools that offered advanced courses, dual-language programs and progressive-oriented curriculum.

Parents like Renata, a 44-year-old Asian mother of four, and Stella, a 39-year-old Black mother of one, sent their kids to out-of-neighborhood public schools.

In 2016, Renata described visiting multiple schools and researching options so she could potentially enroll her four children in different schools that met each of their particular needs.

Stella, meanwhile, searched for schools that would de-emphasize testing, nurture her son’s creativity and provide flexible learning options.

In contrast, the working-class parents I interviewed who made less than $50,000 annually often sought schools that mirrored their own school experiences.

Few working-class parents I spoke with selected out-of-neighborhood and high academically performing schools.

New York City data points to similar results – low-income families are less likely than higher-earning families to enroll their children in schools outside of their neighborhoods.

For instance, Black working-class parents like 47-year-old Risha, a mother of four, and 53-year-old Jeffery, a father of three, both of whom attended New York City neighborhood public schools as children, told me in 2016 that they decided to send their children to local public schools.

Based on state performance indicators, students at these particular schools scored lower on standardized assessments than the citywide average.

Students write down and draw positive affirmations on poster board at P.S. 5 Port Morris, a Bronx elementary school, in 2021.
Brittainy Newman/Associated Press

Cracks in the system

The parents I spoke with all live in New York City, which has a uniquely complicated education system. Yet the pressures they face are reflective of the evolving public school choice landscape for parents across the country.

Parents nationwide are searching for schools with vastly different resources and concerns about their children’s future well-being and success.

When parents panic about kindergarten, they reveal cracks in the foundation of American schooling. In my view, parental anxiety about kindergarten is a response to an unequal, high-stakes education system.

The Conversation

Bailey A. Brown does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Anxiety over school admissions isn’t limited to college – parents of young children are also feeling pressure, some more acutely than others – https://theconversation.com/anxiety-over-school-admissions-isnt-limited-to-college-parents-of-young-children-are-also-feeling-pressure-some-more-acutely-than-others-265537

FDA recall of blood pressure pills due to cancer-causing contaminant may point to higher safety risks in older generic drugs

Source: The Conversation – USA (3) – By C. Michael White, Distinguished Professor of Pharmacy Practice, University of Connecticut

Nitrosamines are by-products of many common chemical reactions. FatCamera/iStock via Getty Images Plus

A generic blood pressure drug called prazosin, made by Teva Pharmaceuticals, is being recalled by the Food and Drug Administration because it contains elevated levels of cancer-causing chemicals called nitrosamines.

The recall, which Teva announced on Oct. 7, 2025, affects more than 580,000 prazosin capsules. Prazosin is prescribed to around 510,000 patients yearly and is used to treat post-traumatic stress disorder as well as high blood pressure.

I am a pharmacologist and pharmacist who has studied nitrosamine contamination of popular blood pressure, diabetes and heartburn drugs, as well as other issues in generic drug manufacturing.

Prazosin has been available as a generic medication for more than 25 years and, like many generics that have been around that long, is now produced by multiple manufacturers. This ratchets up competition on price, which may explain why older generics are more prone to manufacturing issues that may harm patient health.

What are nitrosamines and where do they come from?

Nitrosamines are by-products of many common chemical reactions. They form when a type of chemical building block called a nitrite group interacts with another type called an amine group.

Industrial processes like rocket fuel, rubber and sealant manufacturing can produce high concentrations of nitrosamines during chemical reactions. Bacon, pepperoni and salami are high in nitrite preservatives that interact with the amine groups in the meats to form small amounts of nitrosamines. The chemical reaction that happens when chlorinated water interacts with naturally occurring chemicals that contain nitrogen and oxygen can also form small amounts of nitrosamines.

Occasional and small exposures to nitrosamines are not thought to be dangerous. But some studies have found that certain nitrosamines are carcinogenic when ingested in high amounts over long periods of time.

European regulators first discovered in 2018 that prescription drugs could also be contaminated when testing revealed that an active ingredient in a blood pressure drug called valsartan contained a nitrosamine chemical. Since the Chinese company that made the drug’s active ingredient sold it to multiple manufacturers of valsartan tablets, many companies, including Teva Pharmaceuticals, recalled the drug at the time.

Drugmakers have identified nitrosamine contamination in many widely used drugs.
Starkovphoto/iStock via Getty Images Plus

The FDA then launched a major effort to identify nitrosamines in prescription and over-the-counter drugs and to define unsafe levels for tablets and capsules. It published an initial industry guidance in 2021 and an updated version in 2024.

Based on the agency’s new testing requirements, drugmakers have identified nitrosamine contamination in widely used blood pressure, diabetes, heartburn, antibiotic and smoking cessation drugs. Most of the recalled drugs were contaminated during the chemical processing at a manufacturing plant.

What should people who take prazosin do?

Teva Pharmaceuticals’ prazosin is just one of many generic versions – but it’s the only one that is contaminated. You can determine whether your medication came from Teva by looking at your prescription label. Search for the abbreviations MFG or MFR, which stand for “manufacturing” or “manufacturer.” If it says “MFG Teva” or “MFR Teva,” that means Teva Pharmaceuticals supplied the medication.

The first four numbers of a National Drug Code, abbreviated as NDC on the prescription label, also reveal the manufacturer or distributor. Teva products have the number 0093.
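For readers who like to automate such checks, the labeler-code lookup described above can be sketched in Python. This is a minimal illustration of the article’s rule only: the four-digit labeler segment and the Teva code 0093 come from the article, while the function names and the sample NDC strings below are hypothetical, and the sketch ignores the five-digit labeler variants used in some NDC formats.

```python
def labeler_code(ndc: str) -> str:
    """Return the first four digits of a National Drug Code (NDC).

    Per the article, these digits identify the manufacturer or
    distributor. NDCs are often printed with hyphens, e.g. '0093-4067-01'.
    """
    digits = ndc.replace("-", "").replace(" ", "")
    if not digits.isdigit() or len(digits) < 4:
        raise ValueError(f"not a recognizable NDC: {ndc!r}")
    return digits[:4]


def is_teva(ndc: str) -> bool:
    # The article states that Teva products carry the labeler code 0093.
    return labeler_code(ndc) == "0093"
```

A pharmacist or patient would still need to compare the lot number against the FDA’s recall list; a matching labeler code only confirms the manufacturer, not that a given bottle is affected.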

If Teva Pharmaceuticals is the distributor, a pharmacist can cross-reference your prescription number to obtain the lot number and compare it with the posted lot numbers on the FDA website for recalled prazosin. If your product has been recalled, your pharmacy may have other generic versions of prazosin in stock that are not part of this recall.

Based on its risk assessment for these tablets, the FDA gave the recall a Class II status, which means that the medication could cause “temporary or medically reversible adverse health consequences.” If no other prazosin version exists at your pharmacy, do not stop taking your drug without talking with your physician first. The risk of temporarily taking tablets with an elevated amount of nitrosamines may be less than the risk of suddenly stopping this medication.


Your physician may also be able to prescribe an alternative treatment such as clonidine or trazodone.

Do older generics made overseas pose higher risks?

Until recently, it wasn’t possible to determine whether the safety records of generic drugs manufactured overseas differed from those of the same generics made in the U.S., because the FDA does not disclose which manufacturing plants companies use to make their tablets and capsules. But in a 2025 study, researchers managed to triangulate that information from an FDA dataset.

They found that the risk of serious adverse events was 54.3% higher with generics made in India than with those made in the United States. And the longer a drug has been available in generic form, the greater the difference in safety risk between its U.S.- and India-made versions. As my colleague and I wrote in a commentary accompanying the study, the findings suggest that when the market for generic drugs is crowded with multiple manufacturers, lower-priced options naturally sell better. As a result, manufacturers in developing countries are more apt to turn out poorer-quality products that cost less to make.

Teva Pharmaceuticals has manufacturing plants around the world, including in India. The company has not disclosed where its recalled prazosin capsules and their active and inactive ingredients were manufactured.

The FDA publishes ratings on generic drug quality and claims that generics with an “A” rating meet the same manufacturing quality standards and achieve the same blood concentrations as brand-name drugs. But pharmacies can’t tell from those ratings if a drug comes from manufacturing plants that are at higher risk for quality issues.

Patients are at the mercy of choices pharmacies make in the generic versions of drugs they procure for their stores. In my view, if pharmacies could access reliable information about quality, they might be able to make choices that are safer for American consumers.

The Conversation

C. Michael White does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. FDA recall of blood pressure pills due to cancer-causing contaminant may point to higher safety risks in older generic drugs – https://theconversation.com/fda-recall-of-blood-pressure-pills-due-to-cancer-causing-contaminant-may-point-to-higher-safety-risks-in-older-generic-drugs-268968

Brewery waste can be repurposed to make nanoparticles that can fight bacteria

Source: The Conversation – USA – By Alcina Johnson Sudagar, Research Scientist in Chemistry, Washington University in St. Louis

Some compounds in waste produced in the brewing process could be repurposed for antibacterial drugs. Iuri Gagarin/iStock via Getty Images

Modern beer production is a US$117 billion business in the United States, with brewers producing over 170 million barrels of beer per year. The brewing process is time- and energy-intensive, and each step generates large amounts of waste.

Solid components such as used grains and yeast from this waste end up in landfills, where harmful compounds can leach into the soil. Brewing wastewater that makes it into aquatic ecosystems can contaminate streams and lakes, decrease oxygen levels in those environments and threaten organisms.

To keep this waste from going into the environment, scientists like me are exploring how to turn beer brewing waste into useful products. I’m a chemist, and my research team and I are interested in figuring out how to recycle and repurpose brewery waste into tiny particles that can be used to make new types of prescription drugs.

The brewing process

The brewing process takes raw cereal grain – usually from barley – and converts its starch and proteins into simpler chemicals by malting. Brewers initiate this process by adding water, which wakes the seed from dormancy, and then keeping the seeds at a controlled temperature to sprout the grain.

During this time, important enzymes are released that can convert the starch and proteins in the grains to fermentable sugars and amino acids. They then heat up the resulting product, called the malt, to dry it out and stop further sprouting. After this malting process, they add hot water and mash the malt to release the compounds that give the beer its iconic flavor.

The brewing process produces waste at four main stages.
Alcina Johnson Sudagar, CC BY-SA

The brewers then separate out the sweet malt extract, called wort; the leftover solid, called brewer’s spent grains, is removed as waste. About 30% of the weight of the raw grain ends up as spent grain waste, which is either used as animal feed or discarded. About 30 million tons of spent grain are generated annually.

Brewers add a cone-shaped flower of the Humulus lupulus plant, called hops, to the wort, then boil and clarify it. The hops flower is the key ingredient that gives beer its bitterness and aroma. The undissolved hops and proteins get collected during clarification to form hot trub, the second major waste from breweries. Roughly 85% of the hops are removed as waste material.

The clear wort is then cooled and fermented by adding yeast. The yeast filtered out after fermentation, called brewer’s spent yeast, forms the third type of waste that breweries generate. The spent yeast is one of the major byproducts of the brewing industry. This waste has a large quantity of water and solid material: 100 liters of beer generate 2 to 4 kilograms (4.4 to 8.8 lbs.) of spent yeast.

Finally, the fermented beer is filtered before entering the production line, where it is bottled for consumption. The wastewater generated at this last stage forms the filtration waste. A medium-size brewery generates about 8 tons of dense sludge monthly, plus five to seven times that amount – 40 to 56 tons – of wastewater. Many tons of brewery waste thus remain largely underused due to their low economic value.
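The wastewater figures above follow directly from the five-to-seven-times multiplier; as a quick sanity check, using only the numbers given in the article:

```python
# Monthly filtration waste for a medium-size brewery (figures from the article).
sludge_tons = 8

# Wastewater is five to seven times the sludge by weight.
wastewater_low = 5 * sludge_tons   # 40 tons
wastewater_high = 7 * sludge_tons  # 56 tons

print(wastewater_low, wastewater_high)  # prints: 40 56
```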

The brewery waste problem

These wastes contain several potentially reusable compounds, such as carbohydrates, proteins, amino acids, minerals and vitamins. Scientists have tried to reuse the wastes in creative ways, creating biofuels and vegan leather from either compounds extracted from the waste or the waste in its entirety.

Breweries can send their solid wastes to farms that repurpose them as soil fertilizer, compost or animal feed, but a major fraction industrywide ends up in landfills. The wastewater is discharged into sewage lines, which can strain sewage treatment systems: It contains more than 30 times the pollutant load of typical residential sewage.

Although breweries are becoming more aware of their waste and moving toward sustainable approaches, demand for beer has continued to rise, and a large amount of waste remains to be dealt with.

Repurposing waste in nanoparticles

In my research, I’m interested in determining whether compounds from brewery waste can help create nanoparticles that are compatible with human cells but fight bacteria. Nanoparticles are extremely tiny particles, with sizes on the order of billionths of a meter.

Nanoparticles are smaller than bacteria – they can be the size of viruses or even human DNA.
Alcina Johnson Sudagar, CC BY-SA

In medicine, when the same antibiotics are used over and over, bacteria can evolve resistance against them. One potential use of nanoparticles is as an active component in certain antibiotic drugs. These nanoparticles could also work as disinfectants and cleaning chemicals.

My team and I developed nanoparticles coated with some of the compounds found in brewery waste – an invention which we have since patented but are not actively commercializing. We created the particles by adding waste from any stage of brewing to a metal source.

When we added a chemical containing silver – for example, silver nitrate – to the waste, a combination of processes converted the silver compound into nanoparticles. One process is called reduction: Here, compounds found in the brewery waste undergo a chemical reaction that converts the silver ions from the silver nitrate into metallic nanoparticles.

The other process, called precipitation, is similar to how chalky soap scum forms in your sink when soap reacts with minerals such as calcium in hard water. Oxide and phosphate from the brewery waste combine with a silver ion from the silver nitrate, causing the silver to form a solid compound that makes up the nanoparticle’s core.

The organic compounds from the brewing waste such as proteins, carbohydrates, polyphenols and sugars form a coating on the nanoparticles. This coating prevents any other reaction from happening on the surface of these particles, which is very important for making the nanoparticles stable for their applications. These nanoparticles prepared from brewery waste were made of three components: silver metal, silver oxide and silver phosphate.

Nanoparticle preparation from brewery waste using a one-pot method.
Alcina Johnson Sudagar, CC BY-SA

Environmentally friendly processes that reduce the use of hazardous chemicals and minimize harmful side products are known as green chemistry. Because our procedure was so simple and did not use any other chemicals, it falls into this green chemistry category.

Nanoparticle safety

My colleague Neha Rangam found that the coating formed by the brewery waste compounds makes these nanoparticles nontoxic to human cells in the lab. However, the silver from these nanoparticles killed Escherichia coli, a common bacterium responsible for intestinal illness around the world.

We found that a special type of nanoparticle containing high amounts of silver phosphate worked against E. coli. These silver phosphate nanoparticles appeared to have a thinner coating of organic compounds from the brewery waste than the silver metal and silver oxide particles, which led to better contact with the bacteria. That meant enough silver could reach the bacteria to disrupt their cellular structure. Silver has long been known to have an antimicrobial effect. By creating nanoparticles from silver, we get lots of surface area available for eliminating bacteria.

Several nanoparticles have been in clinical trials and some have been FDA approved for use in drugs for pain management, dental treatment and diseases such as cancer and COVID-19. Most research into nanoparticles in biotechnology has dealt with carbon-based nanoparticles. Scientists still need to see how these metal nanoparticles would interact with the human body and whether they could potentially cause other health problems.

Because they’re so tiny, these particles are difficult to remove from the body unless they are attached to drug carriers designed to transport the nanoparticles safely. Before doctors can use these nanoparticles as antibacterial drugs, scientists will need to study the fate of these materials once they enter the body.

Some engineered nanoparticles can be toxic to living organisms, so research will need to address whether these brewery waste-derived nanoparticles are safe for the human body before they’re used as a new antibacterial drug component.

The Conversation

Alcina Johnson Sudagar received funding from the European Union’s Marie Curie Horizon 2020 program for this work. Part of the work has been patented (Polish patent no. P.435084, valid since August 2020).

ref. Brewery waste can be repurposed to make nanoparticles that can fight bacteria – https://theconversation.com/brewery-waste-can-be-repurposed-to-make-nanoparticles-that-can-fight-bacteria-264847

A brief history of congressional oversight, from Revolutionary War financing to Pam Bondi

Source: The Conversation – USA – By Gibbs Knotts, Professor of Political Science, Coastal Carolina University

U.S. Sen. Amy Klobuchar of Minnesota speaks at an oversight hearing before the Senate Judiciary Committee on Oct. 7, 2025. AP Photo/Allison Robbert

Routine congressional oversight hearings usually don’t make headlines. Historically, these often low-key events have been the sorts of things you catch only on C-SPAN – procedural, polite and largely ignored outside the Beltway.

But their tone has shifted dramatically during the second Trump administration.

When Attorney General Pam Bondi appeared before the Senate Judiciary Committee on Oct. 7, 2025, what took place was a contentious, highly partisan, made-for-TV-and-social-media confrontation.

The hearing occurred on the heels of the indictment of former FBI Director James Comey, which many legal experts view as an example of a president targeting his political enemies. Bondi came ready to fight. She refused to answer many questions from Democrats, instead launching personal attacks against these members of the U.S. Senate.

When Illinois Sen. Dick Durbin, a Democrat, asked about the deployment of National Guard troops in Chicago, Bondi retorted, “I wish you loved Chicago as much as you hate President Trump.” The clip went viral, as Bondi likely intended.

From our perspective as political scientists who study the U.S. Congress, congressional oversight has played an important role in American democracy. Here’s a brief history.

Congressional oversight hearings help keep executive branch agencies accountable to the public.

Inquisitory powers

In simple terms, oversight is the ability of Congress to ensure that the laws it passes are faithfully executed. This generally means asking questions, demanding information, convening hearings and holding the executive branch accountable for its actions.

Oversight isn’t specifically mentioned in the Constitution. Article 1, Section 8, which lists the powers of Congress, includes the power “to make all laws which shall be necessary and proper,” without identifying an oversight role. Once laws are enacted, Article 2, Section 3, states that the president “shall take Care that the Laws be faithfully executed.”

However, the framers viewed congressional oversight as a key component of legislative authority. They wanted presidents to take Congress seriously and structured the Constitution to ensure that the executive would be accountable to the legislature. As James Madison urged in Federalist 51, the separate branches of government should have the power to keep each other from becoming too powerful. “Ambition must be made to counteract ambition,” Madison wrote.

The framers drew from the examples of the British Parliament and Colonial legislatures. In 1621, Sir Francis Bacon was charged with corruption and impeached as Lord High Chancellor after an investigation by a committee of the British Parliament. And in 1768, the Massachusetts Assembly conducted an investigation of Gov. Francis Bernard that led to a formal request to the King of England for his removal.

At the Federal Convention in 1787 that produced the Constitution, Delegate George Mason noted that members of Congress possessed “inquisitory powers” and “must meet frequently to inspect the Conduct of public officials.” Even though this idea was never written down, it was a habit of self-government that early Congresses put into practice.

Sen. Sam Ervin, chair of the Senate Watergate Committee, announces on July 23, 1973, that the committee has decided to subpoena White House tapes and documents related to the Watergate burglary and cover-up.
AP Photo

Early oversight hearings

Congressional oversight began almost as soon as the first Congress met. In 1790, Robert Morris, the superintendent of finances during the Continental Congress and a financier of the American Revolution, asked Congress to investigate his handling of the country’s finances and was exonerated of any wrongdoing.

During this period, congressional investigations were often referred to select committees – bodies created to perform special functions. These panels had the power to issue subpoenas and hold individuals in contempt. Since there was no official record of debates and proceedings, the public relied on newspaper accounts to learn about what had happened.

In March 1792, congressional oversight exposed businessman William Duer, who signed contracts with the War Department but failed to furnish the needed military supplies. This shortfall contributed to a stunning U.S. military defeat against a confederation of Native American tribes in the Northwest Territory.

Congress eventually removed the quartermaster general from his role for mismanaging the contracts. Duer was simultaneously involved in perhaps the first American economic bubble, which burst at the same time as Congress’ hearings. He ended up in a debtor’s prison, where he died in 1799.

Throughout the 19th century, Congress continued to quietly exercise this power. The work was often invisible to the public, but the issues were important. Hearings from December 1861 to May 1865 on the conduct of the U.S. Civil War produced a detailed record of the war, exposed military wrongdoing and condemned slavery. In 1871, the Senate created a select committee to investigate Ku Klux Klan violence during Reconstruction.

Investigating corruption and criminal acts

Congress started to use its oversight power more aggressively in the 1920s with the Senate Committee on Public Land and Surveys’ high-profile investigations into the Teapot Dome scandal.

Hearings revealed that Interior Secretary Albert Bacon Fall had secretly leased federal oil reserves in Wyoming to two private corporations and had received personal loans and gifts from the companies in return.

The investigation found clear evidence of corruption. Fall was indicted and became the first U.S. Cabinet member to be convicted of a felony.

The U.S. Supreme Court helped to shape the legal foundation of congressional oversight. In McGrain v. Daugherty, decided in 1927, the court held that congressional committees could issue subpoenas, force witnesses to testify and hold them in contempt if they fail to appear. Two years later, in Sinclair v. United States, the court ruled that witnesses who lied to Congress could be charged with perjury.

These cases granted the judicial branch’s sanction to what had long been an implied legislative power, cementing the constitutionality of congressional oversight.

Oversight highs and lows

The modern era of congressional oversight has produced some very important reforms – and some truly regrettable spectacles.

The most important example of bipartisan congressional oversight came in response to reporting by The Washington Post’s Carl Bernstein and Bob Woodward. The two journalists wrote about the 1972 burglary of the Democratic National Committee offices in Washington, D.C.’s Watergate complex and the subsequent cover-up efforts by the Nixon administration.

On Feb. 7, 1973, the U.S. Senate voted 77-0 to establish a Select Committee on Presidential Campaign Activities, which brought together Democrats and Republicans to investigate what came to be known as the “Watergate scandal.” The committee’s work spurred action in Congress to impeach President Richard Nixon, leading to Nixon’s resignation in 1974 and to the enactment of legal reforms to provide an institutional check on presidential power.

Another high point for congressional oversight came after the 9/11 terrorist attacks in 2001. Seeking to learn how the deadliest terrorist strike on American soil had occurred, Democratic Sen. Bob Graham and Republican Rep. Porter Goss, who chaired the Senate and House Intelligence committees, formed a joint committee to investigate intelligence failures before and after the attacks.

This inquiry produced several important recommendations that were ultimately adopted, including the creation of a director of national intelligence and a Department of Homeland Security, as well as better information sharing among law enforcement agencies.

After the Sept. 11, 2001, attacks on the World Trade Center in New York City, shown here, and targets in Washington, D.C., a congressional committee investigated intelligence failures that had impeded detection of the terrorist plot.
Universal History Archive/UIG via Getty Images

Congress’ oversight can extend beyond the executive branch when the actions of private actors raise questions about existing laws or spur the need for new ones. As examples, investigations into medical device safety and Enron’s 2001 collapse examined malfeasance in the private sphere that existing regulations failed to prevent.

However, the power to expose corruption can also be used as a tool to score partisan points and generate outrage, rather than holding the executive branch accountable for actual malfeasance. Notably, in the 1950s, Wisconsin Sen. Joseph McCarthy turned oversight into inquisition and used the power of media to amplify his accusations of communist influence within the federal government.

Democracy needs oversight

Congressional oversight has strengthened the democratic system at many points. But hearings like Bondi’s recent session before the Senate Judiciary Committee aren’t the first, and likely won’t be the last, to substitute sound bites for substance.

As we see it, the problem with allowing oversight to become political theater is that it distracts Congress from quieter and more meaningful oversight work. Slow, procedural work isn’t likely to go viral, but it helps keep government accountable. The task of a deliberate legislative body is to reconcile those very different impulses.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. A brief history of congressional oversight, from Revolutionary War financing to Pam Bondi – https://theconversation.com/a-brief-history-of-congressional-oversight-from-revolutionary-war-financing-to-pam-bondi-267623

Trump’s White House renovations fulfill Obama’s prediction, kind of

Source: The Conversation – USA – By Chris Lamb, Professor of Journalism, Indiana University

The facade of the East Wing of the White House is seen on Oct. 20, 2025. Kevin Dietsch/Getty Images

President Barack Obama famously chided Donald Trump in April 2011 during the annual White House correspondents’ dinner. The reality show star had repeatedly and falsely claimed that Obama had not been born in the United States and was therefore ineligible to be president.

Trump’s demands that Obama release his birth certificate had, in part, made Trump a front-runner among Republican hopefuls for their party’s nomination in the following year’s presidential election.

Obama referred to Trump’s presidential ambitions by joking that, if elected, Trump would bring some changes to the White House.

Obama then called attention to a satirical photo the guests could see of a remodeled White House with the words “Trump” and “The White House” in large purple letters followed by the words “hotel,” “casino” and “golf course.”

Obama’s ridicule of Trump that evening has been credited with inspiring Trump to run for president in 2016.

My book, “The Art of the Political Putdown,” includes Obama’s chiding of Trump at the correspondents’ dinner to demonstrate how politicians use humor to establish superiority over a rival.

Obama’s ridicule humiliated Trump, who temporarily dropped the birther conspiracy before reviving it. But Trump may have gotten the last laugh: Some believe he used the humiliation of that night as motivation for his 2016 presidential run.

There is a further twist to Obama’s joke about the renovations Trump would make to the White House if he became president. Trump has fulfilled Obama’s prediction, kind of.

The Trump administration has razed the East Wing, which sat adjacent to the White House, and will replace it with a 90,000-square-foot, gold-encrusted ballroom that appears to reflect the ostentatious tastes of the president.

The US$300 million ballroom will be twice the size of the White House.

It’s expected to be big enough to accommodate nearly a thousand people. Design renderings suggest that the ballroom will resemble the ballroom at Mar-a-Lago, the president’s private estate in Palm Beach, Florida.

“I don’t have any plan to call it after myself,” Trump said recently. “That was fake news. Probably going to call it the presidential ballroom or something like that. We haven’t really thought about a name yet.”

But senior administration officials told ABC News that they were already referring to the structure as “The President Donald J. Trump Ballroom.”

The renovation will have neither a hotel, casino nor golf course, as Obama mentioned in his light-hearted speech at the 2011 correspondents’ dinner.

A video is shown as President Barack Obama speaks about Donald Trump at the White House Correspondents’ Association dinner in Washington on April 30, 2011.
AP Photo/Manuel Balce Ceneta

Obama pokes fun at Trump

In the months before the 2011 correspondents’ dinner, Trump had repeatedly claimed that Obama had not been born in Hawaii but had instead been born outside the United States, perhaps in his father’s home country of Kenya.

The baseless conspiracy theory became such a distraction that Obama released his long-form birth certificate in April 2011.

Three days later, Obama delivered his speech at the correspondents’ dinner with Trump in the audience. He said that Trump, having put the birther conspiracy behind him, could move on to other conspiracy theories, such as claims that the moon landing was staged, that aliens landed in Roswell, New Mexico, or that the murders of rappers Biggie Smalls and Tupac Shakur remain unsolved.

“Did we fake the moon landing?” Obama said. “What really happened at Roswell? And where are Biggie and Tupac?”

Obama then poked fun at Trump’s reality show, “The Apprentice,” and referred to how Trump, who owned hotels, casinos and golf courses, might renovate the White House.

When Obama was finished, Seth Meyers, the host of the dinner, made additional jokes at Trump’s expense.

“Donald Trump has been saying that he will run for president as a Republican – which is surprising, since I just assumed that he was running as a joke,” Meyers said.

Trump gets the last laugh

The New Yorker magazine writer Adam Gopnik remembered watching Trump as the jokes kept coming at his expense.

“Trump’s humiliation was as absolute, and as visible, as any I have ever seen: his head set in place, like a man on a pillory, he barely moved or altered his expression as wave after wave of laughter struck him,” Gopnik wrote. “There was not a trace of feigning good humor about him.”

Donald Trump and Melania Trump arrive for the White House correspondents’ dinner in Washington on April 30, 2011.
AP Photo/Alex Brandon, File

Roger Stone, one of Trump’s top advisers, said Trump decided to run for president after he felt he had been publicly humiliated.

“I think that is the night he resolves to run for president,” Stone said in an interview with the PBS program “Frontline.” “I think that he is kind of motivated by it. ‘Maybe I’ll just run. Maybe I’ll show them all.’”

Trump, if Stone and other political observers are correct, sought the presidency to avenge that humiliation.

“I thought, ‘Oh, Barack Obama is starting something that I don’t know if he’ll be able to finish,’” said Omarosa Manigault, a former “Apprentice” contestant who became Trump’s director of African American outreach during his first term.

“Every critic, every detractor, will have to bow down to President Trump,” she said. “It is everyone who’s ever doubted Donald, whoever disagreed, whoever challenged him – it is the ultimate revenge to become the most powerful man in the universe.”

The notoriously thin-skinned Trump did not attend the White House correspondents’ dinner during his first presidency. He also did not attend the dinner during the first year of his second presidency.

Although Trump has never publicly acknowledged the importance of that event in 2011, a number of people have noted how pivotal it was, demonstrating how the putdown can be a powerful weapon in politics – even, perhaps, extending to tearing down the White House’s East Wing.

The Conversation

Chris Lamb does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Trump’s White House renovations fulfill Obama’s prediction, kind of – https://theconversation.com/trumps-white-house-renovations-fulfill-obamas-prediction-kind-of-268458

How the US cut climate-changing emissions while its economy more than doubled

Source: The Conversation – USA (2) – By Valerie Thomas, Professor of Industrial Engineering, Georgia Institute of Technology

Wind power near Dodge City, Kan. Halbergman/iStock/Getty Images Plus

Countries around the world have been discussing the need to rein in climate change for three decades, yet global greenhouse gas emissions – and global temperatures with them – keep rising.

When it seems like we’re getting nowhere, it’s useful to step back and examine the progress that has been made.

Let’s take a look at the United States, historically the world’s largest greenhouse gas emitter. Over those three decades, the U.S. population soared by 28% and the economy, as measured by gross domestic product adjusted for inflation, more than doubled.

Yet U.S. emissions from many of the activities that produce greenhouse gases – transportation, industry, agriculture, heating and cooling of buildings – have remained about the same over the past 30 years. Transportation is a bit up; industry a bit down. And electricity, once the nation’s largest source of greenhouse gas emissions, has seen its emissions drop significantly.

Overall, the U.S. is still among the countries with the highest per capita emissions, so there’s room for improvement, and its emissions haven’t fallen enough to put the country on track to meet its pledges under the 10-year-old Paris climate agreement. But U.S. emissions are down about 15% over the past 10 years.

Here’s how that happened:

US electricity emissions have fallen

U.S. electricity use has been rising lately with the shift toward electrifying cars, heating and cooling, and with the expansion of data centers, yet greenhouse gas emissions from electricity are down by almost 30% since 1995.

One of the main reasons for this big drop is that Americans are using less coal and more natural gas to make electricity.

Both coal and natural gas are fossil fuels. Both release carbon dioxide to the atmosphere when they are burned to make electricity, and that carbon dioxide traps heat, raising global temperatures. But power plants can generate electricity more efficiently from natural gas than from coal, so natural gas produces lower emissions per unit of power.

Why did the U.S. start using more natural gas?

Research and technological innovation in fracking and horizontal drilling have allowed companies to extract more oil and gas at lower cost, making it cheaper to produce electricity from natural gas rather than coal.

As a result, utilities have built more natural gas power plants – especially super-efficient combined cycle gas power plants, which produce power from gas turbines and also capture waste heat from those turbines to generate more power. More coal plants have been shutting down or running less.

Because natural gas generates electricity more efficiently than coal, the switch has been a win for the climate by comparison, even though natural gas is also a fossil fuel. The U.S. has reduced emissions from electricity as a result.

Significant improvements in energy efficiency, from appliances to lighting, have also played a role. Even though tech gadgets seem to be recharging everywhere all the time today, household electricity use, per person, plateaued over the first two decades of the 2000s after rising continuously since the 1940s.

Costs for renewable electricity, batteries fall

U.S. renewable electricity generation, including wind, solar and hydro power, has nearly tripled since 1995, helping to further reduce emissions from electricity generation.

Costs for solar and wind power have fallen so much that they are now cheaper than coal and competitive with natural gas. Fourteen states, including most of the Great Plains, now get at least 30% of their power from solar, wind and battery storage.

While wind power has been cost competitive with fossil fuels for at least 20 years, solar photovoltaic power has only been competitive with fossil fuels for about 10 years. So expect deployment of solar PV to continue to increase, both in the U.S. and internationally, even as U.S. federal subsidies disappear.

Both wind and solar provide intermittent power: The sun does not always shine, and the wind does not always blow. There are a number of ways utilities are dealing with this. One way is to use demand management, offering lower prices for power during off-peak periods or discounts for companies that can cut their power use during high demand. Virtual power plants aggregate several kinds of distributed energy resources – solar panels on homes, batteries and even smart thermostats – to manage power supply and demand. The U.S. had an estimated 37.5 gigawatts of virtual power plants in 2024, equivalent to about 37.5 nuclear power reactors.

Globally, the costs of solar, onshore wind and EV batteries fell quickly over the first two decades of the 2000s.
IPCC 6th Assessment Report

Another energy management method is battery storage, which is just now beginning to take off. Battery costs have come down enough in the past few years to make utility-scale battery storage cost-effective.

What about driving?

In the U.S., gasoline consumption has remained roughly constant and electric vehicle sales have been slow. Some of this could be due to the success of fracking: U.S. petroleum production has increased, and gasoline and diesel prices have remained relatively low.

People in other countries are switching to electric vehicles more rapidly than in the U.S. as the cost of EVs has fallen. Chinese consumers can buy an entry-level EV for under US$10,000 with the help of government subsidies, and China leads the world in EV sales.

In 2024, people in the U.S. bought 1.6 million EVs, and global sales reached 17 million, which was up 25% from the year before.

The unknowns ahead: What about data centers?

The construction of new data centers, in part to serve the explosive growth of artificial intelligence, is drawing a lot of attention to future energy demand and to the uncertainty ahead.

Data centers are increasing electricity demand in some locations, such as northern Virginia, Dallas, Phoenix, Chicago and Atlanta. The future electricity demand growth from data centers is still unclear, though, meaning the effects of data centers on electric rates and power system emissions are also uncertain.

However, AI is not the only reason to watch for increased electricity demand: The U.S. can expect growing demand from industrial processes and electric vehicles, as well as from the continuing nationwide transition from oil and gas to electricity for heating and appliances.

The Conversation

Valerie Thomas receives funding from the US Department of Energy

ref. How the US cut climate-changing emissions while its economy more than doubled – https://theconversation.com/how-the-us-cut-climate-changing-emissions-while-its-economy-more-than-doubled-268763

Chatbots don’t judge! Customers prefer robots over humans when it comes to those ’um, you know’ purchases

Source: The Conversation – USA (2) – By Jianna Jin, Assistant Professor of Marketing at Mendoza College of Business, University of Notre Dame

When it comes to inquiring about – ahem – certain products, shoppers prefer the inhuman touch.

That is what we found in a study of consumer habits when it comes to products that traditionally have come with a degree of embarrassment – think acne cream, diarrhea medication, adult sex toys or personal lubricant.

While brands may assume consumers hate chatbots, our series of studies involving more than 6,000 participants found a clear pattern: When it comes to purchases that make people feel embarrassed, consumers prefer chatbots over human service reps.

In one experiment, we asked participants to imagine shopping for medications for diarrhea and hay fever. They were offered two online pharmacies, one with a human pharmacist and the other with a chatbot pharmacist.

The medications were packaged identically, with the only difference being their labels for “diarrhea” or “hay fever.” More than 80% of consumers looking for diarrhea treatment preferred a store with a clearly nonhuman chatbot. In comparison, just 9% of those shopping for hay fever medication preferred nonhuman chatbots.

This is because, participants told us, they did not think chatbots have “minds” – that is, the ability to judge or feel.

In fact, when it comes to selling embarrassing products, making chatbots look or sound human can actually backfire. In another study, we asked 1,500 people to imagine buying diarrhea pills online. Participants were randomly assigned to one of three conditions: an online drugstore with a human service rep, the same store with a humanlike chatbot with a profile photo and name, or the same store with a chatbot that was clearly botlike in both its name and icon.

We then asked participants how likely they would be to seek help from the service agent. The results were clear: Willingness to interact dropped as the agent seemed more human. Interest peaked with the clearly machinelike chatbot and hit its lowest point with the human service rep.

Why it matters

As a scholar of marketing and consumer behavior, I know chatbots play an increasingly large part in e-retail. In fact, one report found that 80% of retail and e-commerce businesses use AI chatbots or plan to use them in the near future.

When it comes to chatbots, companies want to answer two questions: When should they deploy chatbots? And how should the chatbots be designed?

Many companies may assume the best strategy is to make bots look and sound more human, intuiting that consumers don’t want to talk to machines.

But our findings show the opposite can be true. In moments when embarrassment looms large, humanlike chatbots can backfire.

The practical takeaway is that brands should not default to humanizing their chatbots. Sometimes the most effective bot is the one that looks and sounds like a machine.

What still isn’t known

So far, we’ve looked at everyday purchases where embarrassment is easy to imagine, such as hemorrhoid cream, anti-wrinkle cream, personal lubricant and adult toys.

However, we believe the insights extend more broadly. For example, women getting a quote for car repair may feel more self-conscious, since that is a purchase context in which women have traditionally been stigmatized. Similarly, men shopping for cosmetic products may feel judged in a category that has traditionally been marketed to women.

In contexts like these, companies could deploy chatbots – especially ones that clearly sound machinelike – to reduce discomfort and provide a better service. But more work is needed to test that hypothesis.

The Research Brief is a short take on interesting academic work.

The Conversation

Jianna Jin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Chatbots don’t judge! Customers prefer robots over humans when it comes to those ’um, you know’ purchases – https://theconversation.com/chatbots-dont-judge-customers-prefer-robots-over-humans-when-it-comes-to-those-um-you-know-purchases-266105