AI could worsen inequalities in schools – teachers are key to whether it will

Source: The Conversation – USA (2) – By Katie Davis, Professor, Information School and Adjunct Associate Professor, College of Education, University of Washington

Meeting about AI: Teachers see some efficiencies with AI but don’t always feel like they have the resources to learn how to best use it for teaching. Joe Lamberti/AP Images

Today’s teachers find themselves thrust into a difficult position with generative AI. New tools are coming online at a blistering pace and being adopted just as quickly, whether they’re personalized tutors and study buddies for students or lesson plan generators and assignment graders for teachers. Schools are traditionally slow to adapt to change, which makes such rapid-fire developments especially destabilizing.

The uncertainties accompanying the artificial intelligence onslaught come amid existing challenges the teaching profession has faced for years. Teachers have been working with increasingly scarce resources – and even scarcer time – while facing mounting expectations not only for their students’ academic performance, but also their social-emotional development. Many teachers are burned out, and they’re leaving the profession in record numbers.

All of this matters because teacher quality is the single most important factor in school influencing student achievement. And the impact of teachers is greatest for students who are most disadvantaged. How teachers end up using, or not using, AI to support their teaching – and their students’ learning – may be the most crucial determinant of whether AI’s use in schools narrows or widens existing equity gaps.

We have been conducting research on how public school teachers feel about generative AI technologies.

The initial results, which are currently under review, reveal deep ambivalence about AI’s growing role in K-12 education. Our work also shows how inadequate training and unclear communications could worsen existing inequalities among schools.

A ‘thought partner’ for busy teachers

As part of a larger project examining AI integration in education, we interviewed 22 teachers in a large public school district in the United States that has been an early and enthusiastic adopter of AI. The district serves a multilingual and socioeconomically diverse student population, with over 160 languages spoken and approximately three-quarters of students eligible for free or reduced-price lunch.

The teachers who participated in our study spanned elementary, middle school and high school grade levels, and represented a variety of subject areas, including science, technology, engineering and mathematics, social studies, special education, and culturally and linguistically diverse education. We asked these teachers to describe how they first encountered generative AI tools, how they currently use them, and the broader shifts they have observed in their schools. Teachers also reflected on both the opportunities and challenges of using AI tools in their classrooms.

Mirroring a recent survey finding that AI has helped teachers save up to six hours per week of work, the teachers in our study pointed to AI’s ability to create more space in the day for themselves and their students. Turning to AI for help writing lesson plans and assessments not only saves time, but it also gives teachers a tool for brainstorming ideas, helping them feel less isolated in their work. One high school teacher with over 11 years’ experience reflected:

“The most significant benefit that AI has brought to my life as a teacher is having work-life balance. It has decreased my stress 80-fold because I am able to have a thought partner. Teachers are really isolated, even though we work with people constantly … When I’m exhausted, it gives me support and help with ideas.”

Why lack of training matters

However, not all teachers felt well-equipped to benefit from AI. Much of what they told us boiled down to a lack of resources and other professional support. An elementary school classroom teacher explained:

“It’s just a lack of time. We don’t really get much planning time, and it would be a new tool to learn, so we would have to take the time personally to learn how to use it and where to find everything.”

Many teachers underscored the need for – and current lack of – professional development offerings to help them understand and integrate AI into their teaching.

Research on previous waves of technological innovations shows that under-resourced schools serving disadvantaged students are typically the least well-equipped to provide teachers with the professional support they need to make the most of new technologies.

Because well-resourced schools are far more likely to offer such support, the introduction of new technologies in schools tends to reinforce existing inequities in the education system.

When it comes to AI, well-resourced schools are best positioned to give teachers time, support and encouragement to “tinker” with AI and discover how and whether it can support their teaching and learning goals.

‘You need a relationship’ to learn

Our research also uncovered the importance of preserving the relational nature of teaching and learning, even – or perhaps especially – in the age of AI. As one middle school social studies teacher observed:

“A machine can give you information, but most students we know are not able to get information from something that’s just printed out for them and put it into their heads. You need a relationship. Some kids can do online school or read a book and teach themselves, but that’s like 2%. Most kids need a social environment to do it.”

Even as schools integrate AI into classwork, teachers still need to learn how to implement the technology to help their students learn. Jae C. Hong/AP Images

Here again, prior research shows us that teachers in well-resourced schools are better equipped to introduce new technologies in ways that augment rather than undermine the relational dimensions of teaching and learning. And again, teachers are crucial in determining how and whether AI, like all new technologies, is used to support their teaching and student learning.

That’s why we believe the practices established during this current period of rapid AI development and adoption will profoundly influence whether educational inequities are dismantled or deepened.

Grounded in the classroom

Going forward, we see the need for research to examine how generative AI is changing teachers’ practice and relationship to their work. Their input can inform practices that empower teachers as professionals and advance student learning.

This approach requires adequate institutional support at the school and district levels. It also means listening to the real experiences of teachers and students instead of responding to the promised benefits touted by education technology companies.

The Conversation

Katie Davis has received funding from the Spencer Foundation.

Aayushi Dangol does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI could worsen inequalities in schools – teachers are key to whether it will – https://theconversation.com/ai-could-worsen-inequalities-in-schools-teachers-are-key-to-whether-it-will-266140

Anxiety over school admissions isn’t limited to college – parents of young children are also feeling pressure, some more acutely than others

Source: The Conversation – USA (2) – By Bailey A. Brown, Assistant Professor of Sociology, Spelman College

Shifting policies such as school choice give parents more school options than they had a few decades before. iStock/Getty Images Plus

Deciding where to send your child to kindergarten has become one of the most high-stakes moments in many American families’ lives.

A few factors have made selecting an elementary school particularly challenging in recent years. For one, parents simply have more schools to pick from than they did a few decades ago, ranging from traditional public and private schools to a growing number of magnet and charter programs. There are also new policies in some places, such as New York City, that allow parents to select not just their closest neighborhood public school but schools across and outside of the districts where they live.

As a scholar of sociology and education, I have seen how the expanding range of school options – sometimes called school choice – has spread nationwide and is a particularly prominent factor in New York City.

I spoke with a diverse range of more than 100 New York City parents across income levels and racial and ethnic backgrounds from 2014 to 2019 as part of research for my 2025 book, “Kindergarten Panic: Parental Anxiety and School Choice Inequality.”

All of these parents felt pressure trying to select a school for their elementary school-age children, and school choice options have only increased since COVID-19.

Some parents experience this pressure a bit more acutely than others.

Women often see their choice of school as a reflection of whether they are good moms, my interviews show. Parents of color feel pressure to find a racially inclusive school. Other parents worry about finding niche schools that offer dual-language programs, for example, or other specialties.

Children arrive for class at an elementary school in Brooklyn in 2020. Angela Weiss/AFP via Getty Images

Navigating schools in New York City

Every year, about 65,000 New York City kindergartners are matched to more than 700 public schools.

New York City kindergartners typically attend the public school nearest their home and get priority placement there. This school is often called a family's zoned school.

Even so, a spot at your local school isn’t guaranteed – students get priority if they apply on time.

Most kindergartners still attend their zoned schools, but the share is decreasing: 72% of the city's kindergartners attended their zoned school in the 2007-08 school year, compared with 60% in 2016-17.

One reason is that since 2003, New York City parents have been able to apply to out-of-zone schools when seats were available. And in 2020, when the COVID-19 pandemic began, all public school applications moved entirely online. This shift allowed parents to easily rank 12 different school options they liked, in and outside of their zones.

Still, New York City's public school system remains one of the most segregated in the country, divided by race and class.

Pressure to be a good mom

Many of the mothers I interviewed from 2015 through 2019 said that getting their child into what they considered a “good” school reflected good mothering.

Mothers took the primary responsibility for their school search, whether they had partners or not, and regardless of their social class, as well as racial and ethnic background.

In 2017, I spoke with Janet, a white, married mother who at the time was 41 years old and had an infant and a 3-year-old. Janet worked as a web designer and lived in Queens. She explained that she started a group in 2016 to connect with other mothers, in part to discuss schools.

Though Janet’s children were a few years away from kindergarten, she believed that she had started her research for public schools too late. She spent multiple hours each week looking up information during her limited spare time. She learned that other moms were talking to parents, researching test results, analyzing school reviews and visiting schools in person.

Janet said she wished she had started looking for schools when her son was 1 or 2 years old, like other mothers she knew. She expressed fear that she was failing as a mother. Eventually, Janet enrolled her son in a nonzoned public school in another Queens neighborhood.

Pressure to find an inclusive school

Regardless of their incomes, Black, Latino and immigrant families I interviewed also felt pressure to evaluate whether the public schools they considered were racially and ethnically inclusive.

Parents worried that racially insensitive policies related to bullying, curriculum and discipline would negatively affect their children.

In 2015, I spoke with Fumi, a Black, immigrant mother of two young children. At the time, Fumi was 37 years old and living in Washington Heights in north Manhattan. She described her uncertain search for a public school.

Fumi thought that New York City’s gifted and talented programs at public schools might be a better option academically than other public schools that don’t offer an advanced track for some students. But the gifted and talented programs often lacked racial diversity, and Fumi did not want her son to be the only Black student in his class.

Still, Fumi had her son tested for the 2015 gifted and talented exam and enrolled him in one of these programs for kindergarten.

Once Fumi’s son began attending the gifted and talented school, Fumi worried that the constant bullying he experienced was racially motivated.

Though Fumi remained uneasy about the bullying and lack of diversity, she decided to keep him at the school because of the school’s strong academic quality.

Pressure to find a niche school

Many of the parents I interviewed who earned more than US$50,000 a year wanted to find specialty schools that offered advanced courses, dual-language programs and progressive-oriented curricula.

Parents like Renata, a 44-year-old Asian mother of four, and Stella, a 39-year-old Black mother of one, sent their kids to out-of-neighborhood public schools.

In 2016, Renata described visiting multiple schools and researching options so she could potentially enroll her four children in different schools that met each of their particular needs.

Stella, meanwhile, searched for schools that would de-emphasize testing, nurture her son’s creativity and provide flexible learning options.

In contrast, the working-class parents I interviewed who made less than $50,000 annually often sought schools that mirrored their own school experiences.

Few working-class parents I spoke with selected out-of-neighborhood and high academically performing schools.

New York City data points to similar results – low-income families are less likely than higher-earning families to send their children to schools outside of their neighborhoods.

For instance, Black working-class parents like 47-year-old Risha, a mother of four, and 53-year-old Jeffery, a father of three, both of whom attended New York City neighborhood public schools as children, told me in 2016 that they decided to send their children to local public schools.

Based on state performance indicators, students at these particular schools scored lower on standardized assessments than the average for city schools.

Students write down and draw positive affirmations on poster board at P.S. 5 Port Morris, a Bronx elementary school, in 2021. Brittainy Newman/Associated Press

Cracks in the system

The parents I spoke with all live in New York City, which has a uniquely complicated education system. Yet the pressures they face are reflective of the evolving public school choice landscape for parents across the country.

Parents nationwide are searching for schools with vastly different resources and concerns about their children’s future well-being and success.

When parents panic about kindergarten, they reveal cracks in the foundation of American schooling. In my view, parental anxiety about kindergarten is a response to an unequal, high-stakes education system.

The Conversation

Bailey A. Brown does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Anxiety over school admissions isn’t limited to college – parents of young children are also feeling pressure, some more acutely than others – https://theconversation.com/anxiety-over-school-admissions-isnt-limited-to-college-parents-of-young-children-are-also-feeling-pressure-some-more-acutely-than-others-265537

FDA recall of blood pressure pills due to cancer-causing contaminant may point to higher safety risks in older generic drugs

Source: The Conversation – USA (3) – By C. Michael White, Distinguished Professor of Pharmacy Practice, University of Connecticut

Nitrosamines are by-products of many common chemical reactions. FatCamera/iStock via Getty Images Plus

A generic blood pressure drug called prazosin, made by Teva Pharmaceuticals, is being recalled by the Food and Drug Administration because it contains elevated levels of cancer-causing chemicals called nitrosamines.

The recall, which Teva announced on Oct. 7, 2025, affects more than 580,000 prazosin capsules. Prazosin is prescribed to around 510,000 patients yearly and is used to treat post-traumatic stress disorder as well as high blood pressure.

I am a pharmacologist and pharmacist who has studied nitrosamine contamination of popular blood pressure, diabetes and heartburn drugs, as well as other issues in generic drug manufacturing.

Prazosin has been available as a generic medication for more than 25 years and, like many generics that have been around that long, is now produced by multiple manufacturers. This ratchets up competition on price, which may explain why older generics are more prone to manufacturing issues that may harm patient health.

What are nitrosamines and where do they come from?

Nitrosamines are by-products of many common chemical reactions. They form when a type of chemical building block called a nitrite group interacts with another type called an amine group.

Industrial processes like the manufacturing of rocket fuel, rubber and sealants can produce high concentrations of nitrosamines during chemical reactions. Bacon, pepperoni and salami are high in nitrite preservatives that interact with the amine groups in the meats to form small amounts of nitrosamines. The chemical reaction that happens when chlorinated water interacts with naturally occurring chemicals that contain nitrogen and oxygen can also form small amounts of nitrosamines.

Occasional and small exposures to nitrosamines are not thought to be dangerous. But some studies have found that certain nitrosamines are carcinogenic when ingested in high amounts for long periods of time.

European regulators first discovered in 2018 that prescription drugs could also be contaminated when testing revealed that an active ingredient in a blood pressure drug called valsartan contained a nitrosamine chemical. Since the Chinese company that made the drug’s active ingredient sold it to multiple manufacturers of valsartan tablets, many companies, including Teva Pharmaceuticals, recalled the drug at the time.

Drugmakers have identified nitrosamine contamination in many widely used drugs. Starkovphoto/iStock via Getty Images Plus

The FDA then launched a major effort to identify nitrosamines in prescription and over-the-counter drugs and to define unsafe levels for tablets and capsules. It published an initial industry guidance in 2021 and an updated version in 2024.

Based on the agency’s new testing requirements, drugmakers have identified nitrosamine contamination in widely used blood pressure, diabetes, heartburn, antibiotic and smoking cessation drugs. Most of the recalled drugs were contaminated during the chemical processing at a manufacturing plant.

What should people who take prazosin do?

Teva Pharmaceuticals’ prazosin is just one of many generic versions – but it’s the only one that is contaminated. You can determine whether your medication came from Teva by looking at your prescription label. Search for the abbreviations MFG or MFR, which stand for “manufacturing” or “manufacturer.” If it says “MFG Teva” or “MFR Teva,” that means Teva Pharmaceuticals supplied the medication.

The first four numbers of a National Drug Code, abbreviated as NDC on the prescription label, also reveal the manufacturer or distributor. Teva products have the number 0093.

If Teva Pharmaceuticals is the distributor, a pharmacist can cross-reference your prescription number to obtain the lot number and compare it with the posted lot numbers on the FDA website for recalled prazosin. If your product has been recalled, your pharmacy may have other generic versions of prazosin in stock that are not part of this recall.

Based on its risk assessment for these tablets, the FDA gave the recall a Class II status, which means that the medication could cause “temporary or medically reversible adverse health consequences.” If no other prazosin version exists at your pharmacy, do not stop taking your drug without talking with your physician first. The risk of temporarily taking tablets with an elevated amount of nitrosamines may be less than the risk of suddenly stopping this medication.


Your physician may also be able to prescribe an alternative treatment such as clonidine or trazodone.

Do older generics made overseas pose higher risks?

Until recently, it wasn’t possible to compare the safety records of generic drugs manufactured overseas with those of the same generics made in the U.S., because the FDA does not disclose which manufacturing plants companies use to create their tablets and capsules. But in a 2025 study, researchers managed to triangulate that information from an FDA dataset.

They found that the risk of serious adverse events was 54.3% higher with generics made in India as compared with those made in the United States. And the longer a drug has been available in generic form, the greater the difference in safety risk between its U.S.- and India-made forms. As my colleague and I wrote in a commentary accompanying the study, the findings suggest that when the market for generic drugs is crowded by multiple manufacturers, lower-priced options naturally sell better. As a result, manufacturers in developing countries are more apt to make poorer-quality products that cost less to produce.

Teva Pharmaceuticals has manufacturing plants around the world, including in India. The company has not disclosed where its recalled prazosin capsules and their active and inactive ingredients were manufactured.

The FDA publishes ratings on generic drug quality and claims that generics with an “A” rating meet the same manufacturing quality standards and achieve the same blood concentrations as brand-name drugs. But pharmacies can’t tell from those ratings if a drug comes from manufacturing plants that are at higher risk for quality issues.

Patients are at the mercy of choices pharmacies make in the generic versions of drugs they procure for their stores. In my view, if pharmacies could access reliable information about quality, they might be able to make choices that are safer for American consumers.

The Conversation

C. Michael White does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. FDA recall of blood pressure pills due to cancer-causing contaminant may point to higher safety risks in older generic drugs – https://theconversation.com/fda-recall-of-blood-pressure-pills-due-to-cancer-causing-contaminant-may-point-to-higher-safety-risks-in-older-generic-drugs-268968

Brewery waste can be repurposed to make nanoparticles that can fight bacteria

Source: The Conversation – USA – By Alcina Johnson Sudagar, Research Scientist in Chemistry, Washington University in St. Louis

Some compounds in waste produced in the brewing process could be repurposed for antibacterial drugs. Iuri Gagarin/iStock via Getty Images

Modern beer production is a US$117 billion business in the United States, with brewers producing over 170 million barrels of beer per year. The brewing process is time- and energy-intensive, and each step generates large amounts of waste.

Solid components such as used grains and yeast from this waste end up in landfills, where harmful compounds can leach into the soil. Brewing wastewater that makes it into aquatic ecosystems can contaminate streams and lakes, decrease oxygen levels in those environments and threaten organisms.

To keep this waste from going into the environment, scientists like me are exploring how to manufacture beer brewing waste into useful products. I’m a chemist, and my research team and I are interested in figuring out how to recycle and repurpose brewery waste into tiny particles that can be used to make new types of prescription drugs.

The brewing process

The brewing process takes raw cereal grain – usually from barley – and converts its starch and proteins into simpler chemicals by malting. Brewers initiate this process by adding water, which wakes the seeds from dormancy, and then keeping the seeds at a controlled temperature to sprout the grain.

During this time, important enzymes are released that can convert the starch and proteins in the grains to fermentable sugars and amino acids. They then heat up the resulting product, called the malt, to dry it out and stop further sprouting. After this malting process, they add hot water and mash the malt to release the compounds that give the beer its iconic flavor.

The brewing process produces waste at four main stages: brewer’s spent grains, hot trub, brewer’s spent yeast and filtrate. Alcina Johnson Sudagar, CC BY-SA

The brewers then separate the sweet malt extract, called wort, and the leftover solid is removed as waste, called brewer’s spent grains. About 30% of the weight of the raw grain ends up as spent grain waste. This waste is either used as animal feed or discarded. About 30 million tons of spent grain is generated annually.

Brewers add a cone-shaped flower of the Humulus lupulus plant, called hops, to the wort, then boil and clarify it. The hops flower is the key ingredient that gives beer its bitterness and aroma. The undissolved hops and proteins get collected during clarification to form hot trub, the second major waste from breweries. Roughly 85% of the hops are removed as waste material.

The clear wort is then cooled and fermented by adding yeast. The yeast filtered out after fermentation, called brewer’s spent yeast, forms the third type of waste that breweries generate. The spent yeast is one of the major byproducts of the brewing industry. This waste has a large quantity of water and solid material: 100 liters of beer generate 2 to 4 kilograms (4.4 to 8.8 lbs.) of spent yeast.

Finally, the fermented beer is filtered before entering the production line, where the beer is bottled for consumption. The wastewater generated at this last stage forms the filtration waste. A medium-size brewery generates about 8 tons of dense sludge and five to seven times that amount – 40 to 56 tons – of wastewater as filtration waste monthly. Several tons of waste from breweries remain largely underused due to their low economic value.

The brewery waste problem

These wastes contain several compounds, such as carbohydrates, proteins, amino acids, minerals and vitamins, that can potentially be repurposed. Scientists have tried to reuse them in creative ways, creating biofuels and vegan leather from either compounds extracted from the waste or the entire waste.

Breweries can send their solid wastes to farms that repurpose them as soil fertilizer, compost or animal feed, but a major fraction industrywide is sent to landfills. The wastewater is discharged into sewage lines, which can strain sewage treatment systems, as it contains pollutant concentrations more than 30 times higher than those of typical residential sewage.

Although breweries are becoming more aware of their waste and moving toward sustainable approaches, demand for beer has continued to rise, and a large amount of waste remains to be dealt with.

Repurposing waste in nanoparticles

In my research, I’m interested in determining whether compounds from brewery waste can help create nanoparticles that are compatible with human cells but fight against bacteria. Nanoparticles are extremely tiny particles that have sizes in the range of one-billionth of a meter.

Nanoparticles, between 1 and 100 nanometers in size, are smaller than bacteria – they can be the size of viruses or even human DNA. Alcina Johnson Sudagar, CC BY-SA

In medicine, when the same antibiotics are used over and over, bacteria can evolve resistance against them. One potential use of nanoparticles is as an active component in certain antibiotic drugs. These nanoparticles could also work as disinfectants and cleaning chemicals.

My team and I developed nanoparticles coated with some of the compounds found in brewery waste – an invention which we have since patented but are not actively commercializing. We created the particles by adding waste from any stage of brewing to a metal source.

When we added a chemical containing silver – for example, silver nitrate – to the waste, a combination of processes converted the silver compound into nanoparticles. One process is called reduction: Here, compounds found in the brewery waste undergo a chemical reaction that converts the silver ions from the silver nitrate to a metallic nanoparticle.

The other process, called precipitation, is similar to how chalky soap scum forms in your sink when soap reacts with minerals such as calcium in hard water. Oxide and phosphate from the brewery waste combine with a silver ion from the silver nitrate, causing the silver to form a solid compound that makes up the nanoparticle’s core.

The organic compounds from the brewing waste, such as proteins, carbohydrates, polyphenols and sugars, form a coating on the nanoparticles. This coating prevents any other reaction from happening on the surface of these particles, which is very important for making the nanoparticles stable for their applications. These nanoparticles prepared from brewery waste were made of three components: silver metal, silver oxide and silver phosphate.

Nanoparticle preparation using a one-pot method with brewery wastes from different stages of brewing. Alcina Johnson Sudagar, CC BY-SA

Environmentally friendly processes that reduce the use of hazardous chemicals and minimize harmful side products are known as green chemistry. Because our procedure was so simple and did not use any other chemicals, it falls into this green chemistry category.

Nanoparticle safety

My colleague Neha Rangam found that the coating formed by the brewery waste compounds makes these nanoparticles nontoxic to human cells in the lab. However, the silver from these nanoparticles killed Escherichia coli, a common bacterium responsible for intestinal illness around the world.

We found that a special type of nanoparticle containing high amounts of silver phosphate worked against E. coli. It appeared that this silver phosphate nanoparticle had a thinner coating of the organic compounds from the brewery waste than silver metal and oxides, which led to better contact with the bacteria. That meant enough silver could reach the bacteria to disrupt its cellular structure. Silver has long been known to have an antimicrobial effect. By creating nanoparticles from silver, we get lots of surface area available for eliminating bacteria.

Several nanoparticles have been in clinical trials and some have been FDA approved for use in drugs for pain management, dental treatment and diseases such as cancer and COVID-19. Most research into nanoparticles in biotechnology has dealt with carbon-based nanoparticles. Scientists still need to see how these metal nanoparticles would interact with the human body and whether they could potentially cause other health problems.

Because they’re so tiny, these particles are difficult to remove from the body unless they are attached to drug carriers designed to transport the nanoparticles safely. Before doctors can use these nanoparticles as antibacterial drugs, scientists will need to study the fate of these materials once they enter the body.

Some engineered nanoparticles can be toxic to living organisms, so research will need to address whether these brewery waste-derived nanoparticles are safe for the human body before they’re used as a new antibacterial drug component.

The Conversation

Alcina Johnson Sudagar received funding from the European Union’s Marie Curie Horizon 2020 program for this work. Part of the work has been patented in Poland (patent no. P.435084, valid since August 2020).

ref. Brewery waste can be repurposed to make nanoparticles that can fight bacteria – https://theconversation.com/brewery-waste-can-be-repurposed-to-make-nanoparticles-that-can-fight-bacteria-264847

A brief history of congressional oversight, from Revolutionary War financing to Pam Bondi

Source: The Conversation – USA – By Gibbs Knotts, Professor of Political Science, Coastal Carolina University

U.S. Sen. Amy Klobuchar of Minnesota speaks at an oversight hearing before the Senate Judiciary Committee on Oct. 7, 2025. AP Photo/Allison Robbert

Routine congressional oversight hearings usually don’t make headlines. Historically, these often low-key events have been the sorts of things you catch only on C-SPAN – procedural, polite and largely ignored outside the Beltway.

But their tone has shifted dramatically during the second Trump administration.

When Attorney General Pam Bondi appeared before the Senate Judiciary Committee on Oct. 7, 2025, what took place was a contentious, highly partisan, made-for-TV-and-social-media confrontation.

The hearing occurred on the heels of the indictment of former FBI Director James Comey, which many legal experts view as an example of a president targeting his political enemies. Bondi came ready to fight. She refused to answer many questions from Democrats, instead launching personal attacks against these members of the U.S. Senate.

When Illinois Sen. Dick Durbin, a Democrat, asked about the deployment of National Guard troops in Chicago, Bondi retorted, “I wish you loved Chicago as much as you hate President Trump.” The clip went viral, as Bondi likely intended.

From our perspective as political scientists who study the U.S. Congress, congressional oversight has played an important role in American democracy. Here’s a brief history.

Congressional oversight hearings help keep executive branch agencies accountable to the public.

Inquisitory powers

In simple terms, oversight is the ability of Congress to ensure that the laws it passes are faithfully executed. This generally means asking questions, demanding information, convening hearings and holding the executive branch accountable for its actions.

Oversight isn’t specifically mentioned in the Constitution. Article 1, Section 8, which lists the powers of Congress, includes the power “to make all laws which shall be necessary and proper,” without identifying an oversight role. Once laws are enacted, Article 2, Section 3, states that the president “shall take Care that the Laws be faithfully executed.”

However, the framers viewed congressional oversight as a key component of legislative authority. They wanted presidents to take Congress seriously and structured the Constitution to ensure that the executive would be accountable to the legislature. As James Madison urged in Federalist 51, the separate branches of government should have the power to keep each other from becoming too powerful. “Ambition must be made to counteract ambition,” Madison wrote.

The framers drew from the examples of the British Parliament and Colonial legislatures. In 1621, Sir Francis Bacon was charged with corruption and impeached as Lord High Chancellor after an investigation by a committee of the British Parliament. And in 1768, the Massachusetts Assembly conducted an investigation of Gov. Francis Bernard that led to a formal request to the King of England for his removal.

At the Federal Convention in 1787 that produced the Constitution, Delegate George Mason noted that members of Congress possessed “inquisitory powers” and “must meet frequently to inspect the Conduct of public officials.” Even though this idea was never written down, it was a habit of self-government that early Congresses put into practice.

A white-haired man, wearing glasses and holding a sheet of paper, sits at a dais speaking into a microphone.
Sen. Sam Ervin, chair of the Senate Watergate Committee, announces on July 23, 1973, that the committee has decided to subpoena White House tapes and documents related to the Watergate burglary and cover-up.
AP Photo

Early oversight hearings

Congressional oversight began almost as soon as the first Congress met. In 1790, Robert Morris, the superintendent of finance during the Continental Congress and a financier of the American Revolution, asked Congress to investigate his handling of the country’s finances and was exonerated of any wrongdoing.

During this period, congressional investigations were often referred to select committees – bodies created to perform special functions. These panels had the power to issue subpoenas and hold individuals in contempt. Since there was no official record of debates and proceedings, the public relied on newspaper accounts to learn about what had happened.

In March 1792, congressional oversight exposed businessman William Duer, who signed contracts with the War Department but failed to furnish the needed military supplies. This shortfall contributed to a stunning U.S. military defeat against a confederation of Native American tribes in the Northwest Territory.

Congress eventually removed the quartermaster general from his role for mismanaging the contracts. Duer was simultaneously involved in perhaps the first American economic bubble, which burst at the same time as Congress’ hearings. He ended up in a debtor’s prison, where he died in 1799.

Throughout the 19th century, Congress continued to quietly exercise this power. The work was often invisible to the public, but the issues were important. Hearings from December 1861 to May 1865 on the conduct of the U.S. Civil War produced a detailed record of the war, exposed military wrongdoing and condemned slavery. In 1871, the Senate created a select committee to investigate Ku Klux Klan violence during Reconstruction.

Investigating corruption and criminal acts

Congress started to use its oversight power more aggressively in the 1920s with the Senate Committee on Public Land and Surveys’ high-profile investigations into the Teapot Dome scandal.

Hearings revealed that Interior Secretary Albert Bacon Fall had secretly leased federal oil reserves in Wyoming to two private corporations and had received personal loans and gifts from the companies in return.

The investigation found clear evidence of corruption. Fall was indicted and became the first U.S. Cabinet member to be convicted of a felony.

The U.S. Supreme Court helped to shape the legal foundation of congressional oversight. In McGrain v. Daugherty, decided in 1927, the court held that congressional committees could issue subpoenas, force witnesses to testify and hold them in contempt if they fail to appear. Two years later, in Sinclair v. United States, the court ruled that witnesses who lied to Congress could be charged with perjury.

These cases granted the judicial branch’s sanction to what had long been an implied legislative power, cementing the constitutionality of congressional oversight.

Oversight highs and lows

The modern era of congressional oversight has produced some very important reforms – and some truly regrettable spectacles.

The most important example of bipartisan congressional oversight came in response to reporting by The Washington Post’s Carl Bernstein and Bob Woodward. The two journalists wrote about the 1972 burglary of the Democratic National Committee offices in Washington, D.C.’s Watergate complex and the subsequent cover-up efforts by the Nixon administration.

On Feb. 7, 1973, the U.S. Senate voted 77-0 to establish a Select Committee on Presidential Campaign Activities, which brought together Democrats and Republicans to investigate what came to be known as the “Watergate scandal.” The committee’s work spurred action in Congress to impeach President Richard Nixon, leading to Nixon’s resignation in 1974 and to the enactment of legal reforms to provide an institutional check on presidential power.

Another high point for congressional oversight came after the 9/11 terrorist attacks in 2001. Seeking to learn how the deadliest terrorist strike on American soil had occurred, Democratic Sen. Bob Graham and Republican Rep. Porter Goss, who chaired the Senate and House Intelligence committees, formed a joint committee to investigate intelligence failures before and after the attacks.

This inquiry produced several important recommendations that were ultimately adopted, including the creation of a director of national intelligence and a Department of Homeland Security, as well as better information sharing among law enforcement agencies.

Firefighters train hoses over the rubble at the former site of the World Trade Center towers in New York City.
After the Sept. 11, 2001, attacks on the World Trade Center in New York City, shown here, and targets in Washington, D.C., a congressional committee investigated intelligence failures that had impeded detection of the terrorist plot.
Universal History Archive/UIG via Getty Images

Congress’ oversight can extend beyond the executive branch when the actions of private actors raise questions about existing laws or spur the need for new ones. As examples, investigations into medical device safety and Enron’s 2001 collapse examined malfeasance in the private sphere that existing regulations failed to prevent.

However, the power to expose corruption can also be used as a tool to score partisan points and generate outrage, rather than holding the executive branch accountable for actual malfeasance. Notably, in the 1950s, Wisconsin Sen. Joseph McCarthy turned oversight into inquisition and used the power of media to amplify his accusations of communist influence within the federal government.

Democracy needs oversight

Congressional oversight has strengthened the democratic system at many points. But hearings like Bondi’s recent session before the Senate Judiciary Committee aren’t the first, and likely won’t be the last, to substitute sound bites for substance.

As we see it, the problem with allowing oversight to become political theater is that it distracts Congress from quieter and more meaningful oversight work. Slow, procedural work isn’t likely to go viral, but it helps keep government accountable. The task of a deliberate legislative body is to reconcile those very different impulses.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. A brief history of congressional oversight, from Revolutionary War financing to Pam Bondi – https://theconversation.com/a-brief-history-of-congressional-oversight-from-revolutionary-war-financing-to-pam-bondi-267623

Trump’s White House renovations fulfill Obama’s prediction, kind of

Source: The Conversation – USA – By Chris Lamb, Professor of Journalism, Indiana University

The facade of the East Wing of the White House is seen on Oct. 20, 2025. Kevin Dietsch/Getty Images

President Barack Obama famously chided Donald Trump in April 2011 during the annual White House correspondents’ dinner. The reality show star had repeatedly and falsely claimed that Obama had not been born in the United States and was therefore ineligible to be president.

Trump’s demands that Obama release his birth certificate had, in part, made Trump a front-runner among Republican hopefuls for their party’s nomination in the following year’s presidential election.

Obama referred to Trump’s presidential ambitions by joking that, if elected, Trump would bring some changes to the White House.

Obama then called attention to a satirical photo the guests could see of a remodeled White House with the words “Trump” and “The White House” in large purple letters followed by the words “hotel,” “casino” and “golf course.”

Obama’s ridicule of Trump that evening has been credited with inspiring Trump to run for president in 2016.

My book, “The Art of the Political Putdown,” includes Obama’s chiding of Trump at the correspondents’ dinner to demonstrate how politicians use humor to establish superiority over a rival.

Obama’s ridicule humiliated Trump, who temporarily dropped the birther conspiracy before reviving it. But Trump may have gotten the last laugh, using the humiliation of that night, as some think, as motivation for his 2016 run for president.

There is a further twist to Obama joking about Trump’s renovations to the White House if Trump became president. Trump has fulfilled Obama’s prediction, kind of.

The Trump administration has razed the East Wing, which sits adjacent to the White House, and will replace it with a 90,000-square-foot, gold-encrusted ballroom that appears to reflect the ostentatious tastes of the president.

The US$300 million ballroom will be twice the size of the White House.

It’s expected to be big enough to accommodate nearly a thousand people. Design renderings suggest that the ballroom will resemble the ballroom at Mar-a-Lago, the president’s private estate in Palm Beach, Florida.

“I don’t have any plan to call it after myself,” Trump said recently. “That was fake news. Probably going to call it the presidential ballroom or something like that. We haven’t really thought about a name yet.”

But senior administration officials told ABC News that they were already referring to the structure as “The President Donald J. Trump Ballroom.”

The renovation will have neither a hotel, casino nor golf course, as Obama mentioned in his light-hearted speech at the 2011 correspondents’ dinner.

A video is shown depicting a fictitious White House.
A video is shown as President Barack Obama speaks about Donald Trump at the White House Correspondents’ Association dinner in Washington on April 30, 2011.
AP Photo/Manuel Balce Ceneta

Obama pokes fun at Trump

In the months before the 2011 correspondents’ dinner, Trump had repeatedly claimed that Obama had not been born in Hawaii but had instead been born outside the United States, perhaps in his father’s home country of Kenya.

The baseless conspiracy theory became such a distraction that Obama released his long-form birth certificate in April 2011.

Three days later, Obama delivered his speech at the correspondents’ dinner with Trump in the audience, where he said that Trump, having put the birther conspiracy behind him, could move to other conspiracy theories like claims the moon landing was staged, aliens landed in Roswell, New Mexico, or the unsolved murders of rappers Biggie Smalls and Tupac Shakur.

“Did we fake the moon landing?” Obama said. “What really happened at Roswell? And where are Biggie and Tupac?”

Obama then poked fun at Trump’s reality show, “The Apprentice,” and referred to how Trump, who owned hotels, casinos and golf courses, might renovate the White House.

When Obama was finished, Seth Meyers, the host of the dinner, made additional jokes at Trump’s expense.

“Donald Trump has been saying that he will run for president as a Republican – which is surprising, since I just assumed that he was running as a joke,” Meyers said.

Trump gets the last laugh

The New Yorker magazine writer Adam Gopnik remembered watching Trump as the jokes kept coming at his expense.

“Trump’s humiliation was as absolute, and as visible, as any I have ever seen: his head set in place, like a man on a pillory, he barely moved or altered his expression as wave after wave of laughter struck him,” Gopnik wrote. “There was not a trace of feigning good humor about him.”

A man in a tuxedo and woman in a dress pose for photos.
Donald Trump and Melania Trump arrive for the White House correspondents’ dinner in Washington on April 30, 2011.
AP Photo/Alex Brandon, File

Roger Stone, one of Trump’s top advisers, said Trump decided to run for president after he felt he had been publicly humiliated.

“I think that is the night he resolves to run for president,” Stone said in an interview with the PBS program “Frontline.” “I think that he is kind of motivated by it. ‘Maybe I’ll just run. Maybe I’ll show them all.’”

Trump, if Stone and other political observers are correct, sought the presidency to avenge that humiliation.

“I thought, ‘Oh, Barack Obama is starting something that I don’t know if he’ll be able to finish,’” said Omarosa Manigault, a former “Apprentice” contestant who became Trump’s director of African American outreach during his first term.

“Every critic, every detractor, will have to bow down to President Trump,” she said. “It is everyone who’s ever doubted Donald, whoever disagreed, whoever challenged him – it is the ultimate revenge to become the most powerful man in the universe.”

The notoriously thin-skinned Trump did not attend the White House correspondents’ dinner during his first presidency. He also did not attend the dinner during the first year of his second presidency.

Although Trump has never publicly acknowledged the importance of that event in 2011, a number of people have noted how pivotal it was, demonstrating how the putdown can be a powerful weapon in politics – even, perhaps, extending to tearing down the White House’s East Wing.

The Conversation

Chris Lamb does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Trump’s White House renovations fulfill Obama’s prediction, kind of – https://theconversation.com/trumps-white-house-renovations-fulfill-obamas-prediction-kind-of-268458

How the US cut climate-changing emissions while its economy more than doubled

Source: The Conversation – USA (2) – By Valerie Thomas, Professor of Industrial Engineering, Georgia Institute of Technology

Wind power near Dodge City, Kan. Halbergman/iStock/Getty Images Plus

Countries around the world have been discussing the need to rein in climate change for three decades, yet global greenhouse gas emissions – and global temperatures with them – keep rising.

When it seems like we’re getting nowhere, it’s useful to step back and examine the progress that has been made.

Let’s take a look at the United States, historically the world’s largest greenhouse gas emitter. Over those three decades, the U.S. population soared by 28% and the economy, as measured by gross domestic product adjusted for inflation, more than doubled.

Yet U.S. emissions from many of the activities that produce greenhouse gases – transportation, industry, agriculture, heating and cooling of buildings – have remained about the same over the past 30 years. Transportation is a bit up; industry a bit down. And electricity, once the nation’s largest source of greenhouse gas emissions, has seen its emissions drop significantly.

Overall, the U.S. is still among the countries with the highest per capita emissions, so there’s room for improvement, and its emissions haven’t fallen enough to put the country on track to meet its pledges under the 10-year-old Paris climate agreement. But U.S. emissions are down about 15% over the past 10 years.
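Taken together, those two figures imply a steep drop in emissions per dollar of economic output. A minimal sketch, using the article’s round numbers as assumptions (GDP roughly doubling while emissions end about 15% lower):

```python
# Illustrative decoupling arithmetic using the article's round numbers (assumed):
gdp_growth = 2.0        # GDP multiplier (more than doubled)
emissions_change = 0.85 # emissions multiplier (down about 15%)

# Emissions per dollar of GDP, relative to the starting level.
intensity = emissions_change / gdp_growth
print(f"Emissions intensity of GDP fell to about {intensity:.2f} of its starting level,")
print(f"a decline of roughly {(1 - intensity) * 100:.0f}%.")
```

In other words, the U.S. economy now emits well under half as much per dollar of GDP as it did when those climate discussions began.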

Here’s how that happened:

US electricity emissions have fallen

U.S. electricity use has been rising lately with the shift toward more electrification of cars and heating and cooling and expansion of data centers, yet greenhouse gas emissions from electricity are down by almost 30% since 1995.

One of the main reasons for this big drop is that Americans are using less coal and more natural gas to make electricity.

Both coal and natural gas are fossil fuels. Both release carbon dioxide to the atmosphere when they are burned to make electricity, and that carbon dioxide traps heat, raising global temperatures. But power plants can generate electricity more efficiently from natural gas than from coal, so gas plants produce lower emissions per unit of power.
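The size of that gap can be sketched with ballpark figures. The fuel carbon factors and plant efficiencies below are typical published values, not numbers from this article, so treat the result as an order-of-magnitude illustration:

```python
# Ballpark sketch with assumed fuel carbon factors and typical plant efficiencies.
KWH_PER_MMBTU = 293.07  # thermal energy in 1 million BTU, expressed in kWh

fuels = {
    # name: (kg CO2 released per MMBtu of fuel burned, typical plant efficiency)
    "coal (steam plant)":           (95.0, 0.33),
    "natural gas (combined cycle)": (53.1, 0.50),
}

for name, (kg_per_mmbtu, efficiency) in fuels.items():
    # CO2 per kWh of electricity = CO2 per unit of heat, divided by
    # the fraction of that heat the plant converts into power.
    kg_per_kwh = kg_per_mmbtu / KWH_PER_MMBTU / efficiency
    print(f"{name}: ~{kg_per_kwh:.2f} kg CO2 per kWh of electricity")
```

Under these assumptions, a combined cycle gas plant emits roughly a third as much carbon dioxide per kilowatt-hour as a typical coal plant: the gas itself carries less carbon per unit of heat, and the plant wastes less of that heat.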

Why did the U.S. start using more natural gas?

Research and technological innovation in fracking and horizontal drilling have allowed companies to extract more oil and gas at lower cost, making it cheaper to produce electricity from natural gas rather than coal.

As a result, utilities have built more natural gas power plants – especially super-efficient combined cycle gas power plants, which produce power from gas turbines and also capture waste heat from those turbines to generate more power. More coal plants have been shutting down or running less.

Because natural gas generates power more efficiently than coal, switching to it has been a relative win for the climate, even though gas is still a fossil fuel. The U.S. has reduced emissions from electricity as a result.

Significant improvements in energy efficiency, from appliances to lighting, have also played a role. Even though tech gadgets seem to be recharging everywhere all the time today, household electricity use, per person, plateaued over the first two decades of the 2000s after rising continuously since the 1940s.

Costs for renewable electricity, batteries fall

U.S. renewable electricity generation, including wind, solar and hydro power, has nearly tripled since 1995, helping to further reduce emissions from electricity generation.

Costs for solar and wind power have fallen so much that they are now cheaper than coal and competitive with natural gas. Fourteen states, including most of the Great Plains, now get at least 30% of their power from solar, wind and battery storage.

While wind power has been cost competitive with fossil fuels for at least 20 years, solar photovoltaic power has only been competitive with fossil fuels for about 10 years. So expect deployment of solar PV to continue to increase, both in the U.S. and internationally, even as U.S. federal subsidies disappear.

Both wind and solar provide intermittent power: The sun does not always shine, and the wind does not always blow. There are a number of ways utilities are dealing with this. One way is to use demand management, offering lower prices for power during off-peak periods or discounts for companies that can cut their power use during high demand. Virtual power plants aggregate several kinds of distributed energy resources – solar panels on homes, batteries and even smart thermostats – to manage power supply and demand. The U.S. had an estimated 37.5 gigawatts of virtual power plant capacity in 2024, roughly the output of 37 large nuclear reactors.

Charts show cost decline compared with fossil fuels.
Globally, the costs of solar, onshore wind and EV batteries fell quickly over the first two decades of the 2000s.
IPCC 6th Assessment Report

Another energy management method is battery storage, which is just now beginning to take off. Battery costs have come down enough in the past few years to make utility-scale battery storage cost-effective.

What about driving?

In the U.S., gasoline consumption has remained roughly constant and electric vehicle sales have been slow. Some of this could be due to the success of fracking: U.S. petroleum production has increased, and gasoline and diesel prices have remained relatively low.

People in other countries are switching to electric vehicles more rapidly than Americans are as the cost of EVs has fallen. In China, consumers can buy an entry-level EV for under US$10,000 with the help of government subsidies, and the country leads the world in EV sales.

In 2024, people in the U.S. bought 1.6 million EVs, and global sales reached 17 million, which was up 25% from the year before.

The unknowns ahead: What about data centers?

The construction of new data centers, in part to serve the explosive growth of artificial intelligence, is drawing a lot of attention to future energy demand and to the uncertainty ahead.

Data centers are increasing electricity demand in some locations, such as northern Virginia, Dallas, Phoenix, Chicago and Atlanta. The future electricity demand growth from data centers is still unclear, though, meaning the effects of data centers on electric rates and power system emissions are also uncertain.

However, AI is not the only reason to watch for increased electricity demand: The U.S. can expect growing electricity demand for industrial processes and electric vehicles, as well as the overall transition from using oil and gas for heating and appliances to using electricity that continues across the country.

The Conversation

Valerie Thomas receives funding from the US Department of Energy

ref. How the US cut climate-changing emissions while its economy more than doubled – https://theconversation.com/how-the-us-cut-climate-changing-emissions-while-its-economy-more-than-doubled-268763

Chatbots don’t judge! Customers prefer robots over humans when it comes to those ’um, you know’ purchases

Source: The Conversation – USA (2) – By Jianna Jin, Assistant Professor of Marketing at Mendoza College of Business, University of Notre Dame

When it comes to inquiring about – ahem – certain products, shoppers prefer the inhuman touch.

That is what we found in a study of consumer habits when it comes to products that traditionally have come with a degree of embarrassment – think acne cream, diarrhea medication, adult sex toys or personal lubricant.

While brands may assume consumers hate chatbots, our series of studies involving more than 6,000 participants found a clear pattern: When it comes to purchases that make people feel embarrassed, consumers prefer chatbots over human service reps.

In one experiment, we asked participants to imagine shopping for medications for diarrhea and hay fever. They were offered two online pharmacies, one with a human pharmacist and the other with a chatbot pharmacist.

The medications were packaged identically, with the only difference being their labels for “diarrhea” or “hay fever.” More than 80% of consumers looking for diarrhea treatment preferred a store with a clearly nonhuman chatbot. In comparison, just 9% of those shopping for hay fever medication preferred nonhuman chatbots.

This is because, participants told us, they did not think chatbots have “minds” – that is, the ability to judge or feel.

In fact, when it comes to selling embarrassing products, making chatbots look or sound human can actually backfire. In another study, we asked 1,500 people to imagine buying diarrhea pills online. Participants were randomly assigned to one of three conditions: an online drugstore with a human service rep, the same store with a humanlike chatbot with a profile photo and name, or the same store with a chatbot that was clearly botlike in both its name and icon.

We then asked participants how likely they would be to seek help from the service agent. The results were clear: Willingness to interact dropped as the agent seemed more human. Interest peaked with the clearly machinelike chatbot and hit its lowest point with the human service rep.

Why it matters

As a scholar of marketing and consumer behavior, I know chatbots play an increasingly large part in e-retail. In fact, one report found 80% of retail and e-commerce businesses use AI chatbots or plan to use them in the near future.

When it comes to chatbots, companies want to answer two questions: When should they deploy chatbots? And how should the chatbots be designed?

Many companies may assume the best strategy is to make bots look and sound more human, intuiting that consumers don’t want to talk to machines.

But our findings show the opposite can be true. In moments when embarrassment looms large, humanlike chatbots can backfire.

The practical takeaway is that brands should not default to humanizing their chatbots. Sometimes the most effective bot is the one that looks and sounds like a machine.

What still isn’t known

So far, we’ve looked at everyday purchases where embarrassment is easy to imagine, such as hemorrhoid cream, anti-wrinkle cream, personal lubricant and adult toys.

However, we believe the insights extend more broadly. For example, women getting a quote for car repair may be more self-conscious, as this is a purchase context where women have been traditionally more stigmatized. Similarly, men shopping for cosmetic products may feel judged in a category that has traditionally been marketed to women.

In contexts like these, companies could deploy chatbots – especially ones that clearly sound machinelike – to reduce discomfort and provide a better service. But more work is needed to test that hypothesis.

The Research Brief is a short take on interesting academic work.

The Conversation

Jianna Jin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Chatbots don’t judge! Customers prefer robots over humans when it comes to those ’um, you know’ purchases – https://theconversation.com/chatbots-dont-judge-customers-prefer-robots-over-humans-when-it-comes-to-those-um-you-know-purchases-266105

The unraveling of workplace protections for delivery drivers: A tale of 2 workplace models

Source: The Conversation – USA (2) – By Daniel Schneider, Professor of Social Policy, Harvard Kennedy School

American households have become dependent on Amazon.

The numbers say it all: In 2024, 83% of U.S. households received deliveries from Amazon, representing over 1 million packages delivered each day and 9 billion individual items delivered same-day or next-day every year. In remarkably short order, the company has transformed from an online bookseller into a juggernaut that has reshaped retailing. But its impact isn’t limited to how we shop.

Behind that endless stream of packages are more than a million people working in Amazon fulfillment centers and delivery vehicles. Through its growing dominance in retail, Amazon has eclipsed its two major competitors in the delivery business, UPS and FedEx, in terms of package volume.

What is life like for those workers? Between Amazon’s rosy public relations on the one hand and reporters’ and advocates’ troubling exposés on the other, it can be hard to tell. Part of the reason is that researchers like us don’t have much reliable data: Workers’ experiences at companies such as Amazon, UPS and FedEx can be a black box. Amazon’s arm’s-length relationship with the drivers it depends on for deliveries makes finding answers even harder.

But that didn’t stop us. Using unique data from the Shift Project, our new study, co-authored with Julie Su and Kevin Bruey, offers the first direct, large-scale comparison of working conditions for drivers and fulfillment employees at Amazon, UPS and FedEx based on survey responses by more than 9,000 workers.

What we found was deeply troubling – not only for Amazon drivers but also for the future of work in the delivery industry as a whole.

2 models, 2 realities

For nearly a century, driving delivery trucks has been a pathway to the middle class, as epitomized by unionized jobs at UPS. UPS drivers, who have been members of the Teamsters union for decades, are employees with legal protections and a collective-bargaining contract.

In contrast, Amazon has embraced a very different model. Most important, Amazon directly employs almost none of its delivery drivers.

Instead, its transportation division, Amazon Logistics, relies on two methods to deliver most of its shipments: Amazon Flex, a platform-like system that treats drivers as independent contractors, and Amazon DSP, a franchise-like system that uses subcontractors. DSP subcontractors are almost all nonunion, and the company has cut ties with DSP contractors whose drivers have attempted to unionize. These practices place downward pressure on the wages and working conditions of drivers throughout the industry.

The impact on workers is stark.

Delivery workers at Amazon receive significantly lower wages than at UPS and FedEx, we found. Wage gaps are especially large between the delivery workers at Amazon, who earn US$19 an hour on average, and the unionized drivers at UPS, who make $35.

We also found that unionized UPS drivers have a clear pathway to upward mobility, while Amazon drivers don’t. At UPS, wages increase sharply the longer a worker has been on the job. Pay starts at $21 an hour, reaching nearly $40 an hour for drivers who’ve been with the company for at least 10 years – which is more than half of them.

At Amazon, wages start at $17 an hour and don’t increase with tenure. Nearly half of workers have less than a year on the job.

Between lower wages, more unstable schedules, fewer benefits and limited protections from employment laws, Amazon drivers struggle to make ends meet. More than 1 in 4 told us they had gone hungry in the past month because they couldn’t afford enough to eat, and 33% said they couldn’t cover their utility bills. Compared with drivers at UPS and FedEx, Amazon drivers face significant financial instability.

On top of that, Amazon drivers face intense workplace surveillance and speed tracking – as do workers at the company’s fulfillment centers. Sixty percent of both types of Amazon workers received frequent feedback on the speed of their work from a technological device, and more than two-thirds said that Amazon monitors the quality of their work using technology. That degree of technological surveillance and tracking far outpaces what UPS and FedEx workers told us they were exposed to, representing an extreme case of worker monitoring and performance assessment.

Using nonemployee drivers contributed to the exponential growth of Amazon as a package delivery company. In 2023, Amazon for the first time delivered more packages than UPS, making it the second-largest parcel carrier in the country – surpassed only by the U.S. Postal Service.

Having built an online retail empire with the capacity to deliver the majority of its own shipments, Amazon continues to expand. UPS, by contrast, has seen drops in its revenues, stock value and market capitalization. Amazon’s sheer size and giglike approach are therefore changing industry standards, putting downward pressure on wages, benefits and job stability across the delivery sector.

The contrast between Amazon and UPS drivers isn’t just about two companies using different models for package delivery – it represents two competing futures for work. As the second-largest retail company and now the largest private delivery company in the U.S., Amazon exerts market power that shapes working conditions well beyond its own delivery drivers. Recent reporting indicates that UPS has been experimenting with using gig deliveries, much to the consternation of the union that represents three-quarters of its workforce.

In the post-World War II era, increasing unionization led to better wages and conditions across much of the economy, including nonunionized sectors. The continuing expansion of Amazon’s business model could signal the unraveling of wages, benefits and protections for working people more generally.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. The unraveling of workplace protections for delivery drivers: A tale of 2 workplace models – https://theconversation.com/the-unraveling-of-workplace-protections-for-delivery-drivers-a-tale-of-2-workplace-models-268164

How to keep dementia from robbing your loved ones of their sense of personhood – tips for caregivers

Source: The Conversation – USA (3) – By R. Amanda Cooper, Assistant Professor of Communication, University of Connecticut

Different communication styles are needed for the progressive phases of dementia. Halfpoint Images/Moment via Getty Images

Every three seconds, someone in the world develops dementia. There are over 6 million people living with dementia in the U.S. and 57 million globally.

These figures will only increase in the coming years, as rates of dementia are predicted to double by 2060. If you don’t know someone affected by dementia, you probably will at some point.

Dementia is incredibly difficult both for the person experiencing it and for their loved ones, not only because of the symptoms of the disease but also because of the social stigma associated with cognitive decline. Experiencing stigma makes it difficult for people with dementia to ask for help, increases anxiety and depression, and ultimately leads to social isolation.

Dementia-related stigma is perpetuated through media messages that portray people with dementia as mindless and incapable, as well as through daily interactions in which others dismiss and dehumanize the person living with dementia.

These forms of invalidation – usually unintentional – accelerate and intensify the loss of self-worth and identity that dementia patients are already experiencing.

Fortunately, educating and spreading awareness can help reduce behaviors that propagate stigma and dehumanizing treatment of people with dementia.

As a social scientist and researcher in interpersonal communication and family caregiving, I explore the social and relational side of dementia. Through my work with these patients and families, I’ve learned that reducing stigma and supporting self-worth for people who have dementia is often done through daily conversations.

People living with dementia can continue to have fulfilling interactions when caregivers carry out person-centered care.
Jessie Casson/DigitalVision via Getty Images

How is dementia defined?

Dementia is an umbrella term that refers to a family of cognitive conditions involving memory loss, difficulty thinking or processing information, changes in ability to communicate and challenges with managing daily tasks.

The most common form of dementia is Alzheimer’s disease, but there are several other forms of dementia that can severely affect a person’s quality of life and that of their loved ones.

Most forms of dementia are progressive, meaning that the symptoms of the disease get steadily worse over time. A person with dementia can live with the disease for several years, and their symptoms will shift as the disease progresses.

People in the early stages of dementia, including mild cognitive impairment, continue to engage socially and participate in many of the activities they have always done. In the middle stage of the disease, people often need more help from others to complete daily tasks and may have more difficulty holding conversations. In the late stage, people with dementia are dependent on others and often lose the ability to communicate verbally.

Despite the cognitive declines that come with dementia, people living with dementia can maintain many of their former abilities as the disease progresses. Even in the late stages, research shows that people with dementia can understand tone of voice and nonverbal communication such as body language, facial expressions and gentle touch.

This makes it clear that people with dementia can continue having meaningful social connections and a sense of self-worth even as their disease progresses.

Engaging in meaningful activities that are appropriate to the person’s stage of dementia can help foster a sense of self.
Jessie Casson/DigitalVision

Focusing care around the person

In the 1990s, psychologist Tom Kitwood, who studied dementia patients in long-term care settings, introduced the notion of “personhood.” Personhood is a recognition of a person’s unique experiences and individual worth. He had observed that residents with dementia were sometimes treated as objects rather than people and were dismissed as being “no longer there” mentally. In response, Kitwood advocated for a new model of person-centered care.

In contrast to the medical model of care that was standard at the time, person-centered care aims to provide people with dementia comfort, attachment, inclusion, occupation and identity.

Comfort includes both physical and psychological comfort, ensuring that the person with dementia feels safe and is as pain-free as possible. Attachment and inclusion have to do with supporting the closest relationships of a person with dementia and making sure they feel included in social activities.

Occupation is about giving the person meaningful activities that are suited to their abilities, while identity is about preserving their unique sense of self. According to Kitwood, each of these elements of personhood can be upheld or threatened through a person’s interactions with others.

I find Kitwood’s work particularly important because it suggests that communication is at the heart of personhood.

Communicating to support personhood

So how can family members and friends communicate with their loved one with dementia to help preserve their sense of self?

Researchers have identified several evidence-based communication strategies that support person-centered care both in long-term care settings and within the family.


Communication shifts as the disease progresses

Supporting personhood requires adjusting to the communication abilities of the person with dementia. Some communication strategies are helpful in one stage of the disease but not in others.

In a recent study, my team and I found that asking the person with dementia to recall the past was affirming for those early in the disease who could still do so. But for people in later stages, being asked “Do you remember?” felt more like a test of memory and led to frustration or confusion. Similarly, we found that suggesting words to prompt recall was helpful later in the disease but demeaning for people in earlier stages who could still find their words without help.

Providing more help in conversation than is needed can lead people with dementia to withdraw, whereas appropriately adjusting to a person’s communication abilities can empower them to continue to engage socially.

Ultimately, supporting the sense of self and self-worth of a person with dementia in conversations is about finding a communication sweet spot – in other words, matching your approach to their current capabilities.

Changing your default approach to conversations can be challenging, but making simple communication changes can make all the difference. Meaningful conversations are the key to helping your loved one live their days to the fullest, with a sense of personal worth and a feeling of meaningful connection with others.

The Conversation

R. Amanda Cooper is affiliated with the Alzheimer’s Association as a Community Educator.

ref. How to keep dementia from robbing your loved ones of their sense of personhood – tips for caregivers – https://theconversation.com/how-to-keep-dementia-from-robbing-your-loved-ones-of-their-sense-of-personhood-tips-for-caregivers-265477