Why universities are hiring more chief marketing officers – even as budgets shrink

Source: The Conversation – USA (2) – By Prachi Gala, Associate Professor of Marketing, Kennesaw State University

Faculty hiring freezes. Department budget cuts. Declining public trust. Across the United States, higher education is navigating one of its most challenging periods in decades.

Yet, quietly, something else is happening: More universities are adding chief marketing officers, or CMOs, to their top management teams.

From flagship universities to small regional colleges, public universities are increasingly hiring high-level marketing executives to oversee branding, enrollment campaigns and public communications.

Why is this happening now? And is it paying off?

As a marketing professor who researches leadership structures, I recently co-authored one of the first major studies on CMOs in higher education, along with my colleagues Aisha Ghimire and Cong Feng. In the paper, which is under review at the European Journal of Marketing, we examined thousands of data points from 167 public universities between 2010 and 2021. Our goal was to see whether having a chief marketing officer actually affected performance.

Attracting more students, if not more donations

We found that having a chief marketing officer is linked to a significant boost in enrollment. On average, student enrollment rose by 1.6% more at schools that had chief marketing officers than at those that didn’t.

That may not sound like much, but in a competitive environment where many schools are struggling to maintain their numbers, even small gains can mean millions of dollars in tuition revenue. In this context, CMOs appear to help universities better understand prospective students, fine-tune recruitment messages and coordinate outreach across multiple channels – from social media to targeted advertising.

However, when it comes to endowment growth – the other big financial lever for universities – we found no overall positive effect. In fact, in some cases, having a CMO was linked to worse performance. For example, universities whose chief marketing officers held MBAs saw their endowments grow more slowly, or even shrink, over time. The same was true of universities that brought in CMOs from outside the institution.

This doesn’t mean these executives were bad at their jobs. Instead, it suggests that traditional corporate marketing experience doesn’t always translate neatly into the relationship-building that fuels giving in higher education.

Messaging matters more in a turbulent market

If higher education were coasting along, the rise of CMOs might seem like a luxury. But the timing tells a different story.

Since 2010, U.S. colleges and universities have faced declining enrollment, particularly among undergraduates. Public universities alone saw enrollment drop 4% in 2021. The COVID-19 pandemic accelerated these trends – enrollment has never fully recovered – and many states have slashed public funding for higher education. Adding to the pressure, experts expect to see fewer exchange students studying at U.S. universities in the near future.

In this environment, the ability to explain the value of higher education – and a particular institution – has never been more important. Colleges and universities hire CMOs to do exactly that: define and communicate the mission, brand and unique benefits of the university to the public.

Public universities, unlike elite private institutions such as Harvard or Princeton, cannot rely solely on prestige to attract applicants and donors. They compete not only with each other but with private colleges, for-profit institutions and online programs. For them, marketing is a matter of survival.

Inside the new higher ed marketing playbook

When most people think of university marketing, they imagine glossy brochures or billboards during college football season. While those still exist, much of the work is now highly targeted and data-driven.

A CMO might oversee digital ad campaigns aimed at specific students, or lead market research to identify what prospective students want from a degree. They may also handle crisis communications, alumni messaging and internal storytelling to boost morale and cohesion.

At some universities, marketing teams operate almost like internal agencies, serving multiple colleges, research centers and outreach programs. This level of coordination can be especially valuable in large, decentralized institutions where departments historically created their own messaging in isolation.

The rise of CMOs in higher education is not without controversy. Critics argue that growing executive teams – while faculty and other instructors face cuts – signals misplaced priorities. Some faculty worry that marketing language can oversimplify complex academic missions or shift a school’s focus toward revenue generation at the expense of scholarship.

The road ahead: Matching leaders to missions

Our research underscores that CMOs are most effective in specific domains, such as enrollment growth. They are not a one-size-fits-all solution for every challenge a university faces. And certain hiring decisions – such as prioritizing corporate experience over deep institutional knowledge – may, we believe, have unintended consequences for fundraising.

This suggests universities need to be clear about why they’re hiring chief marketing officers and how they’ll integrate them into leadership. Without alignment between the CMO’s expertise and the institution’s strategic goals, the role risks becoming symbolic rather than meaningful.

The trend toward hiring CMOs is likely to continue, especially among public universities competing for a shrinking pool of students and constrained state and federal funding. But our findings suggest that simply adding a marketing executive is not enough. Success depends on matching the right leader to the institution’s needs and supporting them with resources, cross-campus cooperation and a clear mandate.

For some schools, that may mean seeking CMOs with deep experience in higher education advancement rather than corporate branding. For others, it may involve building stronger bridges between marketing and enrollment management, academic affairs and fundraising efforts.

The rise of CMOs isn’t a silver bullet for higher education’s enrollment and funding challenges. But it’s a sign that universities are rethinking how they present themselves to the world – and in today’s competitive, skeptical environment, that might be one of the most important strategic conversations they can have.

The Conversation

Prachi Gala does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why universities are hiring more chief marketing officers – even as budgets shrink – https://theconversation.com/why-universities-are-hiring-more-chief-marketing-officers-even-as-budgets-shrink-262007

When workers’ lives outside work are more fulfilling, it benefits employers too

Source: The Conversation – USA (2) – By Louis Tay, Professor of Industrial-Organizational Psychology, Purdue University

If you never take a break, the extra hours of effort might not pay off. JGI/Tom Grill/Tetra images via Getty Images

Many employers are demanding more from workers these days, pushing them to log as many hours as possible.

Google, for example, told all its employees that they should expect to spend 60 or more hours in the office every week. Some tech companies are demanding 12-hour days, six days a week from their new hires.

Due to a weak job market, more job applicants in health care, engineering and consulting have been told to expect longer hours than employers previously demanded.

On the other hand, companies such as Cisco, Booz Allen Hamilton and Intuit have earned a reputation for supporting a strong work-life balance, according to Glassdoor employee ratings.

To promote work-life balance, they offer flexible work options, give workers tips on setting boundaries and provide benefits to promote mental and physical well-being, including mindfulness and meditation training and personal coaching outside of work.

As a psychologist who studies workplace performance and well-being, I’ve seen abundant evidence that overworking employees can actually make them less productive. Instead, research shows that when employees have the time and space to lead a fulfilling life outside work, such as being free to spend time with their families or pursue creative hobbies, it improves their performance on the job.

Falling prey to the ‘focusing illusion’

For example, a team of researchers reviewed 70 studies looking at how managers support workers’ family lives. They found that when supervisors show consideration for workers’ personal roles as a family member, including providing help to workers and modeling work-family balance, those employees are more loyal and helpful on the job and are also less likely to think about quitting.

Another study found that workers who could take on creative projects outside of work became more creative at work, regardless of their own personalities. This was true even for workers who didn’t consider themselves to be very creative to start with, which suggests it was the workplace culture that really made a difference.

When employers become obsessed with their workers’ productivity, they can get hung up on tracking immediate goals such as the number of emails sent or sales calls made. But they tend to neglect other vital aspects of employees’ lives that, perhaps somewhat ironically, sustain long-term productivity.

Daniel Kahneman, the late psychologist who won a Nobel Prize in economics, called this common misconception the “focusing illusion.”

In this case, many employers underestimate the hidden costs of pushing people to work more hours than they can sustain while maintaining some semblance of work-life balance.

Among them are mental health problems, burnout and high turnover rates. In other words, overly demanding policies can ultimately hinder the performance employers want to see.

Daniel Kahneman explains what the focusing illusion is.

Taking it from Simone Biles

Many top performers recognize the value of work while also valuing the time spent away from it.

“At the end of the day we’re human too,” said Simone Biles, who is widely considered the best gymnast on record. “We have to protect our mind and body, rather than just go out there and do what the world wants us to do.”

Elite athletes like Biles require time away from the spotlight to recuperate and hone their skills.

Others who are at the top of their professions turn to hobbies to recharge their batteries. Albert Einstein’s passion for playing the violin and piano was not merely a diversion from physics – it was instrumental to his groundbreaking scientific insights.

Einstein’s second wife, Elsa Einstein, observed that he took short breaks to play music when he was thinking about his scientific theories.

Simone Biles, the champion gymnast, floats through the air with her eyes firmly riveted on a bar.
Despite being the GOAT of gymnasts, Simone Biles says she is only human – just like everyone else.
Aytac Unal/Anadolu via Getty Images

Taking a break

I’ve reviewed hundreds of studies that show leisure time isn’t a luxury – it fulfills key psychological needs.

Taking longer and more frequent breaks from your job than your workaholic boss might like can help you get more rest, recover from work-related stress and increase your sense of mastery and autonomy.

That’s because when employees find fulfillment outside of work, they tend to become better at their jobs, making their employers more likely to thrive.

That’s what a team of researchers found when they studied the workforce at a large city hospital in the U.S. Employees who thought their bosses supported their family life were happier with their jobs, more loyal and less likely to quit.

Unsurprisingly, the happier, more supported workers also gave their supervisors higher ratings.

Researchers who studied the daily leisure activities of 100 Dutch teachers found that when the educators could take some of their time off to relax and engage in hobbies outside work, they felt better and had an easier time coping with the demands of their job the next day.

Another study of German emergency service workers found that not having enough fun over the weekend, such as socializing with friends and relatives, can undermine job performance the following week.

Finding the hidden costs of overwork

The mental and physical health consequences of overwork – spending too many hours on the job or getting mentally or physically exhausted by your work – are significant and measurable.

According to the World Health Organization, working more than 55 hours per week is associated with a 35% higher risk of having a stroke and a 17% higher risk of developing heart disease.

Working too many hours can also contribute to burnout, a state of physical, emotional and mental exhaustion caused by long-term work stress. The World Health Organization officially recognizes burnout as a work-related health hazard.

A Gallup analysis conducted in March 2025 found that even employees who are engaged at work, meaning that they are highly committed, connected and enthusiastic about what they do for a living, are twice as likely to burn out if they log more than 45 hours a week on the job.

Burnout can be very costly for employers, ranging anywhere from US$4,000 to $20,000 per employee each year. These numbers are calculated from employees’ average hourly salaries and the impact of burnout on factors such as missed workdays and reduced productivity at work. That means, even at the low end of that range, a company with 1,000 workers could lose around $4 million every year due to burnout.

Ultimately, employers that overwork their workers have high turnover rates.

One study found that when mandatory overtime was introduced for South Korean nurses, more of them decided to quit their jobs.

Similarly, a national study of over 17,000 U.S.-based nurses found that when they worked longer hours, turnover increased. This pattern is evident in many other professions besides health care, such as finance and transportation.

Seeing turnover increase

Conservative estimates of the cost of turnover for employers range from 1.5 to two times an employee’s annual salary. This includes the costs of hiring, onboarding and training new employees. Critically, there are also hidden costs that are harder to estimate, such as losing the departed employee’s institutional knowledge and unique connections.

Over time, requiring workers to put in extra hours can undercut an employer’s performance and threaten its viability.

Abundant evidence indicates that supporting employees’ aspirations for happier and more meaningful lives within the workplace and beyond leaves workers and their employers alike better off.

The Conversation

Louis Tay is affiliated with ExpiWell, a mobile-first tech startup that enables researchers to capture momentary experiences of people.

ref. When workers’ lives outside work are more fulfilling, it benefits employers too – https://theconversation.com/when-workers-lives-outside-work-are-more-fulfilling-it-benefits-employers-too-260772

The growing fad of ‘microdosing’ mushrooms is leading to an uptick in poison control center calls and emergency room visits

Source: The Conversation – USA (3) – By Joshua Kellogg, Assistant Professor of Natural Product Chemistry, Penn State

Amanita mushrooms are commonly used in mushroom-based products. Kateryna Kon/Science Photo Library via Getty Images

Imagine you purchase a bag of gummies labeled as a nootropic – a term used to describe substances claimed to enhance mental ability and function, sometimes called “smart drugs.” Within hours of consuming them, your heart starts racing and you’re nauseated and vomiting. Then you begin convulsing and have a seizure, resulting in a trip to the hospital.

You certainly did not expect to have such a severe reaction to an over-the-counter edible product, which is available online and in herbal and vape shops nationwide. What happened?

So-called “microdosing” of mushrooms has been on the rise over the past few years, accompanying a shift in local policy in some areas and increasing research into its potential benefits for mood and mental health. Microdosing involves ingesting small quantities of psychoactive mushrooms – less than a regular dose, not enough to induce a “trip” or psychedelic experience – with the goal of boosting mood, creativity, concentration or productivity.

Psychedelic mushrooms are illegal at the federal level, restricted as a Schedule I substance under the Controlled Substances Act, though some states and local municipalities have begun the process of decriminalizing the possession of these mushrooms.

This greater acceptance of mushrooms and psychedelics has led to a growing market for edible products containing non-hallucinogenic mushroom species that are appearing on the shelf at grocery stores, vape shops, even gas stations, with claims that these products improve mental function.

To meet demand, manufacturers are also turning to other types of mushrooms – both psychoactive and non-psychoactive – some of which are potentially more toxic. But key pieces of information are often missing for consumers to make informed decisions about which products to consume.

I am a natural product scientist at Pennsylvania State University, where my lab specializes in understanding the molecules found in plants, mushrooms and other natural resources and how they can benefit or harm human health. Our team actively researches these small molecules to uncover how they can address infectious and chronic diseases, but also monitors them for toxic or adverse effects on human health.

While nootropic products have potential to boost health, there can be little transparency surrounding many commercial mushroom products, which can have dangerous consequences.

Chemistry and toxicology of psychoactive mushrooms

The main psychoactive components of traditional “magic” mushrooms, found in the genus Psilocybe, are psilocybin and psilocin. These two small molecules are alkaloids that activate receptors in the brain to trigger the main psychoactive effects of magic mushrooms.

Both psilocybin and psilocin have a high therapeutic index – meaning they are generally nontoxic in humans because the amount that must be ingested to be fatal or dangerous is more than 500 times the dose at which they have been shown to be therapeutically effective. Therefore, psilocybin-containing mushrooms are generally considered to have a low potential for acute toxicity in humans, to the point where it is believed to be nearly impossible to achieve a toxic dose from oral consumption.

Although microdosing is becoming increasingly popular, research is ongoing and doctors warn of the dangers.

Demand breeds diversification in mushroom sourcing

With the growth in popularity of psychedelic mushrooms, companies have been looking for ways to meet consumer demand. And in some cases, this has meant finding mushrooms that do not contain psilocybin and are therefore not subject to the same federal restrictions. The result has been an increase in products that come without legal entanglements – products that can contain other types of mushrooms, including lion’s mane, chaga, reishi, maitake and a genus of mushrooms called Amanita, which can be hallucinogenic.

Amanita mushrooms are the quintessential white-flecked, red-capped toadstools – the stereotypical image of a mushroom. These fungi contain very different compounds compared to the Psilocybe mushrooms, such as muscarine and ibotenic acid. These compounds function differently in the brain and, while also capable of producing psychedelic experiences, are generally considered to be more toxic.

Nootropic and other mushroom products are often found as edibles, including chocolates and gummies. However, there is little enforcement surrounding the ingredient labeling of such dietary supplements; products that have a proprietary blend of ingredients generally do not have to report individual ingredients to the species level. This protects trade secrets regarding unique blends of ingredients, but it can also obscure the actual composition of some edible nootropic and microdosing products. And this can have dangerous consequences.

A few bright red mushroom caps with white stalks grow from the ground.
Amanita muscaria mushrooms growing in a garden in Poland in October 2024.
NurPhoto/Getty Images

Increasing adverse effects

The explosion of nootropic mushroom products has led to a wide variety of products on the market that potentially contain wildly differing levels of mushrooms, often blends of multiple mushroom species. And with few reporting guidelines in effect, it can be hard to know exactly what you’re taking.

One case study in Virginia involved five people who were hospitalized after they ingested gummies from different nootropic brands that were labeled to contain muscarine, muscimol and ibotenic acid, all compounds found in Amanita mushrooms.

A follow-up analysis of locally available gummy brands that contained “mushroom nootropic” ingredients revealed the presence of psilocybin, but also caffeine, the stimulant ephedrine and mitragynine, a potential painkiller found in Southeast Asian plant products like kratom. None of these ingredients were listed on the product label. Therefore, the cocktail of mushrooms and substances that these people were exposed to was not necessarily reflected on the label at the time of purchase.

The increasing use of other, potentially toxic mushrooms in over-the-counter products has been reflected in reported poisoning cases in the United States. In 2016, out of more than 6,400 mushroom-related poisoning cases in the U.S., only 45 involved Amanita mushrooms.

In the past few years since certain states began decriminalizing psilocybin, the U.S. has seen an increase in calls and reports to poison control centers of people feeling nauseous and experiencing vomiting, seizures, cardiovascular symptoms and other adverse effects after ingesting edible mushroom products such as chocolates and gummies. This prompted a multistate investigation beginning in 2023 that uncovered over 180 cases in 34 states of people who had ingested a particular brand of mushroom-based edibles, Diamond Shruumz.

A 2024 recall required stores to remove these products from their shelves. And in late 2024, the FDA put out a letter warning consumers and manufacturers of the dangers associated with Amanita mushrooms, saying they “do not meet the Generally Recognized As Safe, or GRAS, standard” and are “unapproved food additives.” Despite this warning, such products are still available from producers.

Even when a product is labeled with the relevant ingredients, mushrooms are notoriously easy to misidentify when collected. Numerous mushroom species have similar shapes, colors and habits.

But, despite their visual similarities, these different mushrooms can have drastically different chemistry and toxicity. This even plagues foragers of culinary mushrooms, with hundreds of emergency department visits due to fungal misidentification every year in the U.S.

There is little current regulation or oversight for species identification in dietary supplements or over-the-counter mushroom edible products, leaving consumers at the mercy of producers to accurately list all raw products and ingredients on the product label.

The Conversation

Joshua Kellogg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The growing fad of ‘microdosing’ mushrooms is leading to an uptick in poison control center calls and emergency room visits – https://theconversation.com/the-growing-fad-of-microdosing-mushrooms-is-leading-to-an-uptick-in-poison-control-center-calls-and-emergency-room-visits-252866

RFK Jr.’s plans to overhaul ‘vaccine court’ system would face legal and scientific challenges

Source: The Conversation – USA (3) – By Anna Kirkland, Professor of Women’s and Gender Studies, University of Michigan

The Vaccine Injury Compensation Program was established in 1986 by an act of Congress. MarsBars/iStock via Getty Images Plus

For almost 40 years, people who suspect they’ve been harmed by a vaccine have been able to turn to a little-known system called the Vaccine Injury Compensation Program – often simply called the vaccine court.

Health and Human Services Secretary Robert F. Kennedy Jr. has long been a critic of the vaccine court, calling it “biased” against compensating people, slow and unfair. He has said that he wants to “revolutionize” or “fix” this system.

I’m a scholar of law, health and medicine. I investigated the history, politics and debates about the Vaccine Injury Compensation Program in my book “Vaccine Court: The Law and Politics of Injury.”

Although vaccines are extensively tested and monitored, and are both overwhelmingly safe for the vast majority of people and extremely cost-effective, some people will experience a harmful reaction to a vaccine. The vaccine court establishes a way to figure out who those people are and to provide justice to them.

Having studied the vaccine court for 15 years, I agree that it could use some fixing. But changing it dramatically will be difficult and potentially damaging to public health.

Deciphering vaccine injuries

The Vaccine Injury Compensation Program is essentially a process that enables doctors, lawyers, patients, parents and government officials to determine who deserves compensation for a legitimate vaccine injury.

It was established in 1986 by an act of Congress to solve a specific social problem: possible vaccine injuries to children from the whole-cell pertussis vaccine. That vaccine, which was discontinued in the U.S. in the 1990s, could cause alarming side effects like prolonged crying and convulsions. Parents sued vaccine manufacturers, and some stopped producing vaccines.

Congress was worried that lawsuits would collapse the country’s vaccine supply, allowing diseases to make a comeback. The National Childhood Vaccine Injury Act of 1986 created the vaccine court process and shielded vaccine manufacturers from these lawsuits.

Here’s how it works: A person who feels they have experienced a vaccine-related injury files a claim to be heard by a legal official called a special master in the U.S. Court of Federal Claims. The Health and Human Services secretary is named as the defendant and is represented by Department of Justice attorneys.

A syringe leaning against a gavel on a white background
Many experts agree that the vaccine compensation program could use some updates.
t_kimura via iStock / Getty Images Plus

Doctors who work for HHS evaluate the medical records and make a recommendation about whether they think the vaccine caused the person’s medical problem. Some agreed-upon vaccine injuries are listed for automatic compensation, while other outcomes that are scientifically contested go through a hearing to determine if the vaccine caused the problem.

Awards come from a trust fund, built up through a 75-cent excise tax on each dose of covered vaccine sold. Petitioners’ attorneys who specialize in vaccine injury claims are paid by the trust fund, whether they win or lose.

Some updates are needed

Much has changed in the decades since Congress wrote the law, but Congress has not enacted updates to keep up.

For instance, the law supplies only eight special masters to hear all the cases, but the caseload has risen dramatically as more vaccines have been covered by the law. It set a damages cap of US$250,000 in 1986 but did not account for inflation. The statute of limitations for an injury is three years, but in my research, I found many people file too late and miss their chance.

When the law was written, it covered only vaccines recommended for children. In 2023, the program expanded to include vaccines for pregnant women. Vaccines recommended only for adults, such as the shingles vaccine, are not covered. COVID-19 vaccine claims go to a separate system for emergency countermeasure vaccines that has been widely criticized. These vaccines could be added to the program, as lawyers who bring claims there have advocated.

These reform ideas are “friendly amendments” with bipartisan support. Kennedy has mentioned some of them, too.

A complex system is hard to revolutionize

Kennedy hasn’t publicly stated enough details about his plan for the vaccine court to reveal the changes he intends to make. The first and least disruptive course of action would be to ask Congress to pass the bipartisan reforms noted above.

But some of his comments suggest he may seek to dismantle it, not fix it. None of his options are straightforward, however, and consequences are hard to predict.

Robert F. Kennedy Jr., Secretary of the Department of Health and Human Services, testifying in Congress
HHS Secretary Robert Kennedy Jr. has said he plans to revolutionize the vaccine court.
Kayla Bartkowski / Staff, Getty Images News

Directly changing the vaccine court’s structure would probably be the most difficult path. It requires Congress to amend the 1986 law that set it up and President Donald Trump to sign the legislation. Passing a bill to dismantle the court would require the same process. Either direction involves all the difficulties of getting a contentious bill through Congress. Even the “friendly amendments” are hard – a 2021 bill to fix the vaccine court was introduced but failed to advance.

However, there are several less direct possibilities.

Adding autism to the injuries list

Kennedy has long supported discredited claims about harms from vaccines, but the vaccine court has been a bulwark against claims that lack mainstream scientific support. For example, the vaccine court held a yearslong court process from 2002 to 2010 and found that autism was not a vaccine injury. The autism trials drew on 50 expert reports, 939 medical articles and 28 experts testifying on the record. The special masters deciding the cases found that none of the causation hypotheses put forward to connect autism and vaccines were reliable as medical or scientific theories.

Much of Kennedy’s ire is directed at the special masters, who he claims “prioritize the solvency” of the system “over their duty to compensate victims.” But the special masters do not work for him. Rather, they are appointed by a majority of the judges in the Court of Federal Claims for four-year terms – and those judges themselves have 15-year terms. Kennedy cannot legally remove any of them in the middle of their service to install new judges who share his views.

Given that, he may seek to put conditions like autism on the list of presumed vaccine injuries, in effect overturning the special masters’ decisions. Revising the list of recognized injuries to add ones without medical evidence is within Kennedy’s powers, but it would still be difficult. It requires a long administrative process with feedback from an advisory committee and the public. Such revisions have historically been controversial, and are usually linked to major scientific reviews of their validity.

Public health and medical groups are already mobilized against Kennedy’s vaccine policy moves. If he failed to follow legally required procedures while adding new injuries to the list, he could be sued to stop the changes.

Targeting vaccine manufacturers

Kennedy could also lean on his newly reconstituted Advisory Committee on Immunization Practices to withdraw recommendations for certain vaccines, which would also remove them from eligibility in the vaccine compensation court. Lawsuits against manufacturers could then go straight to regular courts. On Aug. 14, 2025, the Department of Health and Human Services may have taken a step in this direction by announcing the revival of a childhood vaccine safety task force in response to a lawsuit by anti-vaccine activists.

Kennedy has also supported legislation that would allow claims currently heard in vaccine court to go to regular courts. These drastic reforms could essentially dismantle the vaccine court.

People claiming vaccine injuries could hope to win damages through personal injury lawsuits in the civil justice system instead of vaccine court, perhaps by convincing a jury or getting a settlement. These types of lawsuits were what prompted the creation of the vaccine court in the first place. But these lawsuits could be hard to win. There is a higher bar for scientific evidence in regular courts than in vaccine court, and plaintiffs would have to sue large corporations rather than file a government claim.

Raising the idea of reforming the vaccine court has provoked strong reactions across the many groups with a stake in the program. It is a complex system with multiple constituents, and Kennedy’s approaches so far pull in different directions. The push to revolutionize it will test the strength of its complex design, but the vaccine court may yet hold up.

The Conversation

Anna Kirkland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. RFK Jr.’s plans to overhaul ‘vaccine court’ system would face legal and scientific challenges – https://theconversation.com/rfk-jr-s-plans-to-overhaul-vaccine-court-system-would-face-legal-and-scientific-challenges-261451

Protestant ideas shaped Americans’ support for birth control – and the Supreme Court ruling protecting a husband and wife’s right to contraception

Source: The Conversation – USA (3) – By Samira Mehta, Associate Professor of Women and Gender Studies & Jewish Studies, University of Colorado Boulder

Sixty years ago, the Supreme Court ruled that married couples have a constitutional right to use contraception. Griswold v. Connecticut, decided in 1965, made it illegal for states to outlaw birth control for spouses – a right that would not be extended to single people until 1972.

Griswold granted married couples this right on the grounds of privacy. Though the Constitution does not explicitly name a right to privacy, justices argued that it could be inferred from several amendments – an idea cited in later rulings on abortion and LGBTQ+ rights.

According to the Griswold ruling, the right of privacy within marriage was “older than the Bill of Rights – older than our political parties, older than our school system.”

“Marriage is a coming together for better or for worse, hopefully enduring, and intimate to the degree of being sacred,” the majority opinion reads – it represents a coming together for a “noble” purpose.

In short, the Supreme Court framed marital sex as natural, intimate and, perhaps most importantly, sacred. These characteristics, they argued, allowed it to exist beyond the gaze of the law.

Here’s the thing, though: Historians know that marriage hasn’t always been a private affair. Nor has it always been treated as sacred – not under the law, at any rate. As a scholar completing a book on the history of religion and contraception, I argue that the attitudes toward marriage and contraception reflected in the Griswold decision were deeply rooted in Protestant thought.

Private and public

Throughout European history, royal couples getting married often had witnesses leading them to their bedrooms and remaining there – or waiting right outside. The marriage was not considered legally binding until it was consummated. At a time when royal weddings were often intended to shore up alliances, knowing that the marriage had been consummated ensured that any political agreements were binding and at least suggested that heirs would be legitimate.

A dark, blurry painting of people in aristocratic dress milling about a large, airy chamber with several beds in it.
A bedding ceremony after the wedding of Carl X Gustav of Sweden and Hedwig Eleanor of Sweden, painted by Jürgen Ovens.
Wikimedia Commons

Among the more “common folks,” today’s standard of marital privacy could not be achieved even within the family, simply because of space. In medieval and early modern Britain, whose legal system largely grounds American law, it was common for whole households to sleep or even live in just one room, including guests and apprentices. The reality of multipurpose, shared space was also the case in the American Colonies, on the frontier, and in the living quarters of enslaved people.

For much of its history, then, marriage was not the legalization of a private intimacy, but a public act made for a variety of political and economic reasons.

And while marriage was often understood as sacred, interpretations varied. Catholicism did consider marriage a sacrament, but it was not the most holy way to live – a status reserved for celibate priests and nuns.

At other times, marriage was not respected. Marriages between enslaved people, even when sanctioned by churches, held no weight in American law.

So why did the U.S. Supreme Court eventually assert that the state should not peer into the marital bedchamber?

Scholars such as Janet Jakobsen have argued that, during the Protestant Reformation, one of the ways that Protestants differentiated themselves from Catholics was by elevating marriage to the most sacred form of human sexuality.

Reformers such as Martin Luther criticized clerical celibacy and were themselves married. But the Protestant move toward married clergy was also about other kinds of freedom, according to Jakobsen. Religious and sexual freedom were intertwined: Marriage itself, not the church, became the institution where a couple could freely regulate their sexuality.

Praise for the pill

By the time the Supreme Court argued that marriage was, by nature, private and sacred, there was a long Protestant history of making that case.

But there was an even more recent Protestant history of making that argument specifically about birth control.

As new contraceptive options emerged in the 20th century, from the diaphragm to birth control pills, Christian leaders wrestled with what to think. The Catholic Church remained steadfastly opposed to contraception, although some Catholic theologians began to argue in favor of loosening the ban. Many Protestant denominations, meanwhile, slowly came to accept it – and then to endorse it.

A black and white photograph shows women with baby carriages lined up on a street.
Women with children stand outside Sanger Clinic – the first birth control clinic in the United States – in Brooklyn, N.Y., in 1916.
Circa Images/GHI/Universal History Archive/Universal Images Group via Getty Images

Christians who came to support birth control framed it as a moral good: a tool that would allow married couples to have satisfying sex lives, while protecting women from the health risks of frequent pregnancies. Richard Fagley, the executive secretary of the Commission of the Churches on International Affairs, was one of the architects of this new theological perspective. He argued in 1960 that medical knowledge, including contraception, was “a liberating gift from God, to be used to the glory of God, in accordance with his will for men.”

By the time the pill came on the market in the 1960s, liberal Protestants, as well as many conservatives, were applying ideas about “Christian duty” to a new theology of “responsible parenthood.”

The best kind of family, they argued, was a father with a steady job and a homemaker mother. Limiting family size could help make that financially possible – and decrease divorce, as well.

The National Council of Churches, an organization representing many Protestant and some Orthodox churches, wrote in a statement approved by most of its members that they acknowledged the value of sex in marriage with or without procreation, because it was central to the “mutual love and companionship” of the marriage bond.

That said, they still emphasized parenthood as “a divinely ordained purpose of marriage.” Parenthood was, in the council’s eyes, a “participation in God’s continuing creation, which calls for awe, gratitude, and a sense of high responsibility.”

When the Supreme Court struck down the constitutional right to an abortion in 2022, the majority opinion noted, “Nothing in this opinion should be understood to cast doubt on precedents that do not concern abortion.” Justice Clarence Thomas, however, wrote a concurring opinion calling for the court to revisit other decisions with similar reasoning, including the right to same-sex marriage and Griswold itself.

It seems important to look back on 1965, at the many voices that shaped the Griswold case, including secular feminists, medical doctors and Christian clergy. The decision’s supporters believed it would make women’s lives better, but also families’ lives – precisely by giving them privacy and autonomy.

Portions of this article originally appeared in a previous article published on May 24, 2022.

The Conversation

Samira Mehta receives funding from the Henry Luce Foundation.

ref. Protestant ideas shaped Americans’ support for birth control – and the Supreme Court ruling protecting a husband and wife’s right to contraception – https://theconversation.com/protestant-ideas-shaped-americans-support-for-birth-control-and-the-supreme-court-ruling-protecting-a-husband-and-wifes-right-to-contraception-249424

Older Americans are using AI − study shows how and what they think of it

Source: The Conversation – USA – By Robin Brewer, Associate Professor of Information, University of Michigan

Most older adults who use AI use smart speaker assistants. Six_Characters/E+ via Getty Images

Artificial intelligence is a lively topic of conversation in schools and workplaces, which could lead you to believe that only younger people use it. However, older Americans are also using AI. This raises the questions of what they’re doing with the technology and what they think of it.

I’m a researcher who studies older age, disability and technology use. I partnered with the University of Michigan’s National Poll on Healthy Aging to survey nearly 3,000 Americans over the age of 50. We asked them whether and how they use AI and what concerns they have about using it.

Of the older people we surveyed, 55% responded that they had used some type of AI technology that they can speak to, like Amazon’s Alexa voice assistant, or type to, like OpenAI’s ChatGPT chatbot. Voice assistants were overwhelmingly more popular than text chatbots: Half of them reported using a voice assistant within the past year, compared to 1 in 4 who used a chatbot.

Popular, among some

Independent living continues to be a major goal of older Americans, who often either do not want to live in long-term care communities or cannot afford to, and AI may be a tool to support this goal. Our findings show that older adults who use AI in their homes find it helpful for living independently and safely.

They mostly used these technologies for entertainment or searching for information, but some of their responses show more creative uses, such as generating text, creating images or planning vacations.

Nearly 1 in 3 older adults reported using AI-powered home security devices, including doorbells, outdoor cameras and alarm systems. Nearly all of those people – 96% – felt safer using them.

While there has been some concern about privacy when using cameras indoors to monitor older people, cameras aimed outdoors seem to provide a sense of security for those who may be aging in their homes alone or without family nearby. Of the 35% of older adults who reported using AI-powered home security systems, 96% said they were beneficial.

a video monitor view of a person wearing a yellow safety vest carrying packages
AI-powered security devices such as smart doorbells make many older adults feel safer.
O2O Creative/E+ via Getty Images

However, when we dove into which older adults are using AI, we saw that demographics matter. Specifically, those in better health, with more education and higher incomes were more likely to have used AI-powered voice assistants and home security devices in the past year. This pattern seems to follow adoption trends of other technologies such as smartphones.

Trusting AI is tricky

As more information about AI’s accuracy emerges, so do questions about whether people can trust it. Our survey results show that older Americans are split on whether to trust content that was generated by AI: 54% said they trust AI, and 46% said they do not. People who trusted AI more were more likely to have used some type of AI technology within the past year.

Further, AI-generated content can sometimes look correct but be inaccurate. Being able to identify incorrect information from AI is important for assessing whether and how to use AI-generated search results or chatbots. However, only half of the older people surveyed were confident that they could identify whether content from AI was incorrect.

More educated users were more likely to say they felt confident they could spot inaccuracies. Conversely, older adults who reported lower levels of physical and mental health were less likely to trust AI-generated content.

What to do?

Together, these findings reflect a common cycle of technology adoption, pervasive even among younger demographics, in which more educated and healthier people are among the first to adopt and become aware of newer technologies. This raises questions about how best to reach all older people about the benefits and risks of AI.

How can older people who are not AI users get support for learning more so that they can make informed decisions about whether to use it? How can institutions develop better training and awareness tools so that older people who trust AI avoid trusting it too much or inappropriately using AI to make important decisions without understanding the risks?

Our survey results highlight potential starting points for developing AI literacy tools for older adults. Nine in 10 older people wanted to know when information had been generated by AI. We are starting to see AI labels on search engine results, such as Google search’s AI snippets.

a screenshot of a webpage showing a block of text
Some AI-generated content, like this Google AI Overview search summary, is clearly labeled as AI, but not all are.
Screenshot by The Conversation

Michigan and other states have adopted policies for disclosing AI content in political ads, but these notices could be made more visible in other contexts, such as nonpolitical advertising and on social media. Further, nearly 80% of older people wanted to learn more about AI risks – where it might go wrong and what to do about it.

Policymakers can focus on enforcing AI notices that signal content was generated by AI, particularly at a critical time when the U.S. is considering revising its AI policies to do just the opposite – removing language about risk, discrimination and misinformation – based on a new executive order.

Overall, our findings show that AI can support healthy aging. However, overtrust and mistrust of AI could be addressed with better training tools and policies to make risks more visible.

The Conversation

Robin Brewer receives funding from the National Science Foundation and the National Institutes of Health. She has previously received funding from Google, the Retirement Research Foundation, and the U.S. Department of Transportation.

ref. Older Americans are using AI − study shows how and what they think of it – https://theconversation.com/older-americans-are-using-ai-study-shows-how-and-what-they-think-of-it-262411

Genomics can help insect farmers avoid pitfalls of domestication

Source: The Conversation – USA – By Christine Picard, Professor of Biology, Indiana University

A biologist maintains a large population of black soldier flies for protein farming. picture alliance/Contributor via Getty Images

Insects are becoming increasingly popular to grow on farms as feed for other animals, pet food and potentially as food for people. The process of bringing a wild animal into an artificial environment, known as domestication, comes with unique challenges. Luckily, there are important lessons to be learned from all the other animals people have domesticated over millennia.

As researchers who study how domesticating animals changes their genes, we believe that recognizing the vulnerabilities that come with domestication is important. Today’s powerful biotechnology tools can help researchers anticipate and head off issues early on.

Domestication is nothing new

From grain domestication starting as far back as 12,000 years ago to today’s high-tech, genome-based breeding strategies, humans have long bent nature to suit their purposes. By selectively breeding individual plants or animals that have desirable traits – be it appearance, size or behavior – humans have domesticated a whole host of species.

The same principle underlies all domestication attempts, from dogs to crops. A breeder identifies an individual with a desired trait – whether that’s a dog’s talent for tracking or a plant’s ability to withstand pests. Then they breed it to confirm that the desired trait can be passed down to offspring. If it works, the breeder can grow lots of descendants in a lineage with the genomic advantage.

People have made crops resilient to disease and environmental challenges, docile cows that yield more milk or meat, large-breasted poultry and cute dogs.

A long history of insects working for people

Insect domestication is also far from new. People have reared silkworms (Bombyx mori) to produce silk for over 5,000 years. But selective breeding and isolation from wild relatives have led to their inability to fly, dependence on one food source and need for assistance to reproduce. As a result, silkworms are wholly reliant on humans for survival, and the original species doesn’t exist anymore.

A white moth sitting on a white cocoon on top of a leaf
Silk moths have lost their ability to fly and are completely dependent on humans for survival.
baobao ou/Moment Open via Getty Images

Similarly, people have maintained colonies of the western honeybee (Apis mellifera) for pollination and honey production for centuries. But bees are at risk due to colony collapse disorder, a phenomenon where worker bees disappear from seemingly healthy hives. The causes of colony collapse disorder are unknown; researchers are investigating disease and pesticides as possible factors.

Now the insect agriculture industry has set its sights on domesticating some other insects as a source of sustainably farmed protein for other animals or people.

Insects such as the black soldier fly (Hermetia illucens) and the mealworm (Tenebrio molitor) can grow on existing organic waste streams. Rearing them on organic farm and food waste circularizes the agricultural system and reduces the environmental footprint of growing proteins.

But these insects will need to be grown at scale. Modern agriculture relies on monocultures of species that allow for uniformity in size and synchronized growth and harvest. Domesticating wild insects will be necessary to turn them into farmed animals.

A large number of white larvae in a dry food medium
Black soldier fly larvae feed on a mixture of wheat bran, corn and alfalfa when reared in labs and farms.
Christine Picard

Domestication has an immunity downside

Chickens today grow faster and bigger than ever. But factory-farmed animals are genetically very homogeneous. Moreover, people take care of everything for these domesticated animals. They have easy access to food and are given antibiotics and vaccines for their health and safety.

Consequently, industrially farmed chickens have lost a lot of their immune abilities. Building strong disease-fighting proteins requires a lot of energy. Since their spotless, controlled environments protect them, those immune genes are just not needed. The energy their bodies would typically use to protect themselves can instead be used to grow bigger.

In the wild, individuals with faulty immune genes would likely be killed by pathogens, quickly wiping these bad genes out from the population. But in a domesticated environment, such individuals can survive and pass on potentially terrible genes.

The H5N1 bird flu provides a recent example of what can go wrong when a homogeneous population of domesticated animals encounters a dangerous pathogen. When disease broke out, the poor immune systems of domesticated chickens cracked under the pressure. The disease can spread quickly through large facilities, and eventually all chickens there must be euthanized.

Hundreds of brown chickens with red crowns being reared in an indoor facility
Industrially farmed chickens are genetically homogeneous and have lost much of their immune defenses.
pidjoe/E+ via Getty Images

Domestication and the risks of monoculture

Weak immune systems aren’t the only reason the bird flu spread like it did.

Domestication often involves growing large numbers of a single species in small concentrated areas, referred to as a monoculture. All the individuals in a monoculture are roughly the same, both physically and in their genes, so they all have the same susceptibilities.

Banana cultivars are one example. Banana plants grown in the early 1900s were all descendants of a single clone, named Gros Michel. But when the deadly Panama disease fungus swept through, the plants had no defenses and the cultivar was decimated.

Banana growers turned to the Cavendish variety, grown in the largest banana farms today. The banana industry remains vulnerable to the same kind of risk that took down Gros Michel. A new fungal strain is on the rise, and scientists are rushing to head off a global Cavendish banana collapse.

Lessons about weaknesses that come with domestication are important to the relatively new industry advancing insects as the future of sustainable protein production and organic waste recycling.

How genomics can help correct course

Modern genomics can give insect agriculture a new approach to quality control. Technological tools can help researchers learn how an organism’s genes relate to its physical traits. With this knowledge, scientists can help organisms undergoing domestication bypass potential downsides of the process.

For instance, scientists combined data from hundreds of different domesticated tomato genomes, as well as their wild counterparts. They discovered something you’ve probably experienced – while selecting for longer shelf life, tomato flavor genes were unintentionally bred out.

A similar approach of screening genomes has allowed scientists to discover the combination of genes that enhances milk production in dairy cows. Farmers can intentionally breed individuals with the right combinations of milk-producing genes while keeping an eye on what other genes the animals have or lack. This process ensures that breeders don’t lose valuable traits, such as robust immune systems or high fertility rates, while selecting for economically valuable traits during domestication.

Insect breeders can take advantage of these genetic tools from the outset. Tracking an animal population’s genetic markers is like monitoring patients’ vital signs in the hospital. Insect breeders can look at genes to assess colony health and the need for interventions. With regular genetic monitoring of the farmed population, if they begin to see individuals with markers for some “bad” genes, they can intervene right away, instead of waiting for a disaster.

Mechanisms to remedy an emerging disaster include bringing in a new brood from the wild or another colony whose genes can refresh the domesticated population’s inbred and homogeneous genome. Additionally, researchers could use gene-editing techniques such as CRISPR-Cas9 to replicate healthy and productive combinations of genes in a whole new generation of domesticated insects.

Genomics-assisted breeding is a supplement to standard practices and not a replacement. It can help breeders see which traits are at risk, which ones are evolving, and where natural reservoirs of genetic diversity might be found. It allows breeders to make more informed decisions, identify genetic problems and be proactive rather than reactive.

By harnessing the power of genomics, the insect agriculture industry can avoid setting itself up for an accidental future collapse while continuing to make inroads on sustainable protein production and circularizing the agricultural ecosystem.

The Conversation

Christine Picard receives funding in part from the National Science Foundation through the Industry-University Cooperative Research Centers NSF cooperative agreements. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the Industry Advisory Board Members of the Center for Insect Biomanufacturing and Innovation. Christine Picard is a member of the North American Coalition of Insect Agriculture, an Associate Editor for the Journal of Insects as Food and Feed, and the Treasurer for the Academic Society of Insects as Food and Feed.

Hector Rosche-Flores receives funding in part from the National Science Foundation through the Industry-University Cooperative Research Centers NSF cooperative agreements. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the Industry Advisory Board Members of the Center for Insect Biomanufacturing and Innovation.

ref. Genomics can help insect farmers avoid pitfalls of domestication – https://theconversation.com/genomics-can-help-insect-farmers-avoid-pitfalls-of-domestication-261357

‘It’s a complicated time to be a white Southerner’ − and their views on race reflect that

Source: The Conversation – USA – By James M. Thomas, Professor of Sociology, University of Mississippi

Scholars interviewed white Southerners to get past the stereotypes people hold of them. CGInspiration, iStock/Getty Images Plus

Historian Nell Painter remarked in 2011, “Being white these days isn’t what it used to be.”

For the past decade, wave upon wave of protests against police violence and mass incarceration have drawn the public’s attention toward the continued significance of America’s color line, the set of formal and informal rules that maintain white Americans’ elevated social and economic advantages.

Meanwhile, an explosion of popular literature scrutinizes those rules and places white people’s elevated status in sharp relief.

How are white people making sense of these tensions?

In his 1935 publication “Black Reconstruction in America,” sociologist W.E.B. Du Bois described the “public and psychological wage” paid to white workers in the post-Reconstruction era on account of their being white. Today those “wages of whiteness” remain as durable as ever. Nearly 60 years after the high-water mark of the Civil Rights movement, its aims have not been met.

A man with glasses and wearing a suit talks from behind a table into a large microphone.
Sociologist W.E.B. Du Bois described in 1935 the ‘public and psychological wage’ paid to white workers because they were white.
Keystone-France/Gamma-Rapho via Getty Images

White people still enjoy better jobs, health care, housing, schooling and more.

I’m a sociologist of race and racism. My team of graduate student researchers and I have spent the past four years interviewing white people to understand how they make sense of their white racial status today. We concentrated our efforts among white people living in the U.S. South because that region is seen as more responsible for shaping what it means to be white, and the social and economic advantages of being white, than any other.

There is not much research on how white people think about what it means to be white. Meanwhile, popular and scholarly treatments of white Southerners as overwhelmingly conservative and racially regressive abound.

Some white Southerners we spoke with fit those tropes. Many others do not. Overall, we found white Southerners across the political spectrum actively grappling with their white racial status.

As Walter, 38, from Clarksdale, Mississippi, told us, “It’s a complicated time to be a white Southerner.” We use pseudonyms to protect anonymity.

Crises cast a long shadow

The Italian political theorist Antonio Gramsci defined a crisis as a historical period in which “the old is dying and the new cannot be born.” Within this space between, Gramsci argued, “morbid phenomena of the most varied kind come to pass.”

Many people we spoke with lived through the defining ruptures of the 20th century that forever changed the South, and America too: the formal demise of Jim Crow rule, violent and bloody struggles over integration, and the slow, uneven march toward equal rights for all Americans.

Still others came of age against the backdrop of the defining shocks of this new century: 9/11 and the war on terrorism, Hurricane Katrina, the racial backlash to the election of Barack Obama, and the Black Lives Matter movement.

For some, the political rise of Donald Trump and his willingness to traffic in racist rhetoric constituted a crisis, too. “He embodies everything that is immoral,” said Ned, 45, from Vardaman, Mississippi. The town Ned is from is named for James K. Vardaman, former governor of Mississippi who once declared that “if it is necessary every Negro in the state will be lynched; it will be done to maintain white supremacy.”

Taken together, these crises cast a long shadow of uncertainty over white people’s elevated social position and anchor how white Southerners understand their white racial status.

Resistance to desegregation

Miriam, 61, from Natchez, Mississippi, grew up under the last gasps of Jim Crow. She recalled her parents pulling her from public school and sending her to a nearby private school shortly after the Supreme Court’s 1969 Alexander v. Holmes ruling, which ordered the immediate desegregation of Southern schools.

Her new school was one of hundreds of “segregation academies” founded across the South in the aftermath of the court’s ruling.

“You didn’t go over there, by the Black school,” Miriam recalled. “You stayed over by the white school. … I remember as a kid that made quite an impression.”

Reflecting on what it means to be a white Southerner today, Miriam drew from these experiences living under the region’s long shadow of segregation.

“There’s been so much hatred and so much unpleasantness. I want to do everything I can to make relations better,” she said. “I think that is part of being white in the South.”

Daryl, 42, a self-described conservative, lived in several Southern communities as a child, including Charlotte, North Carolina, in the mid-1980s as the city wrestled with its court-ordered school busing program. Daryl recalled his parents and other white people complaining about the poor quality of newly integrated schools, including telling him “stories of things like needles on the playground.”

Daryl rarely, if ever, talked with his own parents about race, but he broaches these topics with his own children today.

A self-described “childhood racist,” Daryl draws from his experiences to frame his conversations with his own children. “I remind them that there used to be this day where this was OK, and this is how things were thought of,” he says.

‘Good reason to be mad’

The region’s history also includes more contemporary crises.

Lorna, 34, is a registered Republican from Marion, Arkansas. She described how recent protests against police violence are affecting her understanding of America’s color line.

“I feel like Black people are mad or angry. They’re tired of violence and, you know, profiling,” she said. “And I don’t think it’s just in the South. I think it’s all over the United States. And they have a good reason to be mad.”

Kenneth, 35, lives in Memphis. Like Lorna and others, Kenneth’s sense of what it means to be white has been shaped by more recent crises, including the racial backlash to Obama’s elections in 2008 and 2012 that motivated Trump’s election in 2016.

Reflecting on these episodes, Kenneth believes he has an obligation as a white Southerner to become more informed about “the legacy of racism in the South and the impact that it still has today.”

Becoming more informed, Kenneth says, “will cause me to reflect on how I should think about that, and what, if anything, I should do differently now.”

A classroom with only white children, sitting at typewriters.
The scholars interviewed one woman who was sent to a segregation academy, like this one in Virginia, by her parents. ‘There’s been so much hatred. … I want to do everything I can to make relations better,’ she said.
Trikosko/Library of Congress/Interim Archives/Getty Images

Uncovering what’s minimized or ignored

Our interviews reveal a range of beliefs and attitudes among white Southerners often discounted or dismissed altogether by more popular and scholarly treatments of the region.

Contrary to research that finds white people minimizing or ignoring their elevated social status, the white Southerners we spoke with showed a profound awareness of the advantages their white racial status affords them.

“I have to admit I’m glad I’m white,” said Luke, 75, from Melber, Kentucky. “Because in the United States you probably have a little advantage.”

Our research shows that how white people make sense of who they are is also a matter of where they are.

Places – and not just Southern ones – are imbued with ideas and beliefs that give meaning and significance to the people within them. The region’s history of racial conflict, meanwhile, renders the “wages of whiteness” more plain to see for white Southerners in ways we are only beginning to understand.

Put plainly: Place matters for how race matters.

Emphasizing this more complicated understanding of race and place allows for a more complete account of the South, including how the unfolding racial dramas of the past several decades continue to shape the region and its people.

The Conversation

James M. Thomas’s research has been funded by the National Science Foundation and the Russell Sage Foundation.

ref. ‘It’s a complicated time to be a white Southerner’ − and their views on race reflect that – https://theconversation.com/its-a-complicated-time-to-be-a-white-southerner-and-their-views-on-race-reflect-that-261454

Why rural Coloradans feel ignored − a resentment as old as America itself

Source: The Conversation – USA – By Kayla Gabehart, Assistant Professor of Environmental Policy, Michigan Technological University

Many rural Americans feel largely left out of American culture. Helen H. Richardson/Getty Images

Many rural Coloradans, especially in agricultural communities, feel looked down on by their urban counterparts. One cattle rancher I spoke to put it plainly. “It’s an attitude … we are the idiots … we are the dumb farmers … we don’t really matter.”

The sentiment is also portrayed in popular culture such as the hit TV show “Yellowstone.”

“It’s the one constant in life. You build something worth having, someone’s gonna try to take it,” says patriarch John Dutton. He was facing repeated threats by developers from “the city” to annex his land for a luxury hotel and resort development.

As a policy scholar, I’ve talked to and interviewed many dozens of people in rural areas in Colorado. I’ve also read hundreds of newspaper articles and watched hundreds of hours of legislative testimony that capture the sentiment of rural people being left behind, left out and snubbed by their urban counterparts.

Recently, I studied the divide between rural and urban Coloradans by looking at their responses to four statewide policies: a designated day to forgo eating meat, two political appointees and the ongoing wolf reintroduction.

These policies, while specific to Colorado, are symptoms of something larger. Namely, an ever-urbanizing, globalized world that rural, agricultural citizens feel is leaving them behind.

‘MeatOut’ or misstep?

My expertise doesn’t just come from my research – I’ve lived it.

I grew up in a rural community in Elbert County, Colorado, about an hour and a half southeast of Denver.

In early 2021, Gov. Jared Polis declared via proclamation that March 20 would be a “MeatOut Day.” For health and environmental reasons, Colorado residents were encouraged to forgo meat for a single day.

Supported by the Farm Animal Rights Movement, MeatOuts have been promoted across the U.S. since the 1980s. Gubernatorial proclamations, hundreds of which are issued each year, are purely ceremonial and carry no long-term policy implications, so they typically go largely unnoticed. And in Denver, Colorado’s metropolitan center, this one did too.

Not so in rural Colorado.

My neighbors in Elbert County promptly responded with outrage, flying banners and flags declaring their support for agriculture and a carnivorous diet.

One rancher from Nathrop painted a stack of hay bales to say, “Eat Beef Everyday.”

Communities all over the state, and even in neighboring states, responded with “MeatIns,” where they gathered to eat meat and celebrate agriculture and the rural way of life. They also coupled these events with fundraisers for various causes, raising hundreds of thousands of dollars across the state. While Polis backed off the MeatOut after 2021, Denver Mayor Mike Johnston has, just this year, supported a similar “Eat Less Meat” campaign, prompting similar rural outrage.

Did I mention there are nearly 36,000 cattle in Elbert County? This is relatively typical of a rural Colorado county, particularly on the Plains.

In Colorado, 2.7 million cattle are raised annually, with a value of US$4.5 billion. Cattle are consistently the state’s top agricultural commodity, and the industry is the second-largest contributor to Colorado’s GDP, at about $7.7 billion per year.

In early March 2021, Polis declared March 22 “Colorado Livestock Proud Day,” in response to the backlash.

Other policies

The MeatOut controversy came on the heels of several earlier policies supported by Polis that critics considered anti-agriculture.

In 2020, he appointed Ellen Kessler, a vegan and animal rights activist, to the State Veterinary Board. Kessler criticized 4-H programs, designed to educate youth on agriculture and conservation, on her social media, insisting they “don’t teach children that animal lives matter.” Kessler resigned in March 2022, just days before she was cited for 13 counts of animal cruelty. More recently, in May 2025, Polis appointed Nicole Rosmarino to head the State Land Board. Rosmarino has ties to groups that oppose traditional agricultural practices, historically a key component of Colorado State Land Board operations.

People sit in a room with stuffed deer heads in the background.
Community members gather at the Colorado Parks and Wildlife hunter education building in Denver. Colorado ranchers petitioned the state’s wildlife commission to delay the next round of wolf releases in September 2024. The petition was denied.
Hyoung Chang/Getty Images

Then came wolf reintroduction, approved by a margin of just under 57,000 votes in the 2020 general election, with support concentrated among urban voters, and backed by the governor. Supporters advocated for a return to natural biodiversity; wolves had been hunted out of Colorado by the 1940s.

Rural residents voted decidedly against the initiative. Despite much legislative and grassroots action to oppose it, wolves were reintroduced in December 2023 in various areas along the Western Slope, in close proximity to many ranches. Several cattle have since been killed by wolves. Ever since, rural interests have been working to overturn wolf reintroduction on the 2026 ballot.

An American mess

Rural residents in Colorado have told me they feel excluded. This is not new or exclusive to Colorado, but a story as old as America itself.

University of Wisconsin political scientist Katherine J. Cramer wrote about this rural exclusion in Wisconsin, calling it “rural resentment.” Berkeley sociologist Arlie Russell Hochschild called it “stolen pride.” In their book, Tom Schaller, a political scientist at the University of Maryland, and Paul Waldman, a longtime journalist, characterize it as “white rural rage.”

It’s a dynamic that descends from slavery. Isabel Wilkerson, in her book “Caste: The Origins of Our Discontents,” demonstrates that while Black Americans have historically been relegated to the bottom of the hierarchy of an American caste system, poor white people are strategically positioned just above them but below white Americans of higher socioeconomic status. As Wilkerson explains, this is a durable system sustained by norms, laws and cultural expectations that feel “natural.” But they are entirely constructed and designed by the American upper class to intentionally exploit resentment of working-class white people.

The result is what sociologist Michael M. Bell calls a “spatial patriarchy” that characterizes rural America as dumb, incapable, racist, poor and degraded as “white trash.”

This spatial patriarchy is as old as industrialization and urbanization. One of the first policy iterations was rural school consolidation during the turn of the 20th century, designed to modernize schools and make them more efficient. Urban policymakers were influenced by eugenics and the assumption that rural schools “were populated by cognitively deficient children whose parents had not been smart enough or fortunate enough to leave the decaying countryside,” according to sociologist Alex DeYoung.

So, states around the country consolidated schools, the lifeblood of rural communities. Where a school closed, the town often died: in small towns, schools are not just socioeconomic hubs but centers of cultural and social cohesion.

Environmental impact

The same concept – that urban policymakers know better than rural Americans – is manifest in the modern environmental movement. As with the MeatOut, rural communities distrust environmental policies that, in their view, intentionally target a rural way of life. Rural communities take the position that they’ve been made to bear the brunt of the transformations of the global economy for generations, including those that deal with energy and the environment.

For example, environmentalists frequently call for lowering meat consumption and enacting livestock taxes to lower global greenhouse gas emissions.

But, there’s a huge, untapped potential for environmental policies that use language consistent with rural attitudes and values, such as ideas about conservation and land stewardship. Political scientists Richard H. Foster and Mark K. McBeth explain, “Rural residents perceive, probably correctly, that environmental ‘outsiders’ are perfectly willing to sacrifice local economic well-being and traditional ways of life on the altar of global environmental concerns.” They instead suggest “emphasizing saving resources for future generations” so that rural communities may continue to thrive.

The Food and Agriculture Organization of the United Nations attributes between 18% and 24% of greenhouse gas emissions to agriculture, while the Intergovernmental Panel on Climate Change places the estimate closer to 10%. However, agricultural producers point out that, while they may be responsible for that share, just 100 companies, such as BP and ExxonMobil, have produced 70% of all emissions. They argue that policies such as livestock taxes would disproportionately impact small-scale farmers and intensify rural inequality.

Rural communities have the distinct feeling that urban America doesn’t care whether they fail or flourish. Nearly 70% of rural voters supported Trump in the 2024 presidential election. He won 93% of rural counties. Rural Americans feel left behind, and for them, Trump might be their last hope.

The Conversation

Kayla Gabehart does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why rural Coloradans feel ignored − a resentment as old as America itself – https://theconversation.com/why-rural-coloradans-feel-ignored-a-resentment-as-old-as-america-itself-260894

Exactly what is in the Ivy League deals with the Trump administration – and how they compare

Source: The Conversation – USA (2) – By Brendan Cantwell, Professor of Higher, Adult, and Lifelong Education, Michigan State University

Brown University is one of the Ivy League universities that has recently made a deal with the White House to end the government’s inquiry into its treatment of Jewish students, among other practices, on campus. Kyle Mazza/Anadolu via Getty Images

The Trump administration and Harvard University are reportedly close to reaching a settlement that would require Harvard to pay US$500 million in exchange for the government releasing frozen federal funding and ending an investigation into antisemitism on campus.

This follows similar deals the White House struck with Columbia University and Brown University in July 2025. Both of those universities agreed to undertake campus reforms and pay a large sum – more than $200 million in the case of Columbia and $50 million for Brown – in order to receive federal funding that the Trump administration was withholding. The White House originally froze funding after saying that these universities had created unsafe environments for Jewish students during Palestinian rights protests on campus in 2024.

As a scholar of higher education politics, I examined the various deals the Trump administration made with some universities. When Harvard announces its deal, it will be informative to see what is different – or the same.

I believe the Columbia and Brown deals can be used as a blueprint for Trump’s plans for higher education. They show how the government wants to drive cultural reform on campus by giving the government more oversight over universities and imposing punishments for what it sees as previous wrongdoing.

Here are four key things to understand about the deals:

Two young women wearing long light blue graduation robes walk past a row of police officers outside two large buildings on a gray day.
Columbia University students walk past police on commencement day on May 21, 2025, outside the campus on Broadway in New York.
Selcuk Acar/Anadolu via Getty Images

1. Antisemitism isn’t a major feature of the agreements.

The Trump White House accused Brown and Columbia of tolerating antisemitism during campus protests. But the administration neither followed federal standards for investigating antisemitism, nor did it dictate specific reforms to protect Jewish students.

Ahead of its deal, Columbia in March 2025 adopted a new, broader definition of antisemitism that was created by the International Holocaust Remembrance Alliance. The United Nations and most European Union countries also use this definition.

Yet the school’s 22-page deal mentions antisemitism only once, where it says Columbia is required to hire an additional staff member to support Jewish students’ welfare.

Brown’s deal, meanwhile, did not involve the university adopting a particular definition of antisemitism. But Brown did commit to offering “research and education about Israel, and a robust Program in Judaic Studies.” Brown already hosts a Judaic Studies program, and it is unclear from the agreement’s text what additional measures are required.

The deals also extend well beyond antisemitism concerns and into questions of gender and the composition of student bodies.

Columbia agreed to provide “single-sex” housing and sports facilities, for example. The university has an optional Open Housing program that allows mixed-gender roommates and several gender-neutral restrooms.

This places the school in line with Donald Trump’s January executive order that says a person’s gender is based on their sex as assigned at birth.

Brown’s deal also requires single-sex sports and housing facilities. In addition, Brown committed to using definitions of men and women that match Trump’s executive order.

Columbia, which enrolls about 40% of its students from other countries, also agreed to “decrease financial dependence on international student enrollment.”

The Brown deal says nothing about international education.

2. Both deals are expensive but vague about financial details.

Columbia must pay a fine of more than $200 million to the federal government, while Brown will make $50 million in donations to Rhode Island workforce development programs.

In both cases, it is not clear where the money will go or how it will be used.

Congress passed the Clery Act in 1990, creating a legal framework for fining campuses that fail to protect students’ safety.

Since then, the government has reached different settlements with universities.

Liberty University, in Lynchburg, Virginia, was required to pay the federal government $14 million in 2024, for example, for failing to investigate sexual assault allegations.

But Columbia’s payment is far larger than any previous settlement between a university and the government. Columbia will make three payments of about $66 million to the Treasury Department over three years, according to The Chronicle of Higher Education. But it isn’t clear exactly how the money will be spent or what will happen after those three years, the Chronicle reported in August 2025.

Only Congress can legally decide how to spend Treasury Department funds. But Trump has ignored Congress’ appropriation directives on a number of occasions.

Brown, meanwhile, will not pay the government anything. Instead, its $50 million will go “to state workforce development organizations operating in compliance with anti-discrimination laws, over the ten years.”

The Brown deal doesn’t specify which organizations qualify as workforce development organizations.

3. Trump wants to influence university admissions.

While the Brown and Columbia deals have several differences, the agreements have nearly identical language giving the Trump administration oversight of the way they admit students.

The deals say that the universities must provide the government with detailed information about who applied to the schools and was admitted, broken down by grades and test scores, as well as race and ethnicity. The government could then conduct a “comprehensive audit” of the schools, based on this information.

This information could also be used to determine whether universities are showing a preference for students of color. Without providing evidence, conservative activists have alleged that selective colleges discriminate against white people and that this is a violation of the Civil Rights Act of 1964.

Experts have said that these reporting requirements appear to be intended to increase the number of white students admitted to Ivy League schools.

An older white man with a beard, flanked by two men in suits, bumps fists with a young person in a crowd.
Harvard President Alan Garber greets graduating students at Harvard’s commencement on May 29, 2025, in Cambridge, Mass.
Rick Friedman/AFP via Getty Images

4. The deals could open more doors to federal intrusion.

Claire Shipman, Columbia’s acting president, said in July that the deal would allow the university’s “research partnership with the federal government to get back on track.”

Christina Paxson, Brown’s president, also defended the agreement in a statement, writing that it “enables us as a community to move forward after a period of considerable uncertainty in a way that ensures Brown will continue to be the Brown that our students, faculty, staff, alumni, parents and friends have known for generations.”

But the deals could invite more scrutiny from the federal government.

Both deals spell out the government’s right to open new investigations against Brown and Columbia, or to reopen old complaints if the administration is not satisfied with how the universities are implementing the agreement.

Trump is now pressuring Harvard, UCLA and other universities to strike deals, also based on similar antisemitism allegations.

The White House announced on Aug. 8 that it could seize the research patents, worth hundreds of millions of dollars, that Harvard holds. Since 1980, universities have been able to legally hold, and profit from, patents resulting from federally funded research.

The federal government has long influenced higher education through funding and regulation. But never before has it tried to dictate what happens on campus.

Higher education experts like me believe that political goals now drive the way the government approaches higher education. Some of Trump’s conservative allies are now urging the president to go even further, saying “we have every right to renegotiate the terms of the compact with the universities.”

Given these and other pressure tactics, academics who study the law and government warn that the university deals indicate encroaching authoritarianism.

The Conversation

Brendan Cantwell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Exactly what is in the Ivy League deals with the Trump administration – and how they compare – https://theconversation.com/exactly-what-is-in-the-ivy-league-deals-with-the-trump-administration-and-how-they-compare-262912