Emergency alerts may not reach those who need them most in Colorado

Source: The Conversation – USA (2) – By Carson MacPherson-Krutsky, Research Associate, Natural Hazards Center, University of Colorado Boulder

A firefighter watches as the NCAR Fire burns on March 26, 2022, in Boulder, Colo. Michael Ciaglo via Getty Images

Many Coloradans may never get an alert that could save their life during a disaster.

And the alerts that go out may not easily be understood by the people who do get them.

We are social scientists who study emergency alerts and warnings, the challenges that exist in getting emergency information to the public, and ways to fix these issues.

Research two of us – Carson MacPherson-Krutsky and Mary Painter – did with researcher Melissa Villarreal shows that only 4 in 10 Colorado residents have opted in to receive local emergency alerts. And many alerts may not be written with complete information, translated into the languages residents speak, or put into formats accessible to people with vision or hearing loss. This means some of our most vulnerable neighbors could miss crucial information during a crisis.

A decentralized alert system

Alerts are complex. They can come from a variety of official sources, including 911 centers, weather forecast centers and others. Alerts can also come in many forms, ranging from emails and texts to sirens and radio broadcasts.

Our study, mandated and funded by Colorado House Bill 23-1237, focused on understanding alert systems in Colorado after the Grizzly Creek Fire in 2020 and the Marshall Fire in 2021.

Smoke billows from a rocky and mountainous forest near an empty highway.
The Grizzly Creek Fire burns down hillsides along I-70 in Glenwood Canyon on Aug. 17, 2020, near Glenwood Springs, Colo.
Helen H. Richardson/MediaNews Group/The Denver Post via Getty Images

These fires were destructive and highlighted issues with emergency alerting. Alerts about the fires and calls to evacuate were delayed and inconsistently received. Most were available only in English, despite census data showing that 1 in 10 residents of Eagle and Garfield counties speak Spanish at home and speak English less than “very well.”

The resulting legislation focused on how to make emergency alerts in Colorado accessible to all, especially people with disabilities and those with limited English proficiency.

As social scientists who study disasters, we know that hazards, like earthquakes and wildfires, reveal inequities and that certain groups fare worse and take longer to recover. People with disabilities have higher rates of death from disasters. This is not because these populations are inherently less able to respond, but because emergency planning and systems may not account for their specific needs.

Our Colorado study used interviews and a statewide survey of 222 officials who send alerts to better understand the challenges of providing alerts across the state and reaching at-risk populations.

A patchwork system

The state of Colorado does not have a uniform alert system. Local areas determine the alert systems they will use.

Some alerts get sent through systems that require people to opt in. This means that people sign up and choose to receive notifications. Neighboring counties often use different opt-in alert systems, meaning individuals who travel to different counties for work or recreation may need to register for multiple systems. Examples of these systems include Everbridge, used by Boulder County, and CodeRed, used by Adams and Park counties.

A boy stands on top of a car, peering through binoculars, as orange smoke billows in the background.
Amitai Beh, 6, watches the NCAR Fire on March 26, 2022, in Boulder, Colo.
Michael Ciaglo/Stringer via Getty Images

The success of these systems in an emergency relies on the community signing up for alerts.

We found that registering for alert systems was a barrier for everyone, but especially for people with limited English proficiency and people with disabilities. They may not be aware of the systems available to them, they may be wary of providing personal information, and, depending on where they live, alerts may be offered only in English.

Most Colorado counties either have Integrated Public Alert and Warning System (IPAWS) approval or are in the process of getting it. Some counties on the Eastern Plains, like Otero and Kiowa counties, have not started the process.
The current status of Integrated Public Alert and Warning System alerting entities across Colorado. Green means there’s an approved alerting authority, yellow indicates the region is in the process of becoming an alerting authority, and gray means the area has not begun the process.
Colorado Division of Homeland Security & Emergency Management, CC BY-ND

Another type of system is “opt out,” meaning people receive alerts by default unless they turn them off. These include Wireless Emergency Alerts, or WEAs – messages broadcast through cellphone towers to phones in a specific geographic area. If you have a cellphone within a WEA alert boundary, you will get the alert. WEAs are used in Colorado to target specific regions in danger, such as an area that needs to evacuate, or to issue Amber Alerts.

There is no national standard or guidance for opt-in or opt-out systems, which can lead to inconsistencies in how people get alerts.

Lack of resources limits alerting authorities

We found that though authorities often want to provide alerts in other languages and accessible formats, they have significant resource constraints. Time, staff, money or training can all limit the level of accessibility they can provide.

Sixty-four percent of the authorities we surveyed said they lacked funding to make alerts more inclusive.

More than a third of our respondents didn’t know if their systems could provide alerts in languages other than English or for people with disabilities. This speaks to a need for better training on how these systems work and how to use them effectively.

An alert is complete if it includes information about the source, hazard, location and time. Recently, researchers found that fewer than 10% of all nationwide Wireless Emergency Alerts issued from 2012 to 2022 were complete.
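This four-element completeness test is simple to express in code. The sketch below is purely illustrative: the field names and dictionary format are our shorthand, not an official alerting schema.

```python
# Hypothetical sketch of the four-element completeness test. The field
# names are illustrative shorthand, not an official alerting standard.
REQUIRED_ELEMENTS = ("source", "hazard", "location", "time")

def is_complete(alert: dict) -> bool:
    """An alert is complete only if every element is present and non-empty."""
    return all(alert.get(key) for key in REQUIRED_ELEMENTS)

# A message naming the hazard and location, but not who issued it or when
# it applies, would count as incomplete.
print(is_complete({"source": "NWS", "hazard": "flash flood",
                   "location": "Kerr County", "time": "until 9 p.m."}))  # True
print(is_complete({"hazard": "wildfire", "location": "Boulder County"}))  # False
```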

One of us – Micki Olson – worked with the federal government to develop the Message Design Dashboard to help alerting authorities craft clear and comprehensive emergency messages.

Fifty-six of Colorado’s 64 counties are Integrated Public Alert and Warning System authorities, which means they can send alerts across multiple platforms at once. This can improve alert access by broadening who alerts reach.

Not all counties have this option, and even those that do don’t always use it. In our study, authorities noted that limited staff capacity, funds and time prevent them from getting or using the IPAWS system.

“We simply do not have the resources, both financial and people, to deploy all of these systems,” a survey respondent from Gunnison County said.

Alert systems were not built to be accessible

The final issue we identified is that alert systems were not developed with accessibility in mind, lacking functionality such as video or image options. For example, people who are blind or have low vision won’t have access to a message unless they enable text-to-speech features on their phone in advance.

The WEA system only allows alerts to be sent in English or Spanish. Characters like accents and tildes cannot be included. Expansion of language options was planned but is now on hold for unclear reasons. Some counties have the resources to make alerts available in additional languages, but most do not.

Almost 900,000 Coloradans speak a language other than English. According to the Migration Policy Institute, more than 230,000 Coloradans have difficulty comprehending and communicating in English.

Where do we go from here?

Recent events, including the Palisades and Eaton fires in California and the devastating floods in Kerr County, Texas, demonstrate how critical it is that timely and accessible emergency alerts reach everyone, but especially the most vulnerable individuals.

However, these systems are complex, and everyone from individuals to local government can play a part in improving them.

  • Federal and local governments can allocate funds to update and standardize systems. They can also implement training and procedures to ensure alerts are effective and inclusive.

  • Authorities that send alerts can partner more closely with trusted community organizations and networks to reach diverse audiences.

  • Researchers can identify how to better tailor systems to meet community needs.

  • Individuals can learn about and sign up for alerts. To do so, visit local government websites or enter “emergency alerts” and the name of your county or city in an online search.

The Conversation

Carson MacPherson-Krutsky works for the Natural Hazards Center at the University of Colorado Boulder. Through the Center, she receives funding from the State of Colorado, NSF, USACE, USGS and others.

Mary Angelica Painter works for the Natural Hazards Center at the University of Colorado Boulder. Through the Center, she receives funding from agencies including NSF, USACE, USGS and others.

Micki Olson has received funding from FEMA and NOAA.

ref. Emergency alerts may not reach those who need them most in Colorado – https://theconversation.com/emergency-alerts-may-not-reach-those-who-need-them-most-in-colorado-262308

How a corpse plant makes its terrible smell − it has a strategy, and its female flowers do most of the work

Source: The Conversation – USA (2) – By Delphine Farmer, Professor of Chemistry, Colorado State University

The corpse plant’s bloom appears huge, but its flowers are actually tiny and found in rows inside its floral chamber. John Eisele/Colorado State University

Sometimes, doing research stinks. Quite literally.

Corpse plants are rare, and seeing one bloom is even rarer. They open once every seven to 10 years, and the blooms last just two nights. But those blooms – red, gorgeous and massive at over 10 feet (3 meters) tall – stink. Think rotting flesh or decaying fish.

Corpse plants definitely earn their nickname. Their pungent odors attract not only the carrion insects – beetles and flies normally drawn to decomposing meat – that pollinate the plants, but also crowds of onlookers curious about the rare, elaborate display and that putrid scent.

Plant biologists have studied corpse flowers for years, but as atmospheric chemists we were curious about something specific: the mixes of chemicals that create that smell and how they change during the flower’s short bloom.

While previous studies had identified dozens of volatile organic and sulfur compounds that contribute to corpse flower scents, no one had yet quantified those emission rates or looked at how the rates changed throughout a single evening. We recently got that opportunity. What we found opened a new window into the complexity and strategic behavior of a very unusual flower.

Time-lapse video of a corpse plant’s bloom in 2024 at Colorado State University. More than 8,600 visitors saw the bloom.

Meet Cosmo the corpse plant

Corpse plants are native to the Indonesian island of Sumatra but are considered endangered, even there. Several years ago, Colorado State University was given a corpse plant, or Titan arum (Amorphophallus titanum), to study. Its name is Cosmo – Titan arums are rare enough that they get names.

Cosmo sat dormant in the CSU plant growth facility for several years before showing signs that it was about to flower in spring 2024. When news came that Cosmo was going to bloom, we jumped at the opportunity to bring our atmospheric chemistry expertise into the greenhouse.

A woman reaches inside a giant open flower with a tall stalk.
During Cosmo’s bloom, Colorado State University Plant Growth Facilities Manager Tammy Brenner points toward the inside of the spathe, the large outer sheath that opens. Just below her hand is the floral chamber, where rows of tiny female and male flowers were blooming. They aren’t easy to see from outside, but the smell is impossible to miss.
John Eisele/Colorado State University

We deployed a series of devices for collecting air samples before, during and after the bloom. Then we measured chemicals in the air samples using a gas chromatography mass spectrometer, an instrument that is mentioned frequently on crime TV shows. Colleagues also brought a time-of-flight mass spectrometer that we placed downwind of Cosmo to measure volatile organic compounds every second.

To make each rare bloom count, corpse plants put vast amounts of energy into the show, producing large flowers that can weigh more than 100 pounds. The plants heat themselves up using a biochemical process known as thermogenesis that enhances emissions of organic compounds that attract insects.

Corpse plant emissions are notorious. While some local communities revered the plants, others would try to destroy them. In the 19th century, European explorers actively collected Titan arum plants and distributed them throughout botanical gardens and conservatories around the world.

Corpse plants are dichogamous: Each plant bears both male and female flowers, but they mature at different times. Inside the giant petal-like leaf called a spathe, each plant has a central spike called a spadix, ringed near its base with many rows of tiny female and male flowers. These flowers bloom at separate times to avoid self-pollination, and they are the source of the smell.

A close up picture of the tiny flowers of the corpse plant.
Each corpse plant has both male (yellow) and female (red) flowers. The female flowers bloom first to attract pollinators coming from other corpse flowers. The next day, the male flowers bloom, providing pollen for flies and beetles to carry away to the next plant. The thin, yellow strands are pollen.
John Eisele/Colorado State University

On the first night of a corpse plant bloom, the female flowers bloom to attract pollinators that, if they’re lucky, are carrying pollen from another corpse plant.

Then, on the second night, the male flowers bloom, allowing pollinating insects to carry pollen off to another corpse plant.

The rare blooms and avoidance of self-pollination highlight not only why these plants are listed as endangered but also their need for efficient pollination strategies – exactly the chemistry we wanted to investigate.

Female flowers work harder

We found that the female flowers do most of the work attracting pollinators, as previous studies noted. They emit vast amounts of organic sulfur, plus some other compounds that mimic rotting smells to attract beetles and flies that normally feed on animal carcasses.

It’s this organic sulfur that really stinks from the female bloom: Methanethiol, a molecule in the same chemical family of compounds as the odors emitted by skunks, was the single most-emitted compound during Cosmo’s bloom.

An illustration of a corpse flower and some key aspects
Some of the keys to a corpse flower’s successful bloom. Floral biogenic volatile organic compounds (fBVOCs) are those released by flowers.
Mj Riches and Rose Rossell/Colorado State University

We also measured many other organic sulfur compounds, including dimethyl disulfide, which has a garlic smell; dimethyl sulfide, known for an unpleasant scent; and dimethyl trisulfide, which smells like rotting cabbage or onions. We also measured dozens of other compounds: sweet-smelling benzyl alcohol, asphalt-scented phenol and fragrant benzaldehyde.

While previous studies found some of the same compounds with different instruments, we were able to measure the methanethiol and quantify the concentrations quickly enough to track the progress of the bloom overnight.

As Cosmo bloomed, we combined our instrument data with measurements of the air change rate in the greenhouse – how quickly air moves through the space – and were able to calculate the emission rates.

The volatile emissions added up to about 0.4% of the plant’s average biomass, meaning the plant, which we estimated to weigh about 100 pounds, lost a measurable amount of weight while producing those chemicals. That’s a lot of stench.
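The arithmetic behind these figures can be sketched in a few lines. Only the roughly 100-pound plant weight and the 0.4% biomass figure come from our measurements; the greenhouse ventilation numbers below are invented placeholders, since the measured values aren’t given here.

```python
# Back-of-the-envelope version of the calculation. Only the ~100 lb plant
# weight and the ~0.4% biomass figure come from the text; the greenhouse
# values below are invented placeholders for illustration.
LB_TO_KG = 0.4536

# How much mass left the plant as volatile chemicals?
plant_mass_lb = 100
fraction_emitted = 0.004                      # ~0.4% of biomass
mass_lost_kg = plant_mass_lb * LB_TO_KG * fraction_emitted
print(f"Mass emitted as volatiles: ~{mass_lost_kg * 1000:.0f} g")   # ~181 g

# Emission rate for a well-mixed greenhouse: concentration enhancement
# times ventilation flow (air change rate x volume).
greenhouse_volume_m3 = 500                    # assumed greenhouse volume
air_changes_per_hour = 4                      # assumed air exchange rate
conc_enhancement_mg_m3 = 0.5                  # assumed methanethiol enhancement
emission_mg_per_h = conc_enhancement_mg_m3 * air_changes_per_hour * greenhouse_volume_m3
print(f"Implied emission rate: {emission_mg_per_h:.0f} mg per hour")
```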

Floral trapping

The female flowers bloomed all night, but early the next morning the emissions rapidly stopped. We wondered whether this cutoff point just might be evidence of floral trapping: a pollination strategy employed by other members of Titan arum’s family.

Four images show the outside, a cutaway and the interior of the plant.
A cutaway of another species of Amorphophallus, Amorphophallus declinatus, shows how the male and female flowers surround the spadix inside the spathe.
Cyrille Claudel, The Plant Journal, 2023, CC BY-NC-ND

During floral trapping, the floral chamber can physically close through movement of hairs or expansion of parts of the plant, such as the surrounding spathe. A physical closure of the floral chamber would not be easily visible to bystanders, but it could rapidly cut off the emissions the way we observed.

An Australian arum lily that smells like dung uses this technique. The carrion insects that come for the female flowers are forced to stay for the male flowers that bloom the next night, so they can carry off that pollen to find another female corpse bloom. Our evidence suggests that the corpse flower probably does too.

The second night, the emissions started back up – at much lower levels. The male flowers emit a sweeter set of aromatic compounds and far less sulfur than the females.

A chart tracks four chemicals rising quickly and powerfully during the female bloom, then rising much more subtly during the male bloom.
How four of the main chemicals released from the corpse flower rose, fell and rose again during its two-day bloom. The numbers on the left measure methanethiol; those on the right measure the three sulfides. Arrows on the left show comparison levels of methanethiol measured over landfills, waste sites and a paper mill to showcase just how stinky the bloom was.
Rose Rossell/Colorado State University

We hypothesize that the male flowers don’t need to work as hard to smell pungent in order to attract as many insects because insects are already there due to floral trapping. A 2023 study found that thermogenesis was also weaker during the male bloom: the spadix reached 96.8 degrees Fahrenheit (36 C) during the female bloom, but only 92 F (33.2 C) during the male bloom.

Stinkier than a landfill

We found that the corpse plant’s emission rates can be an order of magnitude stronger than those of landfills – albeit only for two nights. These strong emissions are well suited to traveling far through the Sumatran jungle to attract carrion flies.

The odors are also resilient to atmospheric oxidation – the way organic compounds degrade in the atmosphere by reacting with oxidants in pollution such as ozone or nitrate radicals. Different compounds degrade at different rates – an important factor for attracting pollinators.

Many insects are attracted not by just one volatile compound but by specific ratios of different volatile compounds. When atmospheric pollution degrades floral emissions and these ratios change, pollinators have a harder time finding flowers.

The female floral plume maintained a reasonably constant ratio of the major sulfur chemicals. The male plume, however, was far more susceptible to degrading in pollution and changing floral ratios in nighttime air.
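One way to see why differing degradation rates matter: if two compounds leave the flower in a fixed ratio but decay at different first-order rates in polluted air, the ratio an insect encounters drifts over time. The rate constants below are invented for illustration, not measured values.

```python
import math

# Illustrative first-order decay of two compounds emitted in a fixed ratio.
# The rate constants here are invented for illustration, not measured values.
def ratio_after(t_hours, k_a=0.05, k_b=0.50, initial_ratio=3.0):
    """Ratio of compound A to compound B after t hours of atmospheric decay."""
    return initial_ratio * math.exp((k_b - k_a) * t_hours)

print(round(ratio_after(0), 2))   # 3.0: the ratio as emitted
print(round(ratio_after(4), 2))   # 18.15: B decays faster, so the ratio balloons
```

A pollinator keyed to the original blend would struggle to recognize the aged plume, which is why a plume whose ratios stay stable is more robust.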

These enigmatic plants put a lot of energy into clever pollination strategies. Cosmo taught us about their far-reaching scents of rotting meat, thermogenesis to increase emissions and floral entrapment, offering new insight into the corpse plant’s spectacular bloom.


Delphine Farmer receives funding from the Alfred P. Sloan Foundation, the National Science Foundation, the National Oceanic and Atmospheric Administration, the Department of Energy, and the W.M. Keck Foundation.

Mj Riches receives funding from the National Science Foundation.

Rose Rossell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How a corpse plant makes its terrible smell − it has a strategy, and its female flowers do most of the work – https://theconversation.com/how-a-corpse-plant-makes-its-terrible-smell-it-has-a-strategy-and-its-female-flowers-do-most-of-the-work-263409

5 ways students can think about learning so that they can learn more − and how their teachers can help

Source: The Conversation – USA (2) – By Jerrid Kruse, Professor of Science Education, Drake University

Learning is more than just memorization. FG Trade/E+ via Getty Images

During my years teaching science in middle school, high school and college, some of my students have resisted teaching that educators call higher-order thinking. This includes analysis, creative and critical thinking, and problem-solving.

For example, when I asked them to draw conclusions from data or generate a process for testing an idea, some students replied, “Why don’t you tell us what to do?” or “Isn’t it the teacher’s job to tell us the right answers?”

In other words, my students had developed a strong preconceived notion that knowledge comes from authority. After investigating, my colleagues and I concluded that these beliefs about learning were influencing how they approached our lessons – and thus what they were able to learn.

All students come to class with a range of beliefs about what it means to learn. In the field of education, perhaps the most sought-after belief is what we call having a growth mindset. Students with a growth mindset believe they can improve and continue to learn. In contrast, students with a fixed mindset struggle to believe they can become more knowledgeable about the topic they’re studying. When students say, “I’m bad at math,” they exhibit a fixed mindset.

As teachers, we not only try to help students understand the topic at hand but also aim to instill accurate beliefs about learning so nothing interferes with their ability to take in new information.

Other than the growth mindset, I argue that five other beliefs are particularly important to promote in classrooms to help students become better learners and more prepared for the modern world.

Learning is understanding

Some students and teachers equate learning to memorizing.

While memorization has a role in learning, deep learning is about understanding. Students will be well served by recognizing that learning is about explaining and connecting concepts to make meaning.

Too much focus on memorizing can hide gaps in learning.

For example, I was once working with a preschool student when they proudly demonstrated their ability to recite the numbers 1 through 20. I then asked the student to count the pencils on the desk. The student did not understand my request. They had not connected these new words to the number concept.

To help students recognize the importance of understanding for learning, teachers and parents might engage students in questions such as, “Why is connecting a new idea to an old idea better than just trying to memorize the answer?” or “Why is an explanation more useful than just an answer?”

a young girl sitting at a desk buries her forehead in a textbook
Learning is hard.
demaerre/iStock via Getty Images

Learning is complex and requires challenge

Students’ belief that learning is akin to memorization may reflect a related belief that knowledge is simple and learning should be easy.

Instead, educators want students to embrace complexity and its challenges. Through wrestling with nuance and complexity, students engage in the mental effort required to form and reinforce new connections in their thinking.

When students believe knowledge is simple and learning should be easy, their engagement in higher-order thinking, which is required to embrace complexity and nuance, suffers.

To help students who are struggling grasp a complex idea, teachers and parents might ask questions that help students see why learning is complex and requires challenge.

Learning takes time

When students believe learning is simple and easy, educators should not be surprised they think learning should be fast as well.

Instead, students ought to understand that deep learning takes time. If students believe learning is quick, they are less likely to seek challenge, explore nuance or reflect and make connections among ideas. Unfortunately, many curricula pack so much intended learning into a short amount of time that beliefs in quick learning are subtly reinforced.

While teachers can get creative with curricular materials — and spend more time challenging students to explore complexity and make connections — just spending more time on a concept may not be enough to shift a student’s beliefs about learning.

To help students shift their thinking about the speed of learning, I ask them to discuss questions such as, “Why do you think understanding complex concepts takes so much time?” or “Why would only covering this concept for one lesson not be enough?” With these questions, my colleagues and I have found students start to recognize that deep learning is slow and takes time.

Learning is ongoing

Students should also recognize that learning doesn’t end.

Unfortunately, many students believe learning to be a destination rather than an ongoing process. Yet, because knowledge contains an inherent level of uncertainty, and increased learning often reveals increased complexity, learning must be continuous.

To help students reflect on this belief, teachers and parents might ask their students, “How do you think your knowledge has changed over time?” and “How do you think your learning will change in the future?”

a white man stands facing away from the camera toward students at row desks
Learning doesn’t come only from teachers at the front of a class.
Drazen Zigic/iStock via Getty Images

Learning is not only from teachers

I remember one high school student telling me that “teachers are supposed to tell us the answers, so we know what to put on the test.”

This student had apparently figured out the “rules of the game” and was not happy when their teacher was trying to engage them in higher-order thinking. This student was holding onto a transmission model of learning in which learning comes from authority figures.

Instead, students should recognize that learning comes from many sources, including their experiences, their peers and their own thinking, as well as from authority figures.

While teachers and parents may hesitate to undermine their own authority, they do students a disservice when they do not prepare them to question and go beyond authority figures.

To help students shift their thinking, teachers might ask students to consider, “Why might learning from multiple sources help you better understand the complexity and nuance of a concept?”

Building better beliefs about learning

Often, teachers and parents believe opportunities to engage in higher-order thinking are enough to help their students develop better beliefs about learning.

But such beliefs require explicit attention and must be planned for in lessons. This is done by asking reflective questions that target specific beliefs, such as the questions noted in the final sentence of each of the previous sections.

In my experience, the conversations I’ve had with students using the questions noted above are highly engaging. Moreover, helping kids develop more robust beliefs about learning just might be the most important thing teachers can do to prepare students for the future.


Jerrid Kruse receives funding from the National Science Foundation, the NASA Iowa Space Grant Consortium, and the William G. Stowe Foundation.

ref. 5 ways students can think about learning so that they can learn more − and how their teachers can help – https://theconversation.com/5-ways-students-can-think-about-learning-so-that-they-can-learn-more-and-how-their-teachers-can-help-244619

Molecular ‘fossils’ offer microscopic clues to the origins of life – but they take care to interpret

Source: The Conversation – USA – By Caroline Lynn Kamerlin, Professor of Chemistry and Biochemistry, Georgia Institute of Technology

ATP synthase is an enzyme that has been using phosphate to generate life’s energy for millions of years. Nanoclustering/Science Photo Library via Getty Images

The questions of how humankind came to be, and whether we are alone in the universe, have captured imaginations for millennia. But to answer these questions, scientists must first understand life itself and how it could have arisen.

In our work as evolutionary biochemists and protein historians, these core questions form the foundation of our research programs. To study life’s history billions of years ago, we often use clues called molecular “fossils” – ancient structures shared by all living organisms.

Recently, we discovered that an important molecular fossil found in an ancient protein family may not be what it seems. The dilemma centers, in part, on a simple question: What does it mean if a simple molecular structure – the fossil – is found in every single organism on Earth? Do molecular fossils point to the seeds that gave rise to modern biological complexity, or are they simply the stubborn pieces that have resisted erosion over time? The answers have far-reaching implications for how scientists understand the origins of biology.

Follow the phosphorus to follow life

Life is made of many different building blocks, one of the most important of which is the chemical element phosphorus. Phosphorus makes up part of your genetic material, powers complex metabolic reactions and acts as a molecular switch to control enzymes.

Phosphorus compounds – specifically a charged form called phosphate – have a number of unique chemical properties that other biological compounds cannot match. In the words of the pioneering organic chemist F.H. Westheimer, they are chemically able to “do almost everything.”

Their unique combination of stability, versatility and adaptability is why many researchers argue that following phosphorus is key to finding life. The presence of phosphorus both close to home – in the ocean or on one of Saturn’s moons – and in the farthest reaches of our galaxy is strong evidence for the potential for life beyond Earth.

Chemical structure of a nucleotide, made of a phosphate, ribose sugar and base
Phosphate is part of many essential biological molecules, including the building blocks of DNA.
Charles Molnar and Jane Gair, CC BY-SA

If phosphorus is so critical to life, how did early biology predating cells first use it?

Today, biological organisms are able to make use of phosphates through proteins – molecular machines that regulate all aspects of life. By binding to proteins, phosphates regulate metabolism and cellular communication, and they serve as a source of cellular energy.

Further, the process of phosphorylation, or adding a phosphate group to a protein, is ubiquitous in biology and allows proteins to perform functions their individual building blocks cannot. Without proteins, the existence of organisms such as bacteria and humans may not be possible.

Given how essential phosphorus is to life, scientists hypothesize that phosphate binding was among the first biological functions to emerge on Earth. In fact, current evidence suggests that the first phosphate-binding proteins are truly ancient – even older than the last universal common ancestor, the hypothetical mother cell to all life on Earth that existed around 4 billion years ago.

A mysterious phosphate-binding fossil

One family of phosphate-binding proteins, called P-loop NTPases, regulates everything from communication between cells to the storage of energy and is found across the tree of life. Because P-loop NTPases are among the most ancient protein families, analyzing their properties can provide key insights into both the emergence of proteins and how primitive life used phosphates.

Although P-loop NTPases are diverse in structure, they share a common motif called a P-loop. This component binds to phosphate by wrapping a nest of amino acids – the building blocks that make up proteins – around the molecule. Every known organism has multiple families of P-loop NTPases, which makes the P-loop an excellent example of a molecular fossil that can provide clues about the evolution of life. Our own crude analysis of the human genome suggests that humans carry about 5,000 copies of P-loops.

When part of a larger protein structure, the P-loop folds like origami into a shape that is ideal for hugging a phosphate molecule. These nests are extremely similar to each other, even when the surrounding proteins are only distantly related in function. A landmark study in 2012 argued that even if the P-loop nest is extracted from a protein, it can still bind to phosphate. In other words, the ability of a P-loop to form a nest is determined by its interactions with phosphate, not its protein scaffold.

This study provided the first evidence that some forms of the P-loop sequence could have functioned billions of years ago, even before the emergence of large, complex proteins. If true, this implies that P-loop nests may have seeded the emergence and evolution of many of the phosphate-binding proteins seen today.

Interrogating the history of the P-loop

The pioneer of bioinformatics, Margaret Oakley Dayhoff, hypothesized in 1966 that the large collection of big proteins seen today arose from small peptides that were duplicated and fused over long periods of time. Although P-loops may have evolved in a different way, Dayhoff’s realization was the first to clarify how complex forms could have arisen from much simpler ones.

Inspired by Dayhoff’s hypothesis, we sought to interrogate the role that simple P-loops may have played in the evolution of the complex proteins key to life. Our findings challenge what’s currently known about these molecular fossils.

Diagram showing the evolution of amino acids to oligopeptides to complex proteins
The Dayhoff hypothesis proposed that large, complex proteins arose from the duplication and merging of smaller, simpler peptides over time.
Merski et al./Biomolecules, CC BY-SA

Using computer models, we compared a range of P-loops from the P-loop NTPase family to a control group made of the same amino acids but in a different order. While these control loops are also found in proteins, they do not form nests.

Although the P-loops and the control loops are very different in their nest-forming ability, we found that they both are able to form transient nests when embedded in proteins. This meant that, contrary to popular belief, the amino acid sequences of P-loops aren't special in their ability to form nests – as would be expected if they alone were the seeds for many modern proteins.

A fossil eroded over time

Our work strongly suggests that while the P-loop is a molecular fossil, the true nature of its form billions of years ago may have been eroded by the sands of time.

For example, when we repeated our simulations in a different solvent – specifically methanol – we found that P-loops situated in their parent proteins were able to regain some of their ability to form nests. This doesn’t mean that being in methanol drove the first proteins with P-loops to form the nests critical for life. But it does emphasize the importance of considering the surrounding environment when studying peptides and proteins.

Just as archaeologists know to be careful in how they interpret physical fossils, historians of protein evolution could take similar care in their interpretation of molecular fossils. Our results complicate the current understanding of early protein evolution and, consequently, some aspects of the origins of life.

In resetting the field’s broader understanding of how these crucial proteins emerged, scientists are poised to start rewriting our own evolutionary history on this planet.

The Conversation

Caroline Lynn Kamerlin receives funding from the NASA Exobiology program.

Liam Longo receives funding from the NASA Exobiology program.

ref. Molecular ‘fossils’ offer microscopic clues to the origins of life – but they take care to interpret – https://theconversation.com/molecular-fossils-offer-microscopic-clues-to-the-origins-of-life-but-they-take-care-to-interpret-259271

Identifying as a ‘STEM person’ makes you more likely to pursue a STEM job – and caregivers may unknowingly shape kids’ self-identity

Source: The Conversation – USA – By Remy Dou, Associate Professor of Teaching and Learning, University of Miami

Kids seem to get a message that STEM jobs aren’t compatible with being a primary caregiver. kali9/E+ via Getty Images

Employers in science, technology, engineering and mathematics – commonly called the STEM industries – continue to struggle to attract female applicants. In its 2024 jobs report, the National Science Board found that men outnumber women almost 3-to-1 in STEM jobs that require at least a bachelor’s degree and over 8-to-1 in STEM jobs that don’t, such as electrical, plumbing or construction work.

Despite women being just as academically prepared for many STEM roles as men, if not more so, and the fact that STEM jobs offer higher salaries and greater job security than non-STEM jobs, men continue to dominate this section of the workforce.

I am a social scientist who studies the relationship between education, identity and science, and since 2019, I’ve led the Talking Science research and development group. One question we’ve sought to answer is why employers continue to struggle recruiting talented women to the STEM workforce.

Our team recently carried out a study where we discovered that how caregivers, especially mothers, talk about STEM topics may significantly shape their children’s interest in STEM careers.

Are you a math person?

As a researcher, whenever I give a public talk I like to ask the audience, “Who here is not a math person?” Without fail, several hands shoot up faster than if I had asked, “Who wants free money?”

It turns out that most people are well aware of their own relationship to STEM fields and may see themselves as a math, science or “STEM” person, or, commonly, not a STEM person. Researchers like me call this kind of self-identification a “STEM identity,” and almost everyone has one. Although any given person can have a very high STEM identity or a very low one, most individuals fall somewhere in between.

Having a high STEM identity strongly predicts whether a student will choose to pursue a career in STEM. Research shows that if children don’t develop a high STEM identity by eighth grade, they are unlikely to ever pursue a STEM career.

This finding raises the question: What childhood experiences shape children’s STEM identities?

Individuals come to identify with different groups by recognizing characteristics they share with members of those groups. In many cases, people learn about the characteristics of a group through direct experience. For example, elementary-age children often see teaching as a female occupation when they encounter mostly female teachers at their school. Most children, however, never spend enough time with a scientist to form a stereotype directly.

Children learn most of what they know about STEM professionals indirectly through depictions of scientists in their social environment. Once children have formed a stereotype in their minds, they then compare themselves to these stereotypes to determine whether they are, or could be, a STEM person.

In the United States, five decades of the “draw-a-scientist” studies reveal that children asked to depict scientists overwhelmingly draw them as male – illustrating a persistent stereotype linking science and masculinity. While a growing body of research shows that in recent years gender-based stereotypes of STEM workers have decreased significantly, STEM workforce employment patterns contradict this finding.

A missing explanation?

Since social stereotypes about scientists are becoming less gender-biased, our team realized that something else must be causing children to carry male-biased views of STEM into young adulthood. The Talking Science team believed that understanding why some women see themselves as STEM people and want to obtain STEM jobs held the key to understanding the gap between decreasing social stigma and the persistent lack of women in STEM.

To understand this phenomenon more deeply, our team interviewed 20 college students, 13 of whom identified as female. We intentionally selected these students because of their positive STEM identities and enrollment in college STEM programs.

During 60-to-90-minute interviews, we asked participants to list the various people who positively or negatively shaped their academic and professional interests. We then asked students to label each of them as either a “STEM person,” “not a STEM person” or somewhere in between. Finally, we invited each student to explain why they assigned each label.

The students mentioned 102 individuals – including parents, aunts, siblings, friends and teachers – as influential in shaping their STEM identities. Our team then assigned a gender to these individuals based on pronouns and other descriptors the interviewees used.

A gender gap clearly emerged. Women made up only about 40% of those described as STEM people but 70% of those described as not STEM people. This latter group almost always included our interviewees' mothers.

man and boy working with tools on a robot toy
Among those whom students named as influential in shaping their own STEM identity, the majority were male.
athima tongloom/Moment via Getty Images

Updating stereotypes about STEM workers

When first examining the data, we assumed that college students didn’t recognize their mothers as STEM people because of gender stereotypes. Some students were reluctant to describe their mothers as STEM people even when both parents worked in STEM professions – in one case, both parents even held the same college STEM degree.

After closer examination, we noticed that a few students labeled their fathers as not a STEM person. These fathers shared one thing in common with mothers labeled the same way: They all played the role of primary caregiver.

Even in cases where mothers or fathers held a college degree in a STEM field, students consistently diminished the STEM identity of the parent who took on the bulk of the child-rearing responsibilities. As a result, we recognized that something other than gender contributed to students’ perceptions of their parents’ STEM identities.

When pressed to describe why they did not see their primary caregivers as STEM people, our interviewees generally pointed to two things: failure to display STEM interests and failure to display STEM knowledge.

When asked about their parents’ STEM interests, most interviewees described parenting as an all-consuming task that doesn’t leave room for STEM. However, this view generally did not apply to both mothers and fathers, but rather to the parent taking on the role of primary caregiver.

Similarly, most students pointed to the parent who often engaged in conversations about STEM topics as more knowledgeable, and this view also tended to exclude the primary caregiver.

Why what parents demonstrate matters

Children who grow up with the expectation of becoming a primary caregiver may come to see their own caregivers' limited displays of STEM interests and knowledge as par for the course. And because the role of primary caregiver continues to be associated with women, some girls may grow up believing that being a committed parent and a STEM person are incompatible roles.

Of course, STEM workers have families, and many, both men and women, are primary caregivers at home. But stereotypes are hard to break. If STEM industries want to attract more women, or if parents want their daughters to grow up to become STEM professionals, then children need to see parenthood and STEM jobs as compatible.

When parents talk to their children about their STEM-related interests and share their knowledge, children are more likely to learn that they can grow up to be both a parent and a STEM person. This approach can have an outsize effect on young women who grow up with the expectation of raising a family one day.

Creating opportunities for children to encounter female role models who are in the STEM professions is vital for attracting and recruiting women to STEM fields. Our study suggests it’s also crucial for children to see scientists and engineers as parents and caregivers with children of their own.

The Conversation

Remy Dou offers pro bono consulting services to Tumble Science Podcast for Kids and Cumbre Kids.

ref. Identifying as a ‘STEM person’ makes you more likely to pursue a STEM job – and caregivers may unknowingly shape kids’ self-identity – https://theconversation.com/identifying-as-a-stem-person-makes-you-more-likely-to-pursue-a-stem-job-and-caregivers-may-unknowingly-shape-kids-self-identity-254771

US women narrowed the pay gap with men by having fewer kids

Source: The Conversation – USA (2) – By Alexandra Killewald, Professor of Sociology, University of Michigan

Women typically earn less than men per hour that they work. MoMo Productions/DigitalVision via Getty Images

Women in the U.S. typically earned 85% as much as men for every hour they spent working in 2024. However, working women are faring much better than their moms and grandmothers did 40 years ago. In the mid-1980s, women were making only 65% as much as men for every hour of paid work.

Women’s wages have improved relative to what men earn in part because of gains in their education and work experience, and because women have moved into higher-paying occupations. But progress toward pay equality has stalled.

As sociologists and demographers, we wanted to know whether changes in American families might also have helped women come closer to pay equality with men. In an article published in June 2025 in Social Forces, an academic journal, we argued that this pay gap is becoming smaller in part because women are having fewer children.

Moms earn less but dads earn more

In the U.S. and elsewhere, ample evidence shows that parenthood affects men’s and women’s wages differently.

Compared to remaining childless, motherhood leads to wage losses for women. And those losses are larger when women have more kids.

By contrast, after men become fathers their wages usually rise.

Because having kids tends to push women’s wages down and men’s wages up, parenthood widens the gender pay gap.

Young girls play with their father and pet the dog sitting on his lap.
When men have kids, it doesn’t depress their wages the way it does for women.
MoMo Productions/Stone via Getty Images

Decline in birth rate plays a role

Americans are having fewer kids in general. Women, including those who don’t work outside the home, had an average of about three children by their 40s in 1980. By 2000, that average had fallen to 1.9, and it has been fairly stable since then.

To see whether changes in how many kids working American moms have affect what they earn relative to men, we analyzed data collected from a nationally representative sample of U.S. families. We tracked trends over time in the number of children that employed Americans ages 30-55 have.

We found that employees’ average number of children fell significantly between 1980 and 2000, declining from around 2.4 to around 1.8. That average stabilized after 2000; employees had an average of about 1.8 children in 2018 – the most recent year in our analysis.

At the same time, the pay that women in this age range earned per hour relative to men rose steeply. It climbed from 58% in 1980 to 69% by 1990 and then rose more gradually to 76% by 2018. That is, as people were having fewer kids, the gender pay gap got smaller. For both trends, there was rapid change in the 1980s, followed by slower change after 1990.

We next estimated whether declines in the number of children men and women have can explain the narrowing of the gender pay gap between 1980 and 2018.

We found that, even after adjusting for other factors, such as years of education, prior work experience and occupation, about 8% of the decline in the gender pay gap can be explained by the lower number of children working women and men are having.

Next, we showed that the decline in the number of children American employees have was faster in the 1980s than later on. That slowdown coincided with a deceleration of women's gains in pay relative to men. Once the average number of children among U.S. employees stabilized around 2000, so did women's progress toward earning as much as men.

Questions about the future of US fertility

U.S. scholars and policymakers are debating whether and why Americans are having fewer children today than one or two decades ago, and what the government should do about it.

We agree that these are important questions.

Our research shows that any future changes in how many children Americans have are very likely to affect how quickly women and men reach pay equality. But that link isn't inevitable.

The number of children Americans have affects the gender pay gap only because parenthood decreases women’s wages while increasing men’s wages. As long as these unequal effects of parenthood on what men and women earn persist, they will continue to act as a brake on women’s progress toward equal pay.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. US women narrowed the pay gap with men by having fewer kids – https://theconversation.com/us-women-narrowed-the-pay-gap-with-men-by-having-fewer-kids-261811

Does anyone go to prison for federal mortgage fraud? Not many, the numbers suggest

Source: The Conversation – USA (2) – By Jay L. Zagorsky, Associate Professor Questrom School of Business, Boston University

Go directly to jail? Not quite. Sergey Chayko/Getty Images Plus

Mortgage fraud is back in the news. Lisa Cook, a Federal Reserve governor, is being investigated by the Department of Justice for allegedly making false statements when applying for a mortgage. Members of Donald Trump's Cabinet are accused of similar wrongdoing. Could any of these people go to prison?

Mortgage fraud is not a new problem. Subprime mortgage fraud fueled the 2008 financial meltdown, when large numbers of very risky mortgages defaulted. Mortgage fraud was also a key feature of the savings and loan crisis in the 1980s.

Mortgage applications are very long, so there’s plenty of opportunity to make mistakes. Plus, they require borrowers to declare that everything is “true, accurate, and complete.” Misrepresentation can trigger potentially large civil and criminal penalties.

As a business school professor, I was curious how many people are convicted of mortgage fraud today. After all, relatively few people went to jail for fraudulent loans back in 2008. Since most mortgage fraud violates federal law, I looked at more than a decade of federal conviction data. What I found was clear: Almost no one has gone to federal prison recently for lying on a mortgage application.

What is mortgage fraud?

Mortgage fraud is when someone intentionally misrepresents facts in order to obtain a property loan. People can lie about many things on a mortgage application, such as their income, assets or employment status, or whether they will occupy the home being purchased or rent it out.

Being caught lying to get a mortgage can be costly. The maximum federal sentence is 30 years, with fines of up to US$1 million. Because more than a quarter of all mortgages are guaranteed by federal agencies, and many are acquired by quasi-government organizations like Freddie Mac and Fannie Mae, most mortgage fraud is a federal crime.

However, just because there are laws on the books doesn’t mean they’re enforced. For example, I work in Boston, where for years jaywalking has been illegal – but as any visitor quickly notices, no one pays any attention to this rule.

How many people are convicted?

The U.S. Sentencing Commission provides detailed data on every person convicted of federal crimes since 2013. The database is large, since federal courts convict almost 70,000 people each year.

However, very few people are convicted of federal mortgage fraud. Just 38 people in the country were sentenced for such crimes in 2024, and among that small group, four of the convicted got no prison time. A year earlier, just 34 people were convicted and seven avoided prison.

Over the past dozen years, fewer than 3,000 people were convicted of federal mortgage fraud, and the number of people sentenced fell steadily each year.

Three thousand convictions represent a tiny fraction of the mortgages issued. The Consumer Financial Protection Bureau estimates that almost 100 million new mortgage loans were written to purchase or refinance a home over the past 12 years. For those who like precision, 3,000 is only 0.003% of 100 million.
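As a back-of-the-envelope check of the article's figure, the arithmetic can be sketched in a few lines of Python. Both numbers are taken directly from the text: roughly 3,000 federal convictions over a dozen years against the CFPB's estimate of almost 100 million new mortgage loans in the same period.

```python
# Numbers from the article: ~3,000 federal mortgage fraud convictions
# over 12 years vs. ~100 million mortgages written (CFPB estimate).
convictions = 3_000
mortgages = 100_000_000

# Share of mortgages that led to a conviction, as a percentage.
share = convictions / mortgages * 100
print(f"{share:.3f}%")  # prints 0.003%
```

The result matches the article's "only 0.003%" characterization.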

The Sentencing Commission’s files also offer insight into who gets convicted of mortgage fraud. Three-quarters were men. More than 90% were U.S. citizens. The typical person convicted of mortgage fraud is a man in his late 40s with an associate degree, the data suggests.

The real penalty

While the maximum penalty is 30 years, almost no one serves that long a sentence. In 2024, the maximum sentence handed out was just 10 years. Since 2013, 15% of those convicted got no jail time. The average sentence for people who did get jail time was 21 months, which is less than two years behind bars.

Fines are also much lighter in practice than the maximum $1 million penalty. In 2024, the maximum fine handed down was a quarter-million dollars. Since 2013, the average person convicted of mortgage fraud paid a fine of less than $6,000, with over half of those convicted paying no fine at all.

But paying no fine, or only a small one, doesn't mean there's no financial penalty. The courts required most of those convicted to make restitution. In 2024, half of all people convicted had to pay at least a half-million dollars to reimburse their victims, such as lending companies. Over the dozen years I looked at, the average person convicted paid $2 million in restitution for their misdeeds.

More lightning strikes than convictions

It’s impossible to know how common mortgage fraud really is. Some mortgage applications are rechecked in a “post-closing audit.” However, these audits happen within 90 days after the mortgage money is disbursed. Beyond that window, if a loan is paid back on time and without problems, there’s little incentive for a bank or mortgage service provider to recheck an applicant’s information.

What is clear is that while millions of mortgages are written each year, only a tiny fraction of mortgage recipients go to jail for fraud. One way to put this tiny fraction into perspective is to compare it with the National Weather Service estimate that approximately 270 people are hit by lightning yearly. Last year, lightning hit over seven times more people than the federal government convicted of mortgage fraud.
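The "over seven times" claim can likewise be verified with the article's own numbers: roughly 270 lightning strikes per year versus the 38 people sentenced for federal mortgage fraud in 2024.

```python
# Numbers from the article: ~270 people hit by lightning yearly
# (National Weather Service estimate) vs. 38 federal mortgage
# fraud convictions in 2024.
lightning_strikes = 270
convictions_2024 = 38

ratio = lightning_strikes / convictions_2024
print(f"{ratio:.1f}x")  # prints 7.1x
```

A ratio of about 7.1 supports the article's "over seven times" comparison.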

Years ago, I filled in a mortgage application to buy a home. I was consumed with dread wondering if any application mistake would result in my being sent to jail. After looking at the mortgage fraud conviction data, I should have been more worried about being hit by lightning.

The Conversation

Jay L. Zagorsky does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Does anyone go to prison for federal mortgage fraud? Not many, the numbers suggest – https://theconversation.com/does-anyone-go-to-prison-for-federal-mortgage-fraud-not-many-the-numbers-suggest-265242

Fed, under pressure to cut rates, tries to balance labor market and inflation – while avoiding dreaded stagflation

Source: The Conversation – USA (2) – By Jason Reed, Associate Teaching Professor of Finance, University of Notre Dame

Interest rates are a tricky balancing act, as Fed Chair Jerome Powell knows well. AP Photo/Alex Brandon

The Federal Reserve is in a nearly impossible spot right now.

Markets are expecting a quarter-point interest rate cut to a range of 4% to 4.25% when the Fed policy-setting committee concludes its latest meeting on Sept. 17, 2025. After all, the slowdown in the jobs market, as well as a massive revision to past figures showing close to a million fewer jobs were created than previously reported, makes a strong case for lower interest rates to shore up the economy.

But at the same time, inflation – the other component of the Fed's dual mandate – has begun to accelerate again. As tariffs squeeze consumer spending in the hardest-hit sectors – such as clothing and electronics – other inflationary pressures loom on the horizon.

Either a slowing economy or rising inflation on its own is a circumstance that policymakers want to avoid. But as an economist and finance professor, I'm increasingly concerned about the risk that they happen at the same time – a horrible economic condition known as stagflation – and that the Fed may be too slow in responding.

Between a rock and a hard data point

The Fed has been under pressure to cut rates for some time – including from President Donald Trump.

The reason markets and the White House are so interested is that what the Fed does matters. The central bank's decisions at its near-monthly meetings help banks and other lenders determine rates on auto loans, mortgages, credit cards and more. Lower rates usually lead businesses and consumers to borrow and spend more, boosting economic activity. This can also drive up inflation.

For the better part of three years, the central bank has been focused on its generational fight against inflation. But now, with inflation down significantly from its 40-year high of 9% reached in 2022 and the jobs market sputtering, conditions finally seemed right to resume cutting rates.

The labor market has seen continued deterioration, most notably with the Bureau of Labor Statistics’ revisions to nonfarm payrolls – in effect reducing the number of jobs economists thought the U.S. gained by almost 1 million for the year ending in March 2025.

But a recent uptick in inflation has made the Fed’s call more complicated.

Over the past four months, the consumer price index has consistently ticked up, with the most recent CPI figure indicating year-over-year inflation of 2.9% – well above the Fed’s target of 2%.

Switching focus to jobs

At the Fed’s last meeting in August, Chair Jerome Powell said that the risks to the labor market now exceed the risks of inflation.

For example, for the first time since 2021, the number of unemployed people has outpaced job vacancies, as companies have moved to eliminate open positions before laying off workers.

Most compelling is the so-called U6 unemployment rate, which counts people in the regular unemployment figures, those who have stopped looking for jobs, and those working part time while seeking full-time opportunities. It has risen over the past three months to 8.1%.

The evidence suggests that businesses are reluctant to add workers as tariff policy and broad economic uncertainty appear to drive hiring decisions.

a black-and-white photo shows classic cars and a man pushing a lawnmower in a long line on the road
The last time there was stagflation was the 1970s, which led to long lines for cars – and mowers – at the gas stations.
AP Photo

The worst of both worlds

The short-term risk here is that a quarter-point cut won’t be enough to shore up the jobs market, and it may be too late to prevent the economy from tipping into recession.

The longer-term risk is more concerning: Not only could the economy contract, but it could do so while inflation accelerates.

The last time the U.S. experienced stagflation was in the 1970s, when an oil embargo caused the price of crude to double. This drove up inflation while causing unemployment to soar and the economy to stall. Policies aimed at reducing inflation typically exacerbate slowing growth, and vice versa. In other words, there were fewer dollars to go around – and those dollars were worth a little less every day.

The pain experienced during this previous bout of stagflation convinced a generation of economists and policymakers that the condition was to be avoided at all costs.

The Fed, which has consistently shown its hand and has guided the markets toward this week’s rate cut, now has to make what seems like an impossible decision: cut rates even if doing so will add inflationary pressures.

And there are other potential headwinds for the U.S. economy. For example, it has yet to fully absorb the impact of Trump’s immigration crackdown on productivity and output due to the loss of workers. Waning consumer confidence suggests consumer spending could soon drop. And a potential federal government shutdown looms in September.

In my view, it’s clear that a cut is warranted. But will it drive up inflation? Economists like me will be watching this closely.

The Conversation

Jason Reed does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Fed, under pressure to cut rates, tries to balance labor market and inflation – while avoiding dreaded stagflation – https://theconversation.com/fed-under-pressure-to-cut-rates-tries-to-balance-labor-market-and-inflation-while-avoiding-dreaded-stagflation-265361

After Charlie Kirk’s murder, the US might seem hopelessly divided – is there any way forward?

Source: The Conversation – USA (2) – By Lee Bebout, Professor of English, Arizona State University

Many people think the U.S. is at an inflection point. StudioM1/iStock via Getty Images

Shortly following the fatal shooting of conservative activist Charlie Kirk, many politicians and pundits were quick to highlight the importance of civil discourse.

Utah Gov. Spencer Cox called for an “off-ramp” to political hostilities, while California Gov. Gavin Newsom released a statement condemning political violence. He lauded Kirk’s “commitment to debate,” adding, “The best way to honor Charlie’s memory is to continue his work: engage with each other, across ideology, through spirited discourse.” Political commentator Ezra Klein wrote, “You can dislike much of what Kirk believed and the following statement is still true: Kirk was practicing politics in exactly the right way.”

With so many Americans consuming political content via siloed social media feeds and awash in algorithms that stoke outrage, these ideals may seem quaint, if not impossible.

Clearly, murder is a no-go. But what does it mean to practice politics "the right way"? How can people engage "across ideology" in a "spirited" way?

Well, one way not to practice politics the right way is to deny the other side a voice of authority. Since 2016, the organization Kirk co-founded, Turning Point USA, has hosted the Professor Watchlist. The online database generated harassment campaigns against professors, leading to calls for firings, hate mail and death threats. To be sure, the left has not been without its own excesses of harassment in recent years.

Kirk was also known for going to college campuses and speaking to students: entering the lion’s den and affably challenging audiences to “change my mind.”

To me, the impulse to shut down the other side, combined with the “change my mind approach” to debate, has only exacerbated political polarization and entrenchment. Instead, I propose a few different ways of thinking about conversations with people whose views differ from your own.

The fantasy of swiftly changing minds

In my forthcoming book, “Rules for Reactionaries: How to Maintain Inequality and Stop Social Justice,” I explore the language strategies used to advance white supremacy and anti-feminism across U.S. politics and culture.

Deliberative democracy is the idea that decision-making and governance are arrived at through thoughtful, reasoned and respectful dialogue. This may take the shape of debates in Congress or robust questioning in town halls. But deliberative democracy also shapes the way all neighbors or citizens treat each other, whether on the street or at the dinner table.

I contend that a big stumbling block that prevents the U.S. from tackling its biggest problems is how Americans conceptualize deliberative democracy: There’s a fantasy that people’s minds can be easily changed, if only they’re given certain information or hear certain arguments.

In the 1990s, this was epitomized through former President Bill Clinton’s Initiative on Race, a program that he framed as a vehicle for social and political transformation. Clinton believed that an advisory board of experts could foster a meaningful national dialogue and produce necessary healing.

In response, conservative political figures objected both to the need for a conversation in the first place and to the makeup of the committee leading it.

By the time Clinton’s second term ended, the initiative had quietly disappeared, mentioned only in passing in Clinton’s memoir. Yet with each subsequent racial flash point, from the arrest of Henry Louis Gates Jr. in 2009 to the murder of George Floyd, calls for a national conversation resurfaced. And still, race remains a politically and culturally salient issue.

Similarly, many Americans view friends, relatives and colleagues as targets for conversion. Because of the nature of my research, I often get a version of this question from my students: “How do you change someone’s mind if they say they’re a socialist?” Or they may frame it as, “I’ve got Thanksgiving with my family coming up, and my Uncle Johnny is so transphobic. How do I convince him to support trans rights?”

Cultural theorist Lauren Berlant would describe these encounters as moments of cruel optimism. There’s the belief that what you’re about to do is good and worthy. But time and again, you’re met with feelings of futility and frustration.

When debating politics, many people crave a chance to engage with someone they disagree with. There’s the hope of changing hearts and minds. But few minds – if any – change that quickly, and approaching these conversations as small windows of opportunity ends up being their downfall.

Opening minds instead of changing them

There are more fruitful approaches to conversation than merely trying to best someone in an argument by deploying buzzwords or “gotcha!” moments.

Rather than trying to immediately change someone’s mind, what if you entered a conversation with the goal of simply planting seeds? This approach transforms the dialogue from an attempted conversion into a legitimate conversation, wherein you’re merely offering your partner something to consider after the fact.

Another strategy involves remembering that conversations often have multiple audiences.

Consider the Thanksgiving dinner with Uncle Johnny. What if, instead of focusing on trying to convert him, the speaker recognized that there were other listeners at the table? Perhaps they could rethink the encounter not as converting an opponent, but as modeling to relatives how to have a conversation about one’s values with a loved one with whom they vehemently disagree. Or perhaps the speaker could recognize that a cousin at the table may be closeted, and take it upon themselves to model how to push back against transphobia.

In both cases, the conversion of Uncle Johnny ceases to be the objective. Civic dialogue and persuasion remain.

Change is slow but never futile

If the U.S. is going to heal its civic life through dialogue, I think it will require Americans to not just speak with those they disagree with, but to listen to them as well.

Krista Ratcliffe, a scholar of rhetoric at Arizona State University, has written about her concept of “rhetorical listening.” Listeners, she argues, must not simply be attuned to the words a speaker says, but also to the life experiences and ideologies that shape those words.

Rhetorical listening means avoiding the urge to one-up the opponent or convert the unwashed masses. Instead, you’re entering into dialogue from a position of curiosity, with a willingness to learn and grow.

Many people believe that the U.S. is at an inflection point. Will families and friendships continue to be torn apart? Will greater political polarization lead to more violence? Often it feels hopeless.

Like Sisyphus, many Americans probably feel like they keep pushing a boulder up a hill, only for it to roll back down. The error would be for Americans to be surprised each time it does, shocked that there was no progress and that everyone has to start over again.

While the Sisyphean task of deliberative democracy requires that citizens push the boulder day in and day out, they should also recognize that the boulder’s collective weight will gradually and imperceptibly alter the terrain.

Moreover, as the French philosopher Albert Camus once wrote, it’s important to “imagine Sisyphus happy” – to continue to seize what joy can be had as this hard work plods along.

The Conversation

Lee Bebout does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. After Charlie Kirk’s murder, the US might seem hopelessly divided – is there any way forward? – https://theconversation.com/after-charlie-kirks-murder-the-us-might-seem-hopelessly-divided-is-there-any-way-forward-265248

Starmer’s Downing Street reshuffle: why he’s brought in Darren Jones for ‘phase two’ of his government

Source: The Conversation – UK – By Bradley Ward, Teaching Fellow, Department of Politics and International Relations, University of Southampton

Flickr/Number 10 , CC BY-NC-ND

Prime Minister Keir Starmer was forced to make changes to his top team by Angela Rayner’s resignation amid a stamp duty scandal, but has ended up using the opportunity to implement a major reshuffle at No 10 too. Key roles have been handed out to ministers on Labour’s right flank, who have been tasked with implementing phase two of the Labour government’s “plan for change”: “delivery, delivery, delivery”.

According to No 10, phase one – fixing the foundations – is complete. It can now move beyond repairing the wreckage left by the Conservatives and onto delivering core policy goals. In shifting focus, Starmer’s hope is that the government can convincingly demonstrate “real change” to the electorate in time for the next election. But he is also aiming to consolidate his team’s grip over the government machine.

A low-profile but highly significant decision has been taken to move Darren Jones from his role in the Treasury to become the chief secretary to the prime minister. In the first cabinet meeting since the reshuffle, Jones sat two seats down from Starmer.

In this newly created role, Jones has been handed “operational oversight” of the government’s programme and will lead a new No 10 delivery unit. The unit replaces the short-lived Mission Delivery Unit, which was created when Labour first entered office. Other notable appointments to Starmer’s top office reflect his push for a more assertive approach to economic policy and a sharper communications strategy.

The reshuffle is a response to a longstanding problem facing British prime ministers: having power over decision-making with limited capacity to deliver policy. From Boris Johnson’s attack on the courts to Liz Truss’s rows with the Office for Budget Responsibility, prime ministers have long lamented the weakness of the levers available to them to execute their agenda.

In line with this, Starmer is reported to have become frustrated at how slowly decisions are implemented. He has moved forward with plans to transform the “overcautious and flabby state” by recouping powers transferred to quangos.

The latest No 10 reshuffle is largely consistent with this trajectory. The UK’s system of governance and democracy is widely regarded as one of the most centralised in the western world. And yet, in managing a wider political system characterised by incoherence, prime ministers have generally responded by hoarding power through mechanisms of internal centralisation.

Starmer’s government, for example, is planning to give the justice secretary the power to veto decisions made by the sentencing council. Similarly, new housing secretary Steve Reed promises to “build, baby, build” – seeing through on Conservative efforts to weaken challenges to the government through the courts by diluting planning regulations.

And as former policymaker Sam Freedman illustrates in his recent book Failed State, cabinet and personnel changes have become increasingly centred around loyalty to the prime minister’s agenda.

Have the foundations been fixed?

Some of Labour’s proposals to fix the foundations seek to address this problem of power without capacity. Transferring hundreds of digital services onto a single platform should make it easier for citizens to navigate public services, for example.

The creation of a delivery unit should give the centre stronger strategic infrastructure. And abolishing some quangos, such as NHS England, will reduce unnecessary administrative duplication and simplify the centre’s capacity to direct policy.

Ministers around the cabinet table.
The prime minister hosts a cabinet meeting with Darren Jones two seats to his right.
Number 10/Flickr, CC BY-NC-ND

Jones’s move to No 10, as well as the hiring of several economic advisers, appears to be a welcome attempt to reassert the prime minister’s authority over the Treasury, which is widely regarded as having become too powerful.

However, despite Starmer’s claim to have successfully completed phase one, the hyper-centralised yet highly fragmented system behind the quandary of power without capacity remains firmly intact. Central government continues to hold a monopoly over executive and legislative power, while frontline provision remains gutted by cuts to public services, a toxic dependency on outsourcing and weak local government.

In the absence of more foundational reforms to the UK’s system of governance, executive centralisation only serves to burden central government with more responsibilities which it lacks the resources to handle.

What can be done?

Many options are available to improve the capacity of the government to “deliver, deliver, deliver”. For example, Treasury monopoly power over fiscal policy might be broken up via a department for growth responsible for macroeconomic policy.

A new department for the civil service, overseen by a revamped civil service board, would bolster accountability for implementation across central government.

Policy delivery could also be boosted by strengthening local government. The 2022 Brown report, commissioned by the Labour party in opposition but largely ignored since the party entered government, provides many useful proposals along these lines. These include replacing the House of Lords with an elected chamber of nations and regions, and a transfer of greater fiscal powers to local governments to allow them to be more responsive to local needs.

In lieu of a more urgent and transformational programme to rebuild state capacity at both the central and local levels, however, the prospects for delivery during the Starmer government’s second phase look bleak.




The Conversation

Bradley Ward receives funding from Leverhulme Trust.

Joseph Ward has received funding from the ESRC.

ref. Starmer’s Downing Street reshuffle: why he’s brought in Darren Jones for ‘phase two’ of his government – https://theconversation.com/starmers-downing-street-reshuffle-why-hes-brought-in-darren-jones-for-phase-two-of-his-government-264550