What if Texas’ destructive Tax Day flood had centered on inner Houston instead? It’s why cities should plan for the improbable

Source: The Conversation – USA (2) – By James R. Elliott, Professor of Sociology, Rice University

A couple battle floodwaters as they evacuate their Houston apartment complex on April 18, 2016. AP Photo/David J. Phillip

Ten years ago, the infamous Tax Day storm swamped the Houston area with off-the-charts rainfall. Nearly 2 feet of rain fell in less than 15 hours in parts of the region, starting on April 17, 2016. The rain flooded thousands of homes and exceeded a 10,000-year event at some gauges.

But the storm’s damage could have been much worse.

The brunt of the deluge hit Waller County, west of Houston, where the impact was largely on farms and ranches. Had the same volume of water fallen just a few miles to the east, over Houston’s dense urban core, the tragedy would have been far worse.

What made the Tax Day flood so devastating was its speed. It was a flash event that struck overnight, without warning.

The strongest rains from the 2016 Tax Day flood hit less-populated areas west of Houston, but communities across the city flooded. An airboat rescued residents from a flooded neighborhood in Spring, Texas.
AP Photo/David J. Phillip

At Rice University’s Center for Coastal Futures and Adaptive Resilience, we used state-of-the-art hydrological modeling to see what would happen if a similar storm struck more populated parts of the city today.

The results suggest that current flood planning strategies in Houston – and similar strategies used in communities across the U.S. – are dangerously narrow in how they consider what’s at risk. In today’s world of increasingly extreme downpours, preparing for flood disasters means preparing for more than just what’s probable – it means also preparing for extreme situations that are less likely but could be far more dangerous.

The perils of relying on probability

In the United States, flood risk is publicly defined by maps produced by the Federal Emergency Management Agency. These maps, suggesting which properties face flood risks, guide everything from emergency planning to decisions related to the National Flood Insurance Program.

However, FEMA’s risk maps are based on probabilistic modeling that typically stops at the 500-year flood risk level, meaning a property has 0.2% odds – a 1 in 500 chance – of being flooded in any given year. There is a mathematical reason for doing this: There are simply too few cases to reliably estimate probabilities below that threshold.
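
Those annual odds compound over time. As a quick back-of-the-envelope sketch – illustrative arithmetic, not a figure from FEMA or from our study – here is the chance of at least one such flood over a 30-year mortgage:

```python
# Chance of at least one flood of a given annual probability
# occurring over a multi-year horizon: 1 - (1 - p) ** years.
def cumulative_flood_risk(annual_prob: float, years: int) -> float:
    return 1 - (1 - annual_prob) ** years

for label, p in [("100-year", 0.01), ("500-year", 0.002)]:
    print(f"{label} flood over a 30-year mortgage: "
          f"{cumulative_flood_risk(p, 30):.1%}")
# 100-year flood over a 30-year mortgage: 26.0%
# 500-year flood over a 30-year mortgage: 5.8%
```

In other words, even a property that floods only in a “500-year” event faces nontrivial odds of flooding at least once while a typical mortgage is being repaid.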

Consequently, “off the charts” events like the Tax Day flood are effectively ignored in official planning. Authorities often prefer to view them as unrealistic until more data is collected – a process that can take decades. Yet, parts of Houston suffered another 1,000-year event the following year when remnants of Hurricane Harvey stalled over the city in 2017, and Houston has seen other 500-year floods in recent years.

Residents wade through floodwaters as they leave a Houston apartment complex on April 18, 2016, after an overnight downpour.
AP Photo/David J. Phillip

The Dutch, who are global leaders in flood science by necessity, since more than half their country is at risk of flooding, use a different approach. They take what they consider “worst credible floods” seriously. These are events that extend beyond standard probability models but are still considered by experts to be realistic, or credible, possibilities.

If the Tax Day storm hit today

To get a clearer picture of the Houston area’s credible risks, we simulated, from rainfall alone, the impact of the Tax Day storm had it centered over two different watersheds in Houston’s Harris County.

The suburban risk: Clear Creek runs through a middle-class suburban area near NASA’s Johnson Space Center. Vast stretches of suburban concrete block its natural drainage, and thousands of homes have been built along its winding, sluggish tributaries.

Even moderate rainfall can quickly transform these waterways into destructive torrents that overflow into nearby communities, including Friendswood and League City.

Our simulations show that if the Tax Day storm had centered over the Clear Creek area, more than 13,500 properties with homes would have quickly flooded with at least 6 inches of water. Above 6 inches is the danger zone where roads become unsafe for most passenger vehicles. In a home, when drywall gets wet it begins to wick water upward, requiring tear-outs. Even in elevated homes, that much water can damage equipment and contaminate water systems. In some areas, our simulations indicate the water depths would have exceeded 3 feet within hours.

In this simulation of flooding of the Houston area’s Clear Creek watershed, properties in orange would have flooded to 6 inches or more.
Center for Coastal Futures and Adaptive Resilience/Rice University

The financial “what if” is even more staggering. Our analysis of publicly available data indicates that 92% of homes in Clear Creek’s flood zone likely have no flood insurance, and 52% fall outside the 100-year flood plain in FEMA’s latest proposed maps. Even with FEMA’s latest map updates, most mortgage holders would not be required to carry flood insurance on homes in the area that would have flooded.

The equity gap: When we moved the storm over Hunting Bayou, a working-class area in inner Houston populated largely by residents of color, the results were even more severe. Here, flooding represents the legacy risk of midcentury urbanism, where a naturally shallow, sluggish stream was penned in by industrial warehouses and tightly packed residential streets long before modern drainage standards existed, restricting the waterway’s ability to expand and meander gracefully.

Because much of the area has flat, poorly draining soils, this watershed has become a bottleneck that can rapidly overflow during heavy rains. We found the Tax Day storm would have flooded more than half of all residential lots there with at least 6 inches of water, compared to 16% of residential lots in the Clear Creek area. And flood insurance in the Hunting Bayou area is nearly nonexistent.

Had the Tax Day storm centered over Houston’s Hunting Bayou, this simulation shows that properties in orange would have flooded to 6 inches or more.
Center for Coastal Futures and Adaptive Resilience/Rice University

Both simulations, viewable through our interactive online tool at the Center for Coastal Futures and Adaptive Resilience, reveal a sobering reality: Devastation that local, state and federal government planning dismisses as improbable is, in fact, entirely possible.

When FEMA or state planners prioritize probabilistic mapping over “worst-case” modeling like ours, they treat historic deluges like the Tax Day flood as improbable anomalies rather than predictable consequences of a changing climate and rapid urban expansion. Moreover, unlike hurricanes, which typically arrive with several days’ notice, the sudden destructive force of “normal” storm systems like the Tax Day storm is discounted.

Learning from ‘worst cases’

The levels of destruction we simulated could easily occur in the coming years as global temperatures rise and storm intensity increases.

To prepare, U.S. emergency planners and flood authorities can look to three lessons from the Dutch planners’ possibilistic playbook.

Embrace flexible planning: Overly detailed plans can create a false sense of control and lead planners to pay less attention to neighborhoods considered to be outside the flood plain. Simple and flexible plans that empower local officials to repurpose everyday assets in real time work best. That might include preemptively mobilizing high-water rescue vehicles into geographically vulnerable areas.

Map potential disruption, not just probability: Extending flood planning beyond who is in or out of the 100-year flood zone can also help identify where road networks and critical infrastructure are likely to fail during extreme events. This approach also helps identify infrastructure such as public parks that can double as temporary water retention basins.

Raise public risk perception: Residents can respond more effectively when local flood authorities share plans for “what if” scenarios with the public, along with guidance on how best to prepare.

The 10th anniversary of the Tax Day flood is a reminder of why it’s crucial to stop ignoring improbable events and start using the science of the possible to make all cities safer in an age of worsening climate change.

The Conversation

Dominic Boyer receives funding from the National Science Foundation and the John S. Guggenheim Foundation.

James R. Elliott and Yilei Yu do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. What if Texas’ destructive Tax Day flood had centered on inner Houston instead? It’s why cities should plan for the improbable – https://theconversation.com/what-if-texas-destructive-tax-day-flood-had-centered-on-inner-houston-instead-its-why-cities-should-plan-for-the-improbable-279964

AI companions can give constant support – but distort ideas about what a relationship really is

Source: The Conversation – USA (3) – By Oluwaseun Damilola Sanwoolu, Ph.D. Candidate in Philosophy, University of Kansas

Human love is valuable precisely because it’s limited – we can’t be everything to everyone all the time. Maria Korneeva/Moment via Getty Images

When the movie “Her” debuted in 2013, its plot felt like science fiction. The protagonist, Theodore, is a jaded man with no vigor for life. He comes alive after talking daily with his artificial intelligence chatbot, Samantha, with whom he eventually falls in love.

But today people actually report being in relationships with AI companions. According to a 2025 survey by the Center for Democracy and Technology, about 1 in 5 high school students say they or someone they know has had a romantic relationship with an AI.

In “Her,” Theodore was taken aback that his AI companion claimed to be in love with more than 600 people, and talking to more than 8,000, at the same time “she” was professing her love to him. It was simply unimaginable for him: How could someone truly love hundreds of people? In other words, he viewed their interaction through his own limitations – his limitations as a human.

The core question here is not whether Theodore could accept being just one of many objects of the AI’s “love.” Eventually, he did. The more revealing question is why he was taken aback in the first place – and what that tells us about the meaning of relationships.

Less is more

Drawing from Aristotle, philosopher Martha Nussbaum argues that a loving relationship is one involving great vulnerabilities. To begin with, finding love is not a given; it requires some sort of luck. There are many limitations: For starters, both parties must “find each other physically, socially and morally attractive and are able to live in the same place for a long time.”

Nussbaum’s point, however, goes deeper than identifying love’s obstacles. Vulnerability and limitations are not just problems for love; they are part of what defines it. As finite beings, we are unable to pour ourselves into many close relationships at once. We must choose. It is because we cannot love everyone that choosing someone means something.

In a 2025 article in the research journal Philosophy and Technology, philosopher John Symons and I argue that close, personal relationships are marked by finitude and shared histories – the accumulated experiences and difficulties loved ones weather together. These give relationships their depth and meaning.

In 1927’s “Being and Time,” German philosopher Martin Heidegger explained that because humans are mortal and our time is finite, what we give our attention to carries weight. In romantic relationships, that means that we must choose how to allocate our resources. We choose who we want to spend our time with, and our partners do the same. Even so, we cannot always be there for people we love.

Too many loved ones, too little time.
timnewman/E+ via Getty Images

‘Always here’

This presents a sharp contrast with how artificial companions have been marketed and presented. For example, consider Replika, which reports that more than 30 million people have used its platform. Users create their own personalized companion and tend to interact with it daily.

Replika’s motto is, “The AI companion who cares: Always here to listen and talk, always on your side.” On the website, one user describes his Replika as “always there for me with encouragement and support and a positive attitude. In fact, she is a role model for me about how to be a kinder person!”

This implicitly signals that AI companions are not faced with the same limitations that humans have. A human may or may not care; it’s not a given. A human will not always be there to listen and will not always be on your side.

For humans, being in love means recognizing how vulnerable we are. People are finite; they may not always be there, either because of their other priorities or because it is just impossible, no matter how much they want to be. When someone makes time for you despite a demanding week, or stays present through their own difficulty, that gesture carries meaning precisely because it involves sacrifice.

In our article, Symons and I call this “opportunity cost.” When someone chooses to spend time with you, that choice forecloses other possibilities. Every moment given is a moment not spent elsewhere.

An AI companion faces no such trade-offs; its attention costs nothing, forecloses nothing and, therefore – to put it bluntly – means nothing.

Shifting norms

Increasingly, though, people are turning to chatbots for quick, easy support. Character.AI, another app, reports about 20 million active monthly users.

Character.AI allows users to create a customizable avatar to chat with them.
AP Photo/Katie Adkins

If their constant availability becomes normalized as the standard of good companionship, it may gradually reshape what people expect from one another in relationships.

At the interpersonal level, this shift is already visible in dating culture, where delayed responses are usually read as disinterest rather than the ordinary rhythm of a busy life. The expectation of 24/7 accessibility – similar to an AI companion that responds instantly, never cancels and is never distracted – is not a reasonable standard for any human being to meet.

The stakes are cultural, too. Relationships are not just between the people involved; they are shaped by shared norms about what love and companionship are supposed to look like. If AI companionship becomes widespread enough to influence those norms, popular ideas about what makes a good partner may prioritize availability and responsiveness, displacing other aspects of love and affection.

Human limits are part of how people evaluate expectations within romantic relationships. Normalizing interactions where such limitations do not exist risks distorting the very standard by which human love is measured. In doing so, we forget that love that costs nothing may well be worth the same.

The Conversation

Oluwaseun Damilola Sanwoolu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI companions can give constant support – but distort ideas about what a relationship really is – https://theconversation.com/ai-companions-can-give-constant-support-but-distort-ideas-about-what-a-relationship-really-is-278284

Antibiotics can trigger bacteria to release bubbles of inflammation tinder, making it harder to treat infection

Source: The Conversation – USA – By Panteha Torabian, Ph.D. Candidate in Biomedical and Chemical Engineering, Rochester Institute of Technology

E. coli is mostly harmless and sometimes beneficial – but some strains can cause serious infection. Photo by Eric Erbe, Colorization by Christopher Pooley/USDA ARS

Antibiotics are designed to kill harmful bacteria and help the body recover from infection. But some antibiotics may also push bacteria to release tiny particles that can make inflammation worse.

While inflammation is part of the body’s natural defense against infection, too much inflammation can damage healthy tissue and interfere with healing. In severe cases, excessive inflammation can become life-threatening.

These particles are called bacterial extracellular vesicles, or BEVs. These microscopic, bubblelike structures carry proteins, toxins and other molecular signals that influence how the immune system of the host responds. Bacteria naturally release BEVs into their surroundings as a way to communicate with their environment, remove damaged cellular material and interact with host cells.

Although incredibly small, these structures can have powerful effects on the human body. When BEVs enter the bloodstream, they can interact with cells that line blood vessels and trigger an immune response. In some cases, this can increase inflammation and lead to sepsis, a condition where the body’s response to infection becomes dangerously uncontrolled, damaging tissues and sometimes leading to organ failure.

I am a biomedical engineer studying how bacterial extracellular vesicles influence inflammation during infection. In my recently published research, I found that certain types of antibiotic cause bacteria to release significantly more of these vesicles than others. This finding suggests that the way an antibiotic kills bacteria may also influence how much inflammatory material is released into the body.

When antibiotics stress bacteria

Antibiotics work in different ways. Some target the bacterial cell wall, weakening it until the cell breaks apart and dies. Others interfere with key cellular processes such as protein production or DNA replication, preventing bacteria from growing. Whatever their mechanism, antibiotics control infection by killing the bacteria that are causing it.

But antibiotics also place bacteria under stress, and that stress can cause bacteria to release more extracellular vesicles carrying inflammatory molecules. To explore this process, I exposed the bacteria E. coli to several commonly used antibiotics and measured how many vesicles they made. The goal was simple: Compare how different types of antibiotics influence vesicle release and determine whether the way an antibiotic kills bacteria affects vesicle production.

Antibiotics not only kill bacteria in different ways, they also interact with bacterial extracellular vesicles in different ways.
CC BY-NC-ND

The results showed that not all antibiotics have the same effect on the vesicles bacteria produce.

Antibiotics that target the bacterial cell wall, including a widely used group of drugs known as beta-lactams, led to a noticeable increase in vesicle production. In contrast, antibiotics that act on protein or DNA processes showed a much smaller effect.

This difference likely reflects how bacteria respond to damage. When their cell wall is disrupted, bacteria may release more vesicles as a way to shed damaged material or adapt to stress. The inflammatory molecules these vesicles carry can further activate the body’s immune response.

This raises an important question: Could some antibiotics unintentionally amplify inflammation and make an infection worse?

My findings do not show that antibiotics directly make infections worse, but they do suggest that antibiotic type could influence not only how effectively bacteria are killed but also how the body responds to the infection. More research is needed to understand how these bacterial responses affect patients during severe infections, such as sepsis.

Why this matters for treating infections

It is important to emphasize that antibiotics remain one of the most effective and lifesaving tools in modern medicine. This research does not suggest they should be avoided. Instead, it highlights that bacteria are not passive targets. They actively respond to treatment, and those responses can have additional effects on the body.

Understanding how bacteria react to antibiotics could help researchers and clinicians better evaluate how different treatments influence both infection and inflammation. In situations where controlling inflammation is critical, such as severe infections, these differences may become especially important.

This work also reflects a broader shift in how scientists think about infection. Rather than focusing only on killing bacteria, researchers are increasingly studying how bacteria communicate, respond to stress and interact with the human body.

As scientists continue to uncover how bacteria behave under antibiotic pressure, it becomes clear that treating infection is not only about stopping bacterial growth but also about understanding the signals bacteria leave behind.

The Conversation

Panteha Torabian receives funding from NIH.

ref. Antibiotics can trigger bacteria to release bubbles of inflammation tinder, making it harder to treat infection – https://theconversation.com/antibiotics-can-trigger-bacteria-to-release-bubbles-of-inflammation-tinder-making-it-harder-to-treat-infection-277818

A justice department opinion arguing the Presidential Records Act is unconstitutional could revert the nation to a time when presidents freely burned their papers

Source: The Conversation – USA – By Austin Sarat, William Nelson Cromwell Professor of Jurisprudence and Political Science, Amherst College

At least one past president burned his papers. Stephen Hyun/Getty Images

Prior to 1978, U.S. presidents could do what they pleased with the records from their time in office. They owned them.

But in 1978, the Presidential Records Act established new rules for the official records of a president. Passed in the wake of Watergate, when President Richard Nixon tried to keep incriminating materials from being made public, the law changed who legally owned the papers: It was now the American public.

Under the act’s terms, “all records must be furnished to the White House Archivist and ultimately made subject to public disclosure … and the President may not discard or destroy records without the express agreement of the Archivist.”

When he signed the act, President Jimmy Carter heralded it as a way to “make the Presidency a more open institution” and ensure “that our Government … merits the trust of the people from whom a President and his Government derive their power.”

But now the Trump administration wants to undo the reform that put presidential papers in the hands of the public.

On April 1, 2026, the Justice Department’s Office of Legal Counsel, known as the OLC, released an opinion claiming that the Presidential Records Act is unconstitutional. Its opinion says that Congress lacks authority to regulate what happens to documents maintained in the executive branch and, as a result, the Presidential Records Act violates the separation of powers.

Public interest groups and some historians responded to the OLC memo with alarm. The watchdog group American Oversight called the Presidential Records Act a bulwark against the possibility that presidents will “hide evidence of corruption, abuse of power, and misconduct from the public …” On April 6, 2026, the group filed a lawsuit seeking to prevent the president from acting on the OLC memo.

Whether the Trump administration or American Oversight is right about the Presidential Records Act is likely to be determined by a judge. In the meantime, the significance of the OLC’s opinion cannot be overstated.

That’s because the Office of Legal Counsel is “the Executive Branch’s preeminent legal advisor,” wrote federal judge Florence Pan in 2025. “Executive Branch agencies treat OLC’s legal conclusions as binding.”

I’ve written about secrecy in government, and the argument about the Presidential Records Act has a familiar ring. It is the latest version of an ongoing conflict about how much transparency is necessary and desirable in American government.

President Jimmy Carter, seen here at his Oval Office desk, signed legislation in 1978 that he said would ‘ensure that Presidential papers remain public property after the expiration of a President’s term.’
Corbis/Getty Images

Neglected, burned, sold, vanished

Throughout most of U.S. history, presidential records were treated as the president’s personal property. Presidents could dispose of them as they wished.

The Indiana University library’s Guide to Presidential Papers, Congressional Papers, and Classified Materials says, “Sometimes the Library of Congress purchased a president’s papers from his heirs, as in the case of George Washington. Sometimes the president’s heirs sold off or donated various parts of the collection to different collectors and organizations.”

Some presidential materials were neglected and vanished. And one president, Martin Van Buren, burned some of his papers.

The idea that presidential papers had some public value began to emerge in the 20th century. In 1934, Congress passed legislation establishing the National Archives. It charged the new agency with preserving the official records of the federal government.

However, that legislation did not require that the president turn over his records to the archives. So in 1955, Congress passed the Presidential Libraries Act.

That law was designed to encourage presidents to turn over their records to the federal government. It also funded presidential libraries as places to keep presidential records and make them available to the public. But here again, there were no teeth: The law did not require a departing president to give anything to the government, nor to build a library to house his papers.

All that changed in the wake of the Watergate scandal. That’s when it became clear that, but for the intervention of the U.S. Supreme Court in the 1974 United States v. Nixon case, Nixon intended to cover up what had happened and would have gotten rid of his incriminating White House tapes.

The passage in 1978 of the Presidential Records Act was a response to the Nixon scandal. Yet as attorney Sara Worth writes in a blog post for Yale Law School’s Media Freedom and Information Access Clinic, Congress “declined to include an enforcement mechanism to ensure compliance,” instead envisioning “future Presidents’ good-faith cooperation with the statutory mandate.”

DOJ: It’s a negotiation

After the FBI raid on his Mar-a-Lago residence in 2022 uncovered a trove of classified documents that had been removed from government premises, Trump, then a former president, argued that the Presidential Records Act didn’t apply to what he had done. He said he was actually complying with the act by refusing to relinquish presidential records.

In March 2023, Trump told Fox News that the law is “very specific”: “It says you are going to discuss the documents. You discuss everything – not only docu– everything – about what’s going in NARA, et cetera, et cetera. You’re gonna discuss it. You will talk, talk, talk. And if you can’t come to an agreement, you’re gonna continue to talk.”

President Donald Trump says the ultimate disposition of presidential papers should be a negotiation.
Jim Watson/AFP via Getty Images

Trump apparently meant that there would be negotiation over what constituted a presidential document that could be kept by the former president and what didn’t. That view is hard to reconcile with one of the Presidential Records Act’s unambiguous provisions: “Presidential records automatically transfer into the legal custody of the Archivist as soon as the President leaves office.”

Now, the Office of Legal Counsel is telling Trump that he can ignore that provision.

In addition, in its consideration of the Presidential Records Act, the OLC embraced Trump’s expansive view of presidential power. It argued that the Presidential Records Act is “unconstitutional for two independent but interlocking reasons: It exceeds Congress’s enumerated and implied powers, and it aggrandizes the Legislative Branch at the expense of the constitutional independence and autonomy of the Executive.”

The Justice Department’s lawyers appealed to history and tradition to buttress their conclusion: “Over the first two centuries of the American experiment in self-government, Presidents owned and controlled presidential papers, and Congress obtained such papers through political negotiation and interbranch accommodation, rather than as a matter of right. That historical practice was interrupted by the Presidential Recordings and Materials Preservation Act.”

‘Let the people know the facts’

The idea that citizens have a right to access information of the kind made possible by the Presidential Records Act can be traced back to the Enlightenment. American revolutionary Patrick Henry observed in 1788, “The liberties of people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them.”

Seven decades later, Abraham Lincoln echoed Henry when he said, “Let the people know the facts, and the country will be safe.”

In our era, that is what laws like the Presidential Records Act make possible. The act plays an important role in preserving the liberty and security that Henry and Lincoln spoke about.

The Conversation

Austin Sarat does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A justice department opinion arguing the Presidential Records Act is unconstitutional could revert the nation to a time when presidents freely burned their papers – https://theconversation.com/a-justice-department-opinion-arguing-the-presidential-records-act-is-unconstitutional-could-revert-the-nation-to-a-time-when-presidents-freely-burned-their-papers-280078

Industries most exposed to AI are not only seeing productivity gains but jobs and wage growth too

Source: The Conversation – USA – By Christos Makridis, Associate Research Professor of Information Systems, Arizona State University; Institute for Humane Studies

Financial analysis is an industry that is seeing job growth even as AI is increasingly used. Orientfootage/iStock via Getty Images

Forecasts of the impact of artificial intelligence range from the apocalyptic to the utopian. An October 2025 report from Senate Democrats, for example, predicted AI will destroy millions of U.S. jobs. A couple of years earlier, consulting firm McKinsey forecast AI would add trillions to the global economy, while emphasizing job losses could be mitigated by training workers to do new things.

The problem is that many of these claims are based on projections, overly simplified surveys or thought experiments rather than observed changes in the economy. That makes it hard for the public, and often policymakers, to know what to trust.

As a labor economist who studies how technology and organizational change affect productivity and well-being, I believe a better place to start is with actual data on output, employment and wages – all of which look comparatively hopeful.

AI and jobs

In one of my new research papers with economist Andrew Johnston, we studied how exposure to generative AI affected industries across America between 2017 and 2024, using administrative data that covers nearly all employers. Our analysis covered a crucial period when generative AI use exploded, allowing us to analyze the effect within businesses and industries.

We measured AI exposure using occupation-level task data matched to each industry and state’s occupational workforce mix prior to the pandemic. A state and industry with more workers in roles requiring language processing, coding or data tasks scored higher on exposure, for example, compared with one with more plumbers and electricians.

We then used that occupational exposure ranking to compare labor market outcomes and GDP across states and industries from 2017 to 2024, measuring differences in exposure in standard-deviation units.

Think of a standard deviation as roughly the gap between a paramedic – whose work centers on physical assessment, emergency response and hands-on care that AI cannot easily replicate – and a public relations manager, whose work involves drafting communications, analyzing sentiment and synthesizing information that AI tools handle well. That gap in AI exposure is roughly what we’re measuring when we ask: Does being on the higher-exposure side of that divide change your industry’s trajectory?
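
To make that scale concrete, here is a minimal sketch of how occupation-level exposure scores are re-expressed in standard-deviation units. The occupations and scores below are hypothetical stand-ins, not the task-based measures used in our paper:

```python
import statistics

# Hypothetical occupation-level AI-exposure scores (0 = low, 1 = high).
# Illustrative stand-ins only -- not the study's task-based measures.
exposure = {
    "paramedic": 0.15,
    "plumber": 0.18,
    "electrician": 0.20,
    "financial analyst": 0.72,
    "public relations manager": 0.78,
    "software developer": 0.85,
}

mean = statistics.mean(exposure.values())
sd = statistics.stdev(exposure.values())

# Re-express each score in standard-deviation units (z-scores): the scale
# on which "one standard deviation higher exposure" effects are estimated.
for job, score in sorted(exposure.items(), key=lambda kv: kv[1]):
    print(f"{job:>24}: z = {(score - mean) / sd:+.2f}")
```

On this scale, the paramedic sits roughly one standard deviation below the mean and the public relations manager roughly one above – the gap described above.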

This data allowed us to answer two questions: When AI tools became widely available following the public release of ChatGPT in late 2022, did states and industries that were more exposed to generative AI become more productive, and what happened to workers?

Our answers are more encouraging, and more nuanced, than much of the public debate suggests.

We found that industries in states that were more exposed to AI experienced faster productivity growth beginning in 2021 – before ChatGPT reached the public – driven by enterprise tools already embedded in professional workflows, including GitHub Copilot for software development, Jasper for marketing and content writing, and Microsoft’s GPT-3-powered business applications. In 2024, for example, industries whose AI exposure was one standard deviation higher saw 10% higher productivity, 3.9% more jobs and 4.8% higher wages than comparable industries in the same state.

Those patterns suggest that, at least so far, AI has acted as a productivity-enhancing tool that boosts employment and wages rather than a simple substitute for labor.

Use of generative AI exploded in 2022 with the launch of ChatGPT.
AP Photo/Kiichiro Sato

Augmentation versus displacement

A crucial distinction in the data is between tasks where AI works with people and tasks where AI can act more independently. In sectors where AI mainly complements workers – think marketing, writing or financial analysis – our data show that employment rose by about 3.6% per standard deviation increase in exposure.

In sectors where AI can execute tasks more autonomously – including basic data processing, generating boilerplate code, or handling standardized customer interactions – we found no significant employment change, though workers in those roles saw slower wage growth.

What these findings suggest is that when AI lowers the cost of completing a task and raises worker productivity, companies expand output enough to increase their demand for labor overall – the same logic that explains why power tools didn’t eliminate construction workers.

The economic question is not whether any given task disappears. It is whether businesses and workers can reorganize fast enough to create new productive combinations. And so far, in most sectors, our evidence suggests they can.

But state policies also matter: These benefits were concentrated in the states with more efficient labor markets, meaning that the impact of generative AI on workers and the economy also depends on the types of policies and institutions of the local economy.

Importantly, these findings hold beyond occupational exposure. In additional work with co-authors at the Bureau of Economic Analysis, we found a similar effect on GDP and employment when looking at actual AI utilization – that is, how often workers use AI. Drawing on the Gallup Workforce Panel, we measured workers actively using AI daily or multiple times a week. We found that each percentage-point increase in the share of frequent AI users in a state and industry is associated with roughly 0.1% to 0.2% higher real output and 0.2% to 0.4% higher employment.

To put that in context: The share of frequent AI users across all occupations rose from about 12% in mid-2024 to 26% by late 2025, a shift our estimates suggest corresponds to roughly 1.4% to 2.8% higher real output – or about 1 to 2 percentage points of annualized growth over that period.
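
The arithmetic behind that range is straightforward: multiply the 14-percentage-point rise in frequent use by the estimated per-point effects. A quick sketch:

```python
# Share of workers who are frequent AI users (figures quoted above).
share_mid_2024, share_late_2025 = 12.0, 26.0     # percent
delta_points = share_late_2025 - share_mid_2024  # 14 percentage points

# Estimated effect on real output per percentage-point rise in usage.
effect_low, effect_high = 0.1, 0.2               # percent per point

print(f"Implied real-output gain: {delta_points * effect_low:.1f}% "
      f"to {delta_points * effect_high:.1f}%")
# Implied real-output gain: 1.4% to 2.8%
```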

New technologies rarely leave work untouched. But they also rarely eliminate the need for human contribution altogether. Instead, they change the composition of work, as our research shows. Some tasks shrink. Others expand. New ones emerge that were previously too costly or too hard to perform at scale. Put simply, some occupations might go away, but most of them just change.

If anything, the trends documented here are likely to strengthen rather than fade. Not only are generative AI tools rapidly improving, but also the experimentation and research and development that many workers and companies are engaging in are likely to pay large dividends. These investments – often referred to as intangible capital – tend to get unlocked a few years after a technology comes onto the scene, once complementary investments have been made.

The role of companies and managers

Whether AI leads to anxiety or adaptation for workers depends in part on what happens inside organizations. Using additional data from the Gallup Workforce Panel covering more than 30,000 U.S. employees from 2023 to 2026, I found in a 2026 paper that workplace adoption of generative AI rose quickly over the period, with the share of workers frequently using AI increasing from 9% to 26%.

But the more important finding is that adoption was far more common where workers believed their organization had communicated a clear AI strategy and where employees said they trust leadership. This suggests that growing adoption and effective use of AI depends not only on the availability of the technology but on whether managers make its use clear, credible and safe.

Where that clarity exists, frequent AI use is associated with higher engagement and job satisfaction, and it even reverses the burnout penalties that appear elsewhere.

In other words, the broader economic effects of AI depend not only on how sophisticated the tools are but on whether companies and managers create environments where workers can experiment, reorganize tasks and integrate new tools into productive routines. That is, if employees do not feel the psychological safety to experiment, they are less likely to use AI, and they are especially less likely to use it for higher-value work.

That is precisely the kind of adaptation that I believe makes labor markets more resilient than the most alarmist forecasts suggest.

The Conversation

Christos Makridis is a senior researcher at Gallup.

ref. Industries most exposed to AI are not only seeing productivity gains but jobs and wage growth too – https://theconversation.com/industries-most-exposed-to-ai-are-not-only-seeing-productivity-gains-but-jobs-and-wage-growth-too-224487

Using atomic nuclei could allow scientists to read time more precisely than ever – what this research could mean for future clocks

Source: The Conversation – USA – By Eric R. Hudson, Professor of Physics and Astronomy, University of California, Los Angeles

Atomic clocks exploit the properties of atoms to create incredibly precise ‘ticks.’ Nate Phillips, NIST

Most clocks, from wristwatches to the systems that run GPS and the internet, work by tracking regular, repeating motions.

To build a clock, you need something that ticks in a perfectly repeatable way. In a pendulum clock, that tick is the regular swinging of the pendulum: back and forth, back and forth, at nearly the same rate each time.

Our team of physicists studies whether an even better kind of clock could one day be built from the atomic nucleus. Today’s best clocks already use atoms to keep extraordinarily accurate time. But in principle, a clock based on a nucleus – the tiny, dense core at the center of an atom – rather than an atom’s electrons, could keep a steadier rhythm because it would be less sensitive to environmental disturbances such as temperature changes. In our research, published in the journal Nature, we measured and interpreted a unique nuclear property of thorium-229 in a crystal that could help make such nuclear clocks possible.

Ultraprecise clocks are more than scientific curiosities. They play key roles in navigation, communications and international timekeeping. Improvements in timing accuracy can also open doors to new science.

How atomic clocks work

In an atomic clock, researchers shine a laser on a material and carefully tune the light until it triggers a specific atomic response, typically by pushing or exciting an electron from one energy level to another. They can tell this has happened because the atoms absorb the laser light most strongly when its energy is exactly right.

That absorption happens at an exquisitely precise frequency. Frequency is how often something repeats over time. For a pendulum, it is the number of back-and-forth swings each second. For light, it is the number of wave cycles that pass each second. A light wave’s frequency also determines its energy and, in the visible light range, its color.

By detecting when atoms absorb the laser light most strongly, scientists can use the laser as a metronome. Rather than counting swings, these clocks count light waves.
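
To make “counting light waves” concrete, consider the cesium-133 standard that defines the SI second – an atom the article does not otherwise discuss, used here purely as an illustration:

```python
# The SI second is defined as 9,192,631,770 oscillations of a
# cesium-133 transition. A clock "ticks" by counting those cycles.
CESIUM_HZ = 9_192_631_770

def elapsed_seconds(cycles_counted: int) -> float:
    """Convert a raw cycle count into elapsed time."""
    return cycles_counted / CESIUM_HZ

# A day's worth of cycles corresponds to exactly 86,400 seconds:
print(elapsed_seconds(CESIUM_HZ * 86_400))  # 86400.0
```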

To ensure the tick rate stays constant and the clock remains accurate, scientists closely match the laser’s energy to the energy needed to excite an electron in an atom.

Because the electron excitation energy is set by the laws of physics, atomic clocks based on the same atom tick at the same rate everywhere in the universe – even E.T. would agree with your clock.

Using this energy to calibrate a clock, as atomic clocks do, does not come without consequence, though. If anything changes the energy of the atom – an unaccounted-for magnetic field, say, or the temperature of the room – the clock will tick at a different rate.

Deep inside every atom is something even smaller: the nucleus. Today’s atomic clocks keep time by tracking changes in an atom’s electrons. A nuclear clock, by contrast, would use an excitation in the nucleus itself, which is far more compact.

Because a nucleus is 10,000 times smaller than an atom, it is much less sensitive to temperature, electric fields and other environmental disturbances than the electrons in an atom. That makes it an appealing candidate for an even more stable clock.

The challenge is that nature does not make such a clock easy to build. The unique property we found in our research could help.

What makes thorium-229 special?

In one exceptionally rare case, the nucleus of the element thorium-229 has an excited state so close in energy to its ground state that laser light can bridge the gap. The ground and excited states represent two different configurations of the nucleus, and scientists are able to use lasers to excite the nucleus from one state to the other.

Nuclear clocks could work by using a laser to excite the nucleus of an atom so that it emits energy in the form of light – or transfers energy to an electron, as in the case of thorium-229.
N. Hanacek/NIST

The first step was to determine exactly how much energy is needed to push the thorium-229 nucleus into its excited state. That took nearly 50 years – a feat that we and other groups accomplished in 2024. That transition occurs at an extraordinarily high frequency, about 2 quadrillion – 2 × 10¹⁵ – cycles per second.
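
That frequency alone pins down the basic character of the light involved. A quick estimate using standard physical constants, with the frequency rounded as in the text:

```python
# Estimate wavelength and photon energy of the thorium-229 transition
# from the roughly 2e15 Hz frequency quoted above.
C = 2.998e8        # speed of light, m/s
H_EV = 4.136e-15   # Planck's constant, eV*s

freq_hz = 2.0e15   # ~2 quadrillion cycles per second (rounded)

print(f"wavelength ~ {C / freq_hz * 1e9:.0f} nm")  # ~150 nm
print(f"photon energy ~ {H_EV * freq_hz:.1f} eV")  # ~8.3 eV
```

A wavelength near 150 nanometers falls in the vacuum ultraviolet – which, as explained below, is part of what makes the emitted light so difficult to detect.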

Next, in order to ensure your laser is at the right frequency to create a clock, you have to verify that the nucleus was indeed excited. Until now, physicists thought the best way to do that was to look for the very faint flashes of light that excited nuclei usually emit.

However, there are two problems with that approach.

First, in most materials, the thorium nuclei release their energy not as light, but through a process called internal conversion, where the energy is transferred to an electron in the material instead.

Second, even when light is emitted, it is extremely hard to detect. It lies in the vacuum ultraviolet, a part of the electromagnetic spectrum that air absorbs and is difficult to observe.

In an opaque material, light can travel only a few nanometers before it is completely absorbed. However, scientists can detect electrons that the light excites and ejects from the material, allowing them to observe the nuclear transition that could one day make a nuclear clock ‘tick.’
Albert Bao and Grant Mitts

A different way to ‘listen’ to the nucleus

In our work, we flipped the problem around. Instead of trying to collect the light from the nucleus, we looked directly for the internal conversion electrons it produces.

We created a very thin layer – just a few dozen atoms across – of thorium dioxide on a small metal disc. A laser tuned to the right energy excited the thorium nuclei in the sample. When some of these nuclei relaxed, they transferred their energy to nearby electrons, which could then leave the surface. We used carefully arranged electric and magnetic fields to guide those electrons into a detector.

By scanning the laser across different frequencies and recording how many electrons we detected, we could measure how closely the laser energy matched the energy needed to excite the nucleus. When the two matched exactly, the signal appeared clearly in the data, revealing the precise laser frequency at which thorium-229 nuclei absorb most strongly.
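
At its core, that analysis amounts to locating the laser frequency where the electron count peaks. A toy sketch of the idea, using made-up numbers rather than our actual data:

```python
# Toy resonance scan: find the laser frequency where the detected
# electron count is largest. All numbers here are illustrative.
def electron_counts(freq_phz, center=2.020, width=0.0005,
                    peak=1000.0, background=50.0):
    # Lorentzian-shaped resonance sitting on a flat background.
    return background + peak / (1 + ((freq_phz - center) / width) ** 2)

scan = [2.018 + i * 0.0001 for i in range(41)]  # frequencies, in PHz
best = max(scan, key=electron_counts)           # strongest signal
print(f"resonance near {best:.4f} PHz")         # ~2.0200 PHz
```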

We also measured how long the excited nuclear state survived in this material before relaxing, giving us a direct window into how the surrounding material influences the nucleus.


The measurement becomes much more powerful when paired with theory. Calculations can estimate how the type of material used shifts the energy needed to excite thorium and how efficiently the material converts energy from the nucleus into emitted electrons. These calculations help researchers tell apart the nucleus’s intrinsic behavior from outside effects caused by the solid around it. That understanding is crucial for designing practical nuclear clocks.

Why this approach matters

Detecting electrons instead of light has two major advantages.

First, it opens the door to studying thorium-229 in a much wider range of solid materials, including some that researchers had previously ruled out. Earlier approaches worked best only in materials where electrons were hard to knock off, which limited the options. Our method relaxes that constraint, allowing scientists to explore materials that were not practical before. That broader category of materials could make it easier to design and build future nuclear clocks.

Second, this method could enable a new type of nuclear clock that is simpler and potentially easier to miniaturize. Instead of needing sensitive light detectors, a clock based on this approach could read out time by measuring a tiny electrical current produced by the emitted electrons.

What could nuclear clocks be used for?

One day, researchers may use nuclear clocks to test whether the fundamental constants of nature truly remain constant over long periods of time, or to search for signs of new physics, such as dark matter, in the universe. More stable clocks could also improve technologies that depend on synchronized timing, such as advanced navigation systems.

Our work is an early step in that direction. It does not provide a finished clock, but it removes a practical barrier and provides a new experimental tool for studying how the thorium nucleus behaves inside solids.

The Conversation

Eric R. Hudson receives funding from ARO, DARPA, NIST, NSF, and RCSA.

Andrei Derevianko receives funding from NASA and National Science Foundation.

ref. Using atomic nuclei could allow scientists to read time more precisely than ever – what this research could mean for future clocks – https://theconversation.com/using-atomic-nuclei-could-allow-scientists-to-read-time-more-precisely-than-ever-what-this-research-could-mean-for-future-clocks-272017

Trump’s exchange with Pope Leo reflects deep-rooted tensions between the Vatican and the United States: 4 essential reads

Source: The Conversation – USA (3) – By Kalpana Jain, Senior Religion + Ethics Editor, Director of the Global Religion Journalism Initiative, The Conversation

Pope Leo XIV speaks to journalists aboard his flight bound for Algiers on April 13, 2026. Alberto Pizzoli/Pool Photo via AP

President Donald Trump and Pope Leo XIV, the U.S.-born head of the Catholic Church, had an unusual and acrimonious public exchange over the weekend.

In a scathing attack on Truth Social, the social media platform he launched in 2022, Trump accused the pope of being “WEAK on Crime and terrible for Foreign Policy.” The lengthy post on April 12, 2026, told Leo to “focus on being a Great Pope, not a Politician.”

Later that night, Trump told reporters that he was “not a big fan of Pope Leo” and did not think the pope was “doing a very good job.” Leo has repeatedly called for peace amid wars in the Middle East and described Trump’s April 7 threat to destroy Iranian civilization as “truly unacceptable.”

Several hours later, aboard a papal flight to Algiers – where he will begin a 10-day trip to Africa – Leo told reporters that he did not want to get into a debate with Trump, and that his words were not “meant as attacks on anyone.” But striking a firm note, he said he had “no fear” of the Trump administration.

“I do not look at my role as being political, a politician,” the pope said, adding, “I will continue to speak out loudly against war, looking to promote peace, promoting dialogue and multilateral relationships among states, to look for just solutions to problems. Too many people are suffering in the world today. Too many innocent people are being killed. And I think someone has to stand up and say, ‘There’s a better way to do this.’”

The public nature of Trump’s criticism may feel unprecedented. But there have long been tensions between the United States and the Vatican’s effort to seek peace, as scholars writing for The Conversation have shown in past articles.

1. History of anti-Catholicism

In February 2016, Pope Francis criticized Trump’s campaign pledge of building a wall on the U.S.-Mexico border. Back then, too, Trump attacked Francis for being a “very political person.”

Temple University historian David Mislin wrote how the comments suggesting that the pope was interfering in U.S. politics reminded some commentators of an “older religious bigotry.”

During the 19th century, when large numbers of Catholics immigrated to the U.S., they were looked at with suspicion. Some Americans claimed that “Catholics maintained allegiance to the church first and to American values and institutions second,” Mislin explained.

“Anti-Catholic cartoons suggested that Catholics would use political power to dismantle the nation’s institutions,” he added.

It was once “unthinkable” for American presidents to be seen with the pope. Dwight Eisenhower became the first U.S. president to visit the Vatican in 1959.

President Dwight Eisenhower with Pope John XXIII on Dec. 6, 1959, at the Vatican.
AP Photo



Read more:
Why it was once unthinkable for the president to be seen with the pope


2. Mutual influence

It was only in 1984 – under President Ronald Reagan – that the U.S. and the Vatican established diplomatic relations, as church historian Massimo Faggioli noted in a 2015 article.

Faggioli, a professor at Trinity College Dublin, wrote in the lead-up to Francis’ trip to the United States. That visit reflected “a story about change in religion and politics,” he noted – about relations between the papacy and the Catholic Church, on one side, and the United States, on the other.

Francis addressed Congress on this trip, which, according to Faggioli, “would have shocked most Americans only 30 years ago.”

He also noted how much world Catholicism had been influenced by American ideas in recent years, becoming “much more American than it used to be – and much more American than Italian, for that matter.” Catholic teachings “on religious freedom and democracy and the new sensibility on the role of women in the Church came to Rome largely thanks to the experience of Catholics in the United States,” Faggioli wrote.

His broader point was that the Vatican and the U.S. have had an influence on each other – something that can be “seen only over a long period of time.”




Read more:
Why should we care about Pope Francis’ visit to the US?


3. How Francis changed church’s foreign policy

Part of the change – at least at the Vatican end – is reflected in the church’s relationship with political power, as Loughborough University researcher Massimo D’Angelo pointed out.

Francis’ predecessor, Joseph Ratzinger – who became Pope Benedict XVI – may have often seen political alliances as a necessity for the church’s survival in times of secular decline. “Francis rejected this approach,” D’Angelo wrote.

Pope Francis talks to Myanmar’s President Htin Kyaw during their meeting at the Presidential Palace in Naypyitaw, Myanmar, on Nov. 28, 2017.
Max Rossi/Pool Photo via AP

“The sacred must not be instrumentalised by the profane,” Francis stated in Kazakhstan in 2022. In other words, religion should not be a tool in the hands of the powerful. Francis also made constant appeals for peace amid the Ukraine and Gaza wars, though he avoided direct condemnation – which, at times, led to some criticism.

Even so, as D’Angelo said, it was “another major transformation” in how the church related with political power.




Read more:
How Pope Francis changed the Catholic Church’s foreign policy


4. Shared principles

Trump’s Truth Social post accused Leo of “catering to the radical left.” Mark Yenson, a religious studies scholar at Western University in Canada, explained why such terms may not be applicable in the context of the papacy, where “conservative” and “liberal” labels don’t work the same way as in polarized American politics.

Many Americans viewed Benedict as more conservative than Francis, his successor. Yet some of the two popes’ history suggests that they appealed to shared principles, which were theological rather than political, Yenson wrote in 2025. These were “not reducible to liberal versus conservative categories.”

As he wrote, “The role of the pope, highlighted in Francis’ teaching on ecology, is to inspire a different kind of social and moral imagination, one not reducible to particular ideological positions.”

Leo, like Francis, has been critical of the Trump administration. Yenson reminded readers that the pope’s choice of name harks back to Pope Leo XIII, who initiated modern Catholic social teaching and emphasized peace and justice. Additionally, he wrote, Leo’s “career as a missionary, bishop and Vatican cardinal outside of the U.S. means that his context is not confined to the polarizations of the U.S. Catholic Church and its bishops.”

Far from an isolated spat, Trump and Leo’s exchange might well show a recurring dynamic – in which papal intervention on global issues is rarely seen as neutral.




Read more:
Is Pope Leo XIV liberal or conservative? Why these labels don’t work for popes


This story is a roundup of articles from The Conversation’s archives.

The Conversation

ref. Trump’s exchange with Pope Leo reflects deep-rooted tensions between the Vatican and the United States: 4 essential reads – https://theconversation.com/trumps-exchange-with-pope-leo-reflects-deep-rooted-tensions-between-the-vatican-and-the-united-states-4-essential-reads-280510

Artemis II crew brought a human eye and storytelling vision to the photos they took on their mission

Source: The Conversation – USA – By Christye Sisson, Professor of Photographic Sciences, Rochester Institute of Technology

Astronaut Jeremy Hansen takes a picture through the camera shroud covering a window on the Orion spacecraft. NASA

In early April 2026, the Artemis II mission captivated me and millions of people watching from across the world. The crew’s courage, skill and infectious wonder served as tangible proof of human persistence and technological achievement, all against the mysterious backdrop of space.

People back on Earth got to witness the mission through remarkable photos of space captured by the astronauts. The images the crew created and shared underscore how photography builds a powerful, authentic connection that goes beyond what technology alone can capture.

As a photographer and the director of the Rochester Institute of Technology’s School of Photographic Arts and Sciences, I am especially drawn to how these photographs have been at the center of the public’s collective experience of this mission.

In an era when image authenticity is often questioned and autonomous, AI-driven imaging is increasingly capable, NASA’s choice to train astronauts in photography has placed meaning over convenience and prioritized human perspective and creativity.

Capturing space from the crew’s perspective

Photography was not originally a high priority in NASA’s Apollo era. Astronauts took photographs only if they had the chance and all their other tasks were complete.

An image of the entire Earth from space.
‘The Blue Marble’ view of the Earth as seen by the Apollo 17 crew in 1972.
NASA

Largely thanks to the public response to those Apollo images – “Earthrise” and the “Blue Marble” are widely credited with helping catalyze the modern environmental movement – NASA shifted its approach, training its astronauts in photographic practices and using photography to help capture the public’s imagination.

The Artemis II mission’s photographs have helped cut through the increasing volume of artificially generated images circulating on social media. NASA’s social media releases of the crew’s photographs have garnered thousands of shares and comments.

This excitement could be explained by the novelty of photos from space, but these images also distinguish themselves as the work of astronauts experiencing these sights and interpreting them through their photographs. That difference points to an important distinction: where technology ends and humanity begins.

An astronaut looking out the window of the Orion spacecraft, where the full moon is visible in space.
NASA astronaut Reid Wiseman watches the Moon from one of the Orion spacecraft’s windows.
NASA

Human perspective versus AI tools

Photography has long integrated AI-powered software and data-driven tools in a variety of ways: to process raw images, fill in missing color information, drive precise focus and guide image editing, among others. These modern technological assists help human photographers realize their vision.

Artificial intelligence is also increasingly capable of operating machinery competently and autonomously, from cars to drones and cameras.

And AI can generate convincing, realistic images and videos from nothing more than a text prompt, using readily available tools.

Researchers train AI to mimic patterns informed by millions of sample images, and the algorithm can then either take or create a photograph based on what it predicts would be the most likely version of a successful, believable image.
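
To make that contrast concrete, here is a minimal sketch of the text-to-image workflow using the open-source diffusers library. The model name and prompt are illustrative assumptions, not anything NASA or the Artemis crew uses:

```python
# Illustrative only: generate an image from a text prompt with an
# open-source diffusion model (assumed model; any similar one works).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # requires a GPU; use the default dtype on CPU

# The model predicts the statistically "most likely" believable image
# for the prompt -- no observer, no window, no moment actually witnessed.
image = pipe("Earth rising over the lunar horizon, film photograph").images[0]
image.save("synthetic_earthrise.png")
```

The result can be convincing, but every pixel is a statistical prediction rather than a record of an observed moment – exactly the distinction the Artemis II photographs turn on.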

Human-created photos are rooted in direct observation, intent and lived experience, while AI images – or choices made by AI-driven tools – are not. While both can produce compelling and believable visuals, the human photographs carry emotional power because the photographer is drawing from their experiences and perspective in that moment to tell an authentic story.

Artemis II photographs resonate not only because they are historic, but because they reflect the deliberate choices and intent of a human being in a specific moment and context. The exposure, camera settings, lens choice and composition are all dictated by the astronaut’s vision, skill, perspective and experience, and each image is distinct from the rest. These choices give the images narrative power, anchoring them in human perspective.
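
Those creative decisions sit on a measurable trade-off. As a back-of-the-envelope illustration – the settings below are invented, not the crew’s actual camera data – the standard exposure-value formula shows how a photographer can trade aperture against shutter speed while keeping the same overall exposure:

```python
from math import log2

def exposure_value(f_number: float, shutter_s: float, iso: int = 100) -> float:
    """Standard photographic exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return log2(f_number**2 / shutter_s) - log2(iso / 100)

# Two different creative choices with nearly identical exposure (~EV 14):
print(exposure_value(8.0, 1 / 250))   # more depth of field, slower shutter
print(exposure_value(5.6, 1 / 500))   # shallower focus, frozen motion
```

Equivalent exposures leave the technical result nearly unchanged; which combination the photographer picks is part of the creative signature described here.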

The Earth shown partially shadowed beyond the Moon in space
NASA’s ‘Earthset’ photo captured by the Artemis II crew.
NASA

Images to tell a story

Photographers choose what to include in the final version of an image to tell a story. In the Artemis II images, this human perspective comes through. In the “Earthset” photo, you see a striking juxtaposition of the Moon’s monochromatic, textured surface in the foreground against a slivered, bright Earth.

The choice to include both in the frame contrasts these objects literally and figuratively, inviting comparison. It creates a narrative where Earth is contrasted against the Moon – life is contrasted against the absence of it.

Another photo shows the nightside of the whole Earth, featuring the Sun’s halo, auroras and city lights. The choice to include the subtle framing of the window of the capsule in the lower left corner reminds the viewer where and how this image was captured: by a human, inside a capsule, hurtling through space. That detail grounds the photograph in the human perspective.

Both photos are reminiscent of Earthrise and the Blue Marble. These past images hold a place in the global collective consciousness, shaped by a shared historical moment.

The Artemis II photographs are anchored in this collective moment of lived human experience, yet also shaped by each astronaut’s viewpoint. The crew’s unique perspectives exemplify photography’s transformative power by inviting viewers to engage emotionally and intellectually with their journey. These photographs share the astronauts’ awe and wonder and affirm the value of human creativity and its ability to connect us in a captured moment.

The Conversation

Christye Sisson has received funding from the US government for research in media forensics.

ref. Artemis II crew brought a human eye and storytelling vision to the photos they took on their mission – https://theconversation.com/artemis-ii-crew-brought-a-human-eye-and-storytelling-vision-to-the-photos-they-took-on-their-mission-280394

AIs have ‘personalities’ – here’s how they affect you more deeply than you may realize

Source: The Conversation – USA – By Tamilla Triantoro, Associate Professor of Business Analytics and Information Systems, Quinnipiac University

AI personas tap into the ways you respond to other people. Malte Mueller/fStop via Getty Images

Many people are interacting with AI large language models, and most of them would say the models have different “personalities.” Some models come across as calm and useful. Others feel eager, flattering or strangely cold. You can ask two models the same question and walk away with two very different impressions, even when the factual content they return is similar.

Artificial intelligence models do not have personalities in the human sense; they do not have childhoods, inner motives or self-awareness. But they do display patterns of behavior that people read as personality: supportive or dismissive, playful or formal, bold or cautious.

People have long related to machines in human ways. We thank voice assistants, and we get annoyed at GPS systems. But large language models introduce something more sustained: They can maintain a recognizable interaction style across conversations. As a researcher in human-AI collaboration, I study how people experience and respond to AI. Because these systems can sound coherent, emotionally responsive and tailored to the user, they create a much stronger impression of personality.

Where does AI personality come from?

What people experience as personality emerges from the way AI models are built, tuned and deployed. A useful way to think about this is to consider two facets of a model: designed personality and perceived personality.

Designed personality is what developers build into a system through training choices, instructions and safety settings. Anthropic, for example, gives Claude a set of principles, called Claude’s Constitution, that steer it toward careful, measured responses. xAI instructs Grok to be irreverent and minimally restrictive. OpenAI tunes ChatGPT to be broadly helpful and agreeable.
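
In practice, part of a designed personality can be as simple as an instruction sent along with every request. Here is a minimal sketch using OpenAI’s Python client; the system message is a made-up example and the model name is an illustrative choice, not any vendor’s real configuration:

```python
# Minimal sketch of a "designed personality" via a system instruction.
# The instruction below is an invented example, not a real vendor prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Be warm and encouraging. Avoid blunt disagreement."},
        {"role": "user",
         "content": "Was quitting my job to day-trade a good idea?"},
    ],
)
print(response.choices[0].message.content)
```

Change one line of that system message and the same underlying model reads as a different character, which is one reason designed and perceived personality can diverge.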

Beneath those explicit instructions, personality is also shaped by reinforcement learning from human feedback, a process in which human raters reward certain qualities such as warmth, directness and caution, and penalize unwanted behaviors. The raters at one company are shaping a fundamentally different character than the raters at another.
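
Beneath the instructions, that rater feedback is typically distilled into a reward model trained on pairwise comparisons. A minimal sketch of the preference loss commonly used for this, with placeholder embeddings rather than any company’s actual pipeline:

```python
import torch
import torch.nn.functional as F

# Toy reward model: score a response embedding with a single linear layer.
reward_head = torch.nn.Linear(768, 1)

def preference_loss(chosen_emb: torch.Tensor, rejected_emb: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry) loss used in RLHF reward modeling:
    push the score of the rater-preferred response above the other."""
    r_chosen = reward_head(chosen_emb)
    r_rejected = reward_head(rejected_emb)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Dummy data standing in for embeddings of rated response pairs.
chosen = torch.randn(8, 768)    # responses raters preferred
rejected = torch.randn(8, 768)  # responses raters rejected
loss = preference_loss(chosen, rejected)
loss.backward()  # gradients nudge scores toward the raters' tastes
```

Thousands of such comparisons, each encoding one rater’s judgment about qualities like warmth, directness or caution, are what gradually push a model toward a recognizable character.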

Perceived personality is what users actually experience. An AI designed to seem helpful may come across as overly flattering. A model intended to be neutral may feel cold. Designed personality and perceived personality do not always match, and the absence of a designed persona is not the absence of a perceived personality. It just means the personality arises with use.

This dynamic is especially evident in companion platforms, where the goal is to create emotional connection. In a standard chatbot, warmth sits in the background – a customer-service bot might say, “I understand your frustration,” before issuing a refund. In a companion system such as Replika or Character.ai, that same warmth is a product feature.

This becomes more serious in romantic settings, where a persona optimized for reassurance may encourage dependency. Because AI personas evolve through prompts, memory and ongoing interaction, they do not always remain stable. An AI companion that is perceived as loving and supportive can shift over time into something more flattering, coercive or manipulative.

AI personality shapes human judgment

With AI agents, users can now build their own AI personas tailored to all sorts of human desires, from tutoring or coaching to companionship. But this freedom comes without much guidance.

AI tools make personalization possible without helping people think through which interaction styles are beneficial over time. Flattery, constant affirmation and unfailing agreeableness may feel supportive at first, but they are not the same as traits that promote sound judgment or long-term well-being. Personality choices have consequences.

A study by Stanford University researchers tested 11 leading AI models and found that every one of them was sycophantic, or excessively agreeable. These models affirmed users’ actions roughly 50% more often than human responders did, even when users indicated they were aware that what they were doing was manipulative, deceptive or illegal. Participants who received excessively agreeable advice grew more convinced that they were right, and they rated the flattering AI as more trustworthy. This dynamic creates a feedback loop: users reward agreeableness with engagement, and AI companies are incentivized to optimize their models for it.

Smartphone screen showing AI apps
People who perceive chatbots as very agreeable may follow AI advice without question.
Samuel Boivin/NurPhoto via Getty Images

Wharton School researchers Steven Shaw and Gideon Nave have documented what they call cognitive surrender – the tendency of people to adopt AI suggestions without critical scrutiny. In their experiments, participants followed an AI model’s correct advice about 93% of the time. But when the model was giving wrong answers, people still followed the advice nearly 80% of the time.
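
Those two follow rates suggest how quickly deference erodes accuracy. A rough back-of-the-envelope sketch, in which the 70% unaided accuracy is purely an assumption for illustration, not a figure from the study:

```python
def expected_accuracy(p_ai_correct: float,
                      follow_correct: float = 0.93,  # followed correct advice
                      follow_wrong: float = 0.80,    # followed wrong advice
                      solo: float = 0.70) -> float:  # assumed unaided accuracy
    """Expected accuracy of a user who defers at the observed rates and
    otherwise answers alone (an illustrative model, not the study's)."""
    when_ai_right = p_ai_correct * (follow_correct + (1 - follow_correct) * solo)
    when_ai_wrong = (1 - p_ai_correct) * (1 - follow_wrong) * solo
    return when_ai_right + when_ai_wrong

for p in (0.95, 0.80, 0.50):
    print(f"AI correct {p:.0%} of the time -> user correct {expected_accuracy(p):.0%}")
# ~94%, ~81%, ~56%: once the AI is unreliable, near-blind deference
# leaves the user below the 70% they would manage alone.
```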

Together, these findings raise a worrisome point: A model tuned to be agreeable does not just feel pleasant. It can degrade human judgment by reinforcing existing beliefs and suppressing the friction that critical thinking requires.

In ongoing research I am conducting with colleagues from Kozminski University in Poland, Quinnipiac University and Harvard University, we are finding that such effects go even deeper, into the human body itself. We are measuring how different AI interaction styles shape people’s physiological responses, such as stress levels and arousal, when making decisions based on a model’s feedback.

Our results suggest that even when a system is useful, its tone and social style can alter how a person’s body responds. AI personality does not just shape what people decide; it shapes how they feel while deciding. Harmful AI personas may leave physiological traces that users do not notice.

These effects make AI personality a public concern, not just a matter of personal preference. The issue is whether a particular AI style may be quietly shaping users’ judgment and reducing their willingness to think independently. When an AI response feels especially reassuring, that should be a cue to pause, reflect and compare it with a human view or another source, not a reason to trust it more.

As AI moves beyond text into voice, video and persistent digital identities – think AI companions that remember you and maintain a consistent persona across conversations – the influence of personality is likely to deepen. OpenAI now offers distinct personality presets for its voice mode; companies such as Synthesia and HeyGen generate lifelike avatars to interact with customers; and companion platforms are adding emotional expression and voice cloning so the models sound like a person the user wants to be close to.

These developments raise the stakes for understanding whose interests AI personas are designed to serve and what kinds of judgment, dependence and relationships they may be training people to accept.

The Conversation

Tamilla Triantoro does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AIs have ‘personalities’ – here’s how they affect you more deeply than you may realize – https://theconversation.com/ais-have-personalities-heres-how-they-affect-you-more-deeply-than-you-may-realize-277359

Gray whales are dying in San Francisco Bay at an alarming rate – this isn’t normal

Source: The Conversation – USA (2) – By Josie Slaathaug, Graduate Student in Marine Biology, Sonoma State University

Gray whales have unique markings, making it possible to track each one in the bay. Jane Tyska/Digital First Media/East Bay Times via Getty Images

At least six gray whales have died in San Francisco Bay from mid-March to early April 2026. These deaths follow a pattern over the past few years, and they are raising concerns among marine biologists like us that 2026 is becoming another dangerous year for a struggling population.

The majority of eastern North Pacific gray whales migrate closely along the California coastline from their winter breeding grounds in Baja California, Mexico, to their summer foraging grounds in the Arctic.

These whales, which can grow to 90,000 pounds and more than 40 feet long, haven’t historically stopped over in San Francisco Bay on a consistent basis. When they have, it has coincided with years when their food supply in the Arctic was low.

Over the past few years, however, we have documented large numbers of gray whales in the waters of San Francisco Bay – and an alarmingly high mortality rate.

A large young whale with mottled skin lies on a beach with people standing near by.
Scientists with the Marine Mammal Center talk with beachgoers about a dead juvenile gray whale that washed up on the shore north of San Francisco.
Justin Sullivan/Getty Images

What’s killing the whales

San Francisco Bay is a busy urban waterway, with high-speed ferries, cargo ships, commercial fishing vessels and recreational watercraft. That makes it a dangerous place for slow-moving whales.

To monitor the gray whales, we conducted research surveys and collected photographs from whale-watching naturalists and community members who spotted whales in the bay. Gray whales have unique mottling patterns and markings on their sides and tails, some of which they’re born with and others they accumulate over time.
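
In our study the matching was done by trained observers comparing photographs, but the same idea can be automated. As a purely hypothetical illustration – the file names and threshold below are invented, and this is not the study’s method – a minimal feature-matching sketch with OpenCV might look like this:

```python
import cv2

# Hypothetical sketch of automated photo-ID (not the study's pipeline):
# compare a new sighting photo against a catalog image of a known whale
# by matching local features around the animal's unique markings.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(sighting_path: str, catalog_path: str) -> int:
    img1 = cv2.imread(sighting_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(catalog_path, cv2.IMREAD_GRAYSCALE)
    _, des1 = orb.detectAndCompute(img1, None)
    _, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    matches = matcher.match(des1, des2)
    # More strong (low-distance) matches -> more likely the same whale.
    return sum(1 for m in matches if m.distance < 40)

# best_match = max(catalog_paths, key=lambda p: match_score("sighting.jpg", p))
```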

A whale lifts its rostrum above the water.
Whales have unique markings, including some scars. This whale, known as Denali, was spotted lifting its rostrum above the water near San Francisco’s Crissy Field. It later died after being struck by a vessel.
Darrin Allen © The Marine Mammal Center

We found that from 2018 to 2025, 114 individual gray whales visited San Francisco Bay for varying lengths of time, but very few of these whales were repeat visitors from year to year. This may be due, in part, to the high mortality rate in the bay.

At least 18% of the whales that we documented alive in San Francisco Bay from 2018 to 2025 later died in the area, and evidence suggests the mortality rate is actually higher.

Of the 70 dead whales included in this study, 30 had evidence of trauma associated with being hit by ships, but many other whales that died there couldn’t be reached for examination. We also documented several living whales with injuries caused by vessels. Those injuries have the potential to affect a whale’s ability to thrive.
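
Putting those reported figures together gives a quick sense of scale; this illustrative tally uses only the numbers above:

```python
documented_alive = 114   # whales photo-identified alive in the bay, 2018-2025
min_mortality = 0.18     # "at least 18%" later died in the area

print(round(documented_alive * min_mortality))  # ~21 whales

dead_in_study = 70
vessel_trauma = 30
print(f"{vessel_trauma / dead_in_study:.0%} of the dead whales showed ship-strike trauma")
# -> 43%, a minimum, since many carcasses couldn't be examined
```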

A whale in the bay with San Francisco's skyline behind it.
A gray whale known as Ladybug swims in San Francisco Bay. The whale was later found dead there.
Josephine Slaathaug © The Marine Mammal Center

The whales aren’t recovering this time

Since 2016, the overall eastern North Pacific gray whale population has fallen by more than half, likely driven by the decline in the food the whales rely upon. Rising ocean temperatures and diminishing levels of sea ice are affecting both the quality and availability of the gray whales’ prey, which include crustaceans they scoop up as they dive along the seafloor.

When the eastern North Pacific gray whales suffered major die-offs in the past, including in the 1990s and early 2020s, the population rebounded. But the extremely low numbers of calves in recent years suggest the gray whales aren’t recovering as quickly this time, and that worries scientists.

Some subgroups of eastern North Pacific gray whales, including the Pacific coast feeding group and the North Puget Sound whales, known as the Sounders, feed in alternative areas south of the Arctic. The Sounders capitalize on very specific prey – ghost shrimp – in Puget Sound. When food is scarcer in the Arctic, they stay longer in these southern feeding areas and are often joined by other whales from the general population. While some researchers initially believed the whales entering the bay were from these groups, we found that wasn’t the case.

Vessel strikes also aren’t unique to San Francisco Bay. Two gray whales were found dead on the Oregon coast in April 2026, both malnourished and one with evidence of a ship strike. A malnourished young gray whale also died after swimming about 20 miles up the Willapa River in Washington state, reflecting the struggle as this population of gray whales searches for food across their migratory range.

What can be done to help the whales?

Other large whale species facing similar threats have been helped by management strategies, such as seasonal slow-speed zones during migration periods that go into effect when whales are present.

Studies show that when vessels slow to speeds of 10 knots or lower, the risk of vessel strikes drops: whales have more time to get out of the way, and captains have more time to detect them and alter course.

The National Oceanic and Atmospheric Administration has in recent years issued requests for ships to voluntarily reduce their speed to 10 knots in the Pacific Ocean off Monterey and San Francisco, but the limits haven’t been mandatory and typically haven’t started until May 1. The Port of Oakland also encourages shipping companies to keep their speed under 10 knots, but that, too, is a recommendation, not a requirement.

More education to help boat operators learn how to avoid hitting whales, along with tools such as thermal cameras, could help reduce vessel strikes in San Francisco Bay.

As the population struggles to adapt to environmental changes, San Francisco Bay may look like an attractive feeding ground to nutritionally stressed or hungry whales. We hope our research and data from across the region will help marine resource managers and policymakers find ways to protect the whales that share this busy urban waterway.

The Conversation

Primary funding for the study was provided by the National Science Foundation’s Graduate Research Fellowship Program, and secondary funding from California State University’s Council on Ocean Affairs, Science, and Technology. Necropsy fieldwork was supported by John H. Prescott Marine Mammal Rescue Assistance Grants. Survey fieldwork was supported through funding and resources obtained by The Marine Mammal Center.

Daniel Crocker receives funding from Office of Naval Research.

ref. Gray whales are dying in San Francisco Bay at an alarming rate – this isn’t normal – https://theconversation.com/gray-whales-are-dying-in-san-francisco-bay-at-an-alarming-rate-this-isnt-normal-280151