Without timely treatment, _Bacillus anthracis_ can cause fatal infection. CDC
The bacteria that cause deadly anthrax disease persist in the earth, a place their ancestors preferred over petri dishes and blood-filled tissues.
The bacteria that cause anthrax are called _Bacillus anthracis_. In the soil, they hang out and can form communities around plant roots. They also interact with neighboring organisms, though they’re an admittedly less-than-ideal neighbor to the soil-dwelling amoebae they infect and kill.
As a public health researcher, I am fascinated by how diseases move among people, animals and the environment. When I worked in a state health department, I was surprised to learn how the bacteria that cause anthrax cycle between land and the animals that rely on that land – including people.
Anthrax in the ecosystem
Give these bacteria alkaline soil, calcium and some nitrogen, and they happily subsist in the ground. If the temperature, humidity or acidity is not favorable, these bacteria can also slumber for decades in a spore form – underfoot and forgotten by nearly all except cattle.
Cattle, deer and other large herbivores disturb the abodes of bacteria. They sometimes unintentionally eat anthrax spores along with their food or are exposed to them through a cut. After anthrax spores enter the animal’s body, immune cells known as macrophages pick up these spores for removal. But instead of being destroyed like other intruding pathogens, the spores germinate and multiply.
Once the spores take the form of bacteria, they can also mount an aggressive offensive. Anthrax bacteria can cleave vital proteins with toxins and wreak havoc on their cellular adversaries. Cattle succumb to the bacteria within days if left untreated – sometimes within 48 hours of infection.
Through the cattle’s death, the bacteria are brought back to the earth to vegetate or sporulate once more.
Humans seeding anthrax
People can get caught in the life cycle of _Bacillus anthracis_.
Throughout history, humans and animals have seeded new lands with _Bacillus anthracis_ spores. The spores are hardy travelers: They can survive for over 50 years and are resilient to dehydration, radiation, toxic chemicals and enzymatic degradation.
Most cases of human anthrax result from working with animals – an occupational hazard for tanners, wool sorters and butchers.
Anthrax in people manifests as blisters and dark sores when a person is exposed to the spores through an open wound. When spores are inhaled, symptoms include fever, nausea and chest pain. Very few people ingest the bacteria or spores, but those who do typically get them from eating undercooked meat from an infected animal. Symptoms include vomiting, stomach pain and bloody diarrhea.
Inhalation anthrax is the most deadly type of anthrax. While researchers have estimated that 95% of people with inhalation anthrax die, this is based on historical outbreaks when patients often did not have timely diagnosis or treatment.
The bacteria that cause anthrax are forever associated with weapons that destroy people, overshadowing their ecologically complex role in animals and soils that sustain humanity.
In the soil, they interact with other organisms and plants in ways scientists are only beginning to understand. In animals, they are part of the circle of life and death that maintains populations.
Beneath the ever-expanding footprint of civilization, anthrax bacteria will continue to be inseparable from the earth that humans walk upon.
Hannah Kinzer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Few regions pose as much of an economic conundrum as Pittsburgh.
Is the city and region – once the center of American steelmaking – a paragon of postindustrial transformation, or a left-behind region still struggling to move beyond its industrial past?
Past researchers foretold with uncanny accuracy the problems the region would face if it did not move away from its monolithic dependence on the steel industry. One prognostication, made by two University of Pittsburgh economists in the 1960s, stands out more than others.
Pittsburgh’s rebrand gets a global stage
When, in May 2009, White House Press Secretary Robert Gibbs announced that Pittsburgh would host the G20 Summit of world leaders that fall, the assembled journalists of the White House press corps presumed it was a lighthearted joke before Gibbs jumped into the substance of the day’s briefing.
President Barack Obama announced that Pittsburgh would be the site of the 2009 G20 Summit, a choice meant to showcase the city’s postindustrial transformation. Scott Olson via Getty Images News
That idealized story is based on real change in a region that suffered extraordinary structural decline when a century of dependence on heavy industry imploded in the 1970s. Yet it is a story that needs to be tempered by the chronic poverty and lack of development in many former mill towns of southwestern Pennsylvania, which have not shared in the greater region’s redevelopment. Some communities, including Braddock – ironically, where Andrew Carnegie began his steelmaking empire in the 1870s – remain among the poorest in the nation.
How Pittsburgh reinvented itself
At the beginning of the 1960s, University of Pittsburgh economists Edgar M. Hoover and Ben Chinitz led a multiyear study of Pittsburgh’s regional economy funded by the Ford Foundation, a private foundation that works to advance human welfare. They described the economic study as an “immersion in regional economics.” Their four-volume distillation of all aspects of the Pittsburgh economy foretold the decline the region would face due to the shifting economic geography of the steel industry and Pittsburgh’s extreme lack of industrial diversification – things local leaders commonly saw as strengths.
Their comprehensive work left little doubt about Pittsburgh’s fate if the city stayed its course. But moving away from steel proved far too difficult for regional civic and business leaders, as the region was almost entirely dependent on steel production and related industries. Because real economic change was put off, the collapse of the 1980s was even more painful when it finally arrived.
Hoover and Chinitz’s message applied far beyond Pittsburgh. Pittsburgh may have been an extreme case, but they knew that all U.S. regions needed to learn to adapt in the face of accelerating and inexorable change. At the core of their thesis was the idea that many of the geographic linkages that had long bound certain industries to certain regions – like automotive in Michigan and meatpacking in the Midwest – were weakening. Just as the Pacific Northwest no longer relies on the timber industry, or as coal has failed to sustain prosperity in West Virginia, no region can rely on past dominance in any industry to ensure future prosperity.
Among other shifts they projected, Hoover and Chinitz foresaw that future competition between regions would not rest upon the ability to attract and retain specific industries. Instead, the success of regions would rest on their ability to attract and retain workers, something many regions long took for granted.
Workers and their families value regional amenities, affordability and many other factors that historically had little impact on corporate site selection. Today, the factors that make a region a place where workers want to live and work, like a strong job market, access to a quality education and affordable housing, shape the pattern of growth and decline among and within regions.
For Pittsburghers, whose city had for so long been singularly defined by the production of steel, the idea that industrial competitiveness was not paramount bordered on apostasy.
What other cities can learn from Pittsburgh
Pittsburgh’s transformation is incomplete and ongoing. Looking ahead, history teaches that all U.S. regions should treat any current economic success as temporary, eventually to be eviscerated by changing circumstances. Envisioning a future without steel was once an inconceivable scenario for Pittsburgh.
One of the key challenges Pittsburgh faced after the decline of steel was the significant loss of workers fleeing deindustrialization. At its economic rock bottom in the 1980s, Pittsburgh saw an exodus of young workers who saw their economic futures elsewhere. Those workers took with them their families and their future families, compounding and extending the repercussions of past job destruction. Rebuilding a competitive workforce took the better part of a career, but that rebuilding is in many ways the core of Pittsburgh’s rebound.
Again, success is not spread evenly across the region. Long-depressed communities where Pittsburgh’s new workers want to live, like Lawrenceville and East Liberty, have turned around, but where local amenities are lacking, depressed communities are finding it ever harder to slow decline. Many workers no longer need to live close to their jobs. The location of a major firm or factory is rarely enough to catalyze sustainable and prosperous communities.
It appears we are living in the future foretold by Ben Chinitz and Edgar M. Hoover. The message that workforce is crucial to economic development is now widely accepted in a way it once was not. But workforce advantages, like most competitive advantages that regions have today, are fleeting.
Christopher Briem does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
“I feel that as an American citizen, we all have a great opportunity to be able to improve our life,” the 58-year-old woman explained in an interview I conducted with her in 2025. “Are you willing to put in the work, or are you not?”
Morales, whose name I changed to protect her privacy, was a stay-at-home mom devoted to caring for her large family. After her divorce, she worked at social service agencies and enrolled at a local college. Then her ex-husband stopped paying child support, and she and her eight children faced eviction.
She said she is very grateful for the government benefits she received for the first time, including the Supplemental Nutrition Assistance Program, which helps low-income Americans buy groceries.
Those benefits made it possible for her to keep putting food on the table and remain housed until she earned a college degree and obtained jobs that could pay those bills. Now she assists families dealing with difficult medical decisions, a job that makes her feel she is able to help others through hard times in their lives.
Learning how people think about work
Morales is one of more than 100 Americans I have interviewed for my research on how people think about work and about government assistance. I am currently updating that research, which is how I met Morales. Not all of the participants in these projects received SNAP benefits before or after these interviews.
But among those who had, I found her experience typical: SNAP provided a crucial source of support while they looked for work. With the exception of a few in their late 50s and 60s who faced age discrimination and eventually retired, all persisted until they found another job.
My research highlights that most of the people who get benefits through SNAP and other government programs want to work. And SNAP supports their work ethic.
SNAP benefits, which many people refer to as food stamps, average US$188 per person per month as of 2025, which comes to about $6 a day. About 42 million low-income Americans receive them.
Morales was able to obtain the help she needed, but I also spoke with others who needed help and whose applications were denied. Now the holes in the safety net are growing.
Some provisions in the large tax and budget bill that Congress passed in July 2025 could jeopardize the SNAP benefits for millions of Americans.
For example, it expanded the number of people who will be subjected to a three-month limit on SNAP benefits.
And for the first time, the federal government will no longer cover the full cost of benefits; this will start with the 2028 fiscal year, which begins on Oct. 1, 2027. The big 2025 tax and budget package will also halve the federal government’s share of states’ administrative costs, starting Oct. 1, 2026.
Underlying the new rules is a presumption that SNAP benefits undermine recipients’ work ethic. There are exceptions, of course, but my research and that of others shows that presumption is wrong for most people who receive those benefits.
How SNAP affects a willingness to work
The government first imposed work requirements for most working-age adults to receive food stamps in the 1970s. It has set time limits for most “able-bodied adults without dependents” since 1996, making some exceptions during severe recessions.
Among families that include children and working-age adults without disabilities who receive SNAP benefits, more than 9 in 10 include someone with a job.
These requirements can become counterproductive when people who get SNAP benefits have to miss work, for example, to provide proof of their employment track record because their caseworkers have lost their paperwork.
A real-world SNAP experiment
Economists Jason B. Cook and Chloe N. East noted in a study originally published in 2023 and revised in 2025 that the caseworker an applicant is assigned can affect whether someone’s SNAP application gets approved.
Caseworkers don’t make the approval decisions, but they vary in their diligence in ensuring that applicants answer all the required questions. Applicants who are unlucky in the caseworker they are assigned are less likely to provide all the relevant information, leading to a denial.
Comparing applicants who were randomly assigned to more helpful or less helpful caseworkers, the economists followed what happened to nearly 200,000 SNAP applicants in one state, tracking their employment and earnings for three years whether or not they received SNAP benefits.
If Agriculture Secretary Brooke Rollins is right that SNAP benefits undermine a work ethic, someone who doesn’t receive benefits should be working more than someone who is the same in other ways but does receive benefits. But that’s not what Cook and East found.
The economists found that for people who had previously held steady jobs, those who received SNAP benefits were far more likely to be working again two and three years later than the ones who were denied benefits.
And they were earning more money as well.
They also found that for SNAP applicants who had not worked steadily before applying for benefits, receiving benefits made no difference to their future employment.
In other words, SNAP benefits and similar programs that help people facing economic hardship can make someone more likely to earn income rather than less so. They do this by providing some of the money low-income people need to put food on the table so they can focus on finding a good job.
As Millie Morales put it, “If I don’t have a decent place to eat and sleep and shower and take care of myself, how am I then supposed to go look for a job, or go to a job, or go to school?”
Claudia Strauss has received funding from NSF and the Wenner-Gren Foundation. No current funding.
Across cultures, languages and economic systems, feeling connected to the natural world is consistently linked to living a more hopeful, purposeful and resilient life. nymphoenix/iStock via Getty Images Plus
When life feels overwhelming, many people instinctively turn to nature. A walk in a park. Sitting by the ocean. Watching a sunset. Is this just a pleasant feeling, or is there something deeper at work?
As environmental psychologists based in the U.S. and in Germany, we were part of a team of more than 100 researchers who set out to examine this phenomenon on a global scale and determine how consistent it is around the world.
Across countries as diverse as Brazil, Japan, Nigeria, Germany and Indonesia, we saw a clear pattern: People who felt more connected to nature also reported higher well-being.
Worldwide oneness with nature
Researchers who study people’s relationship with the natural world often use the term “nature connectedness.” This phrase doesn’t simply mean going hiking or visiting a park. Nature connectedness refers to the extent to which people see nature as part of who they are – whether they feel an emotional bond with the natural world and experience a sense of oneness with it.
Someone who has a high degree of nature connectedness might agree with statements like, “My relationship to nature is an important part of who I am.” It reflects identity and meaning, not just exposure.
In a new study, people who had a stronger sense of nature connectedness tended to have a higher degree of mindfulness.
We drew on data collected between 2020 and 2022 from more than 38,000 participants through a large international collaboration that was established to gauge how people responded to the COVID-19 pandemic. Participants came from 75 countries and were mostly in their teens, 20s or 30s. They completed questionnaires that explored the link between people’s bond with nature and several aspects of well-being.
The questionnaires probed people’s sense of purpose in life; their feelings of hope, life satisfaction and optimism; their sense of resilience and ability to cope with stress; and whether they practice mindfulness as they go through everyday life.
Across this large international sample, we found that people who felt more connected to nature consistently reported higher levels of well-being and mindfulness. This was true not just for feeling satisfied with life but also for deeper aspects of flourishing, such as having a sense of direction and meaning. And these associations held even when accounting for age and gender.
Does national context matter?
We also explored whether specific characteristics of a country strengthen the benefits of feeling connected with nature.
For example, we looked at things such as how well countries take care of their air, water systems and ecosystems; whether citizens have equal access to education, democratic participation and other key social and financial resources; and whether cultures tend to prioritize collective well-being over individual priorities. There were some differences, but the main takeaway was clear: The link between nature connectedness and well-being shows up across a wide range of economic, cultural and environmental contexts.
In other words, the psychological benefits of feeling connected to nature do not appear to be limited to wealthy Western nations or specific cultural worldviews.
One reason why feeling a connection with nature may be linked to well-being is that nature connectedness fosters mindfulness – the ability to be present and attentive.
Another possibility is that bonding with nature may also make people more resilient. People who feel connected to something larger than themselves may find it easier to cope with stress and uncertainty. A sense of belonging – even to the natural world – can provide psychological grounding in a world characterized by stressors. There may also be a feedback loop: Feeling better may encourage people to engage more deeply with nature, strengthening the bond over time.
Implications for policy and everyday life
These findings matter beyond academic debates. Around the world, policymakers are increasingly recognizing the links between human health and environmental sustainability. International agreements such as the Convention on Biological Diversity, a landmark treaty signed by 196 countries in 1992, emphasize the importance of restoring humanity’s relationship with nature.
These policy actions seek to protect Earth’s ecosystems, but our results suggest they may also benefit people’s psychological well-being. Similarly, designing cities with accessible green spaces, incorporating nature-based experiences into schools and supporting community engagement with local environments may do more than beautify neighborhoods – they may also help people flourish.
Across cultures, languages and economic systems, feeling connected to the natural world is consistently linked to living a more hopeful, purposeful and resilient life. At a time when mental health challenges are rising globally, reconnecting with nature is not a luxury but a fundamental – and widely shared – human need.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Teens in the U.S. are obtaining medication abortion pills through telehealth, and young people age 18 to 24 are ordering medication abortion at much higher rates than older adults.
Those are the key findings of a new study that my colleagues and I published in the journal JAMA Health Forum.
We examined requests for medication abortion made to an online telemedicine service – one of the few to support people in all 50 states, without age restrictions. We compared average weekly request rates before and after the Supreme Court overturned Roe v. Wade in June 2022, and we examined how request rates varied across three age groups (15-17, 18-24 and 25-49) and by the severity of state-level abortion restrictions.
After Roe was overturned, researchers expected the number of abortions across the U.S. to fall. Intuitively, this makes sense, as most states have at least one law substantially restricting abortion services, which limits access at a clinic. Instead, abortion numbers nationwide have held steady or even risen.
The main reason for this is the steep rise in medication abortion services through telehealth, which has expanded access for tens of thousands of people. As of early 2025, an estimated 1 in 4 abortions are done via telehealth. Until now, research and media attention have largely focused on this phenomenon among adults rather than among teenagers.
Why it matters
Understanding this trend among adolescents is important because minors – or teenagers under 18 – face a unique legal situation when it comes to abortion.
More than 7 million teenage girls age 13 to 17 live in a state with an abortion ban, and the legal landscape is quickly changing for teens.
In most states, adolescents seeking abortion services must navigate parental involvement laws, which require a minor to obtain consent for, or notify a parent of, their abortion. Such laws make it difficult or even impossible for many teens under 18 to obtain care, even in states like Massachusetts or Pennsylvania that have moved to protect abortion access.
In some cases, teens seek judicial bypass services, which help them circumvent the parental involvement process. In addition to legal barriers, teens who seek abortion may already face stigma around teen pregnancy and sex, likely lack reliable access to a car – or may not even have a driver’s license – and probably don’t have US$600 or more on hand to pay for an abortion at a clinic.
To circumvent these barriers, minors are bypassing parental involvement requirements and requesting telehealth medication abortion at higher rates in states with parental involvement laws, compared with their counterparts in states with more liberal abortion access.
This trend matters because it may be evidence of how large a barrier parental involvement laws pose. It may also reflect that states with parental involvement laws tend to have additional policies restricting abortion – such as mandatory waiting periods or gestational bans – meaning minors live in an even more restrictive policy context than adults.
What still isn’t known
More research is needed to understand how and why teens are turning to online providers. Findings will help clinicians and advocates support adolescents who are ordering telehealth medication abortion online.
There are some very real legal risks involved for any teenager ordering pills online, and young people have been criminalized for taking abortion pills ordered from online sources.
Furthermore, anti-abortion prosecutors and lawmakers frequently target teens. For example, Idaho has become notorious for passing an “abortion trafficking” law, which makes it illegal to help minors access abortion.
At the federal level, attempted revisions to the Food and Drug Administration’s approval of the abortion drug mifepristone have explicitly tried to ban access for minors, and federal officials continue to spread misinformation about the safety of medication abortion for teens.
The Research Brief is a short take on interesting academic work.
Dana Johnson receives funding from the National Institutes of Health, the Society of Family Planning Research Fund, and the UW-Madison Collaborative for Reproductive Equity. She also serves on the Board of Directors for Jane’s Due Process.
Laura D. Lindberg is affiliated with Youth Reproductive Equity and Power to Decide
As of July 2026, graduate degree programs in nursing, public health, social work, public policy and more will no longer be defined as professional degrees by the Department of Education.
The change limits how much federal financial aid students in those programs can qualify for under new borrowing limits set by the big tax and spending cuts bill passed by Congress in 2025.
The Department of Education said excluding these degrees from the professional degree classification is solely an “internal definition” and “not a value judgment about the importance of (these) programs.” The department argues these changes will push some graduate programs to reduce their tuition costs.
Every day, survivors of domestic violence rely on a care network built and maintained by a system of nurses, forensic nurse examiners, social workers, therapists and emergency shelter managers. Many of these jobs require graduate training that comes with substantial education costs. To afford these degrees, students often rely on federal financial aid.
The status change will cut the amount of lifetime federal aid students in these programs can receive by about half relative to students in professional programs. In combination with ongoing federal funding cuts, the change threatens to destabilize an already strained social service system.
We are faculty and student researchers at the University of Denver’s Systems, Housing, and Anti-violence Policy Evaluation Lab. We are alarmed at what the status change means for social service providers, especially those serving survivors of domestic and sexual violence.
Excluding programs that prepare individuals to work with survivors of domestic violence from the professional degree designation risks discouraging entry into these professions nationwide. Fewer people entering the profession will impact both the quality and availability of care for those who rely on these services. Moreover, increasing the amount of private debt students will take on to complete these degrees will have lasting consequences.
Professional degree classification and loans
Under the new rules going into effect this summer, graduate student borrowers face annual loan limits and lifetime caps on total borrowing for federal student aid.
In Colorado, and elsewhere, the cost of graduate education often exceeds what students can pay without borrowing. The new cap of $20,500 per year for students in nonprofessional graduate degree programs is far less than the total cost of attendance at major Colorado universities.
At the University of Colorado Boulder, annual costs can top $38,000 including food, housing, books and transportation for in-state students with a full-time credit load. Tuition and fees alone cost about $16,000.
For students who cannot pursue these degrees without adequate financial aid, this policy will create barriers to entering the field. Others will be saddled with private debt that lacks the protections and favorable borrowing terms of federal loans.
Graduate students lack comparable need-based grant programs and instead rely largely on direct, unsubsidized loans and Grad PLUS loans, which cover educational expenses not met by other financial aid, such as food, housing and books. But the Grad PLUS loan program is set to end for new borrowers on July 1, 2026, further tightening access to advanced degrees.
People seeking undergraduate degrees can apply for income-based financial aid through the Free Application for Federal Student Aid program, known as FAFSA. Royalty-free/Getty Images
Impacts on long-term labor force
Removing these programs from the list of professional status degrees that qualify for higher loans delivers both a symbolic and financial blow to the essential services that support survivors of domestic violence.
Denise Smith, assistant professor at the University of Colorado Anschutz College of Nursing, told Denver7 the policy leaves nurses feeling devalued. She argues that this reinterpretation of the professional degree definition could reduce growth in the nursing profession, with long-term impacts on patients’ access to care.
Victim advocacy roles such as shelter managers and housing navigators, which sometimes require a graduate degree, are already chronically underpaid. The median annual salary for social workers nationwide is about $61,000.
In fields that rely on moral wages – compensating poorly paid staff with the intrinsic satisfaction of helping those in need – recognition matters. The work done by graduates with these degrees has not changed, and the skills these workers bring to domestic and sexual violence response programs are still vital.
According to a 2025 survey by NO MORE, an advocacy coalition supporting victims of domestic violence, 80% of organizations in the sexual and domestic violence sector in the U.S. have experienced service disruptions due to federal funding instability. A multistate survey in 2021 of domestic violence programs found that 90% reported high staff turnover due to inadequate funding and the lack of livable wages. State coalitions against domestic violence say employees who remain at these jobs often juggle multiple roles and face substantial burnout.
Trauma-informed counseling in shelters and community programs is critical for survivors’ recovery and long-term well-being. However, long waits to access shelter or see a mental health professional, along with workforce shortages, already limit access. A shortage of forensic nurses to conduct sexual assault exams further threatens survivor safety, especially amid nationwide nursing shortages.
In rural and underserved areas, advanced practice nurses – those with advanced clinical practice and education experience – are often the only consistent providers of care for the local population. Reducing support for nurse training puts entire communities at risk and weakens vital services, including documentation of abuse that can be essential for domestic violence legal proceedings.
These combined challenges highlight the fragility of the system that supports survivors. Without continued investment in training and recognition for these professionals, the network that provides safety and support will be weakened even further.
Kaitlyn M. Sims receives funding from the Wisconsin Department of Children and Families, the Denver Basic Income Project, the Arnold Ventures Foundation, and the Institute for Humane Studies.
Kaelyn Lara and Leslie Carvalho do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Emily Hodgson Anderson, Professor of English and Dean of Undergraduate Education, USC Dornsife College of Letters, Arts and Sciences
In February 2023, a few months after the launch of ChatGPT, Vanderbilt University sent an email to its student body in the wake of a fatal campus shooting at Michigan State.
“The recent Michigan shootings are a tragic reminder of the importance of taking care of each other,” the email read in part. In tiny type at the bottom of the message, a disclaimer appeared: “paraphrased from OpenAI’s ChatGPT.”
Students immediately objected.
“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” one senior wrote.
A Vanderbilt apology email quickly followed. The university launched a professionalism and ethics investigation. One associate dean couched the misstep as a result of learning pains tied to the adoption of new technology.
Chatbots have spawned a host of ethical questions about writing assistance for teachers, students and authors.
But similar debates about ghostwriting have been taking place for over a century, revealing a persistent discomfort with the idea that the words we read might not belong to the person whose name is attached to them.
Outsourcing authorship
Ghostwriting, a paid arrangement in which one person writes under another’s name, has existed for over a century.
The term seems to have first appeared in the English language in a 1908 newspaper article, which I encountered while researching my forthcoming book, “Ghostwriting: A Secret History, from God to A.I.” The story appeared in the Daily Star, in Lincoln, Nebraska, and describes an anonymous writer who earned US$5,000 to help a high-society woman write a book.
Today, ghostwriting usually involves collaborations between professional writers and celebrities or professionals who otherwise wouldn’t have the time, skill or connections to write a book.
On publication of the manuscript, the ghostwriter is typically named, albeit obliquely – perhaps identified as a friend or consultant in the acknowledgments section. In some instances, the ghostwriter’s name appears alongside the credited author’s on the cover. Either way, the client assumes ownership of the ghostwriter’s work.
An ethical gray area
And yet when I type “the practice of one person writing in another person’s name” into Google, the search engine doesn’t spit out “ghostwriting.”
My first hit is “pseudonym” or “alias.” “Plagiarism,” “libel” and “slander” aren’t far behind. A 1953 article titled “Ghost Writing and History” that appeared in The American Scholar also points out that in the mid-20th century, “forgery” – falsely imitating another’s work with the intent to deceive – and “ghostwriting” could be used interchangeably by scholars.
In other words, even when consensual and compensated, ghostwriting has some relatives that are ethically suspect. And maybe that’s why many clients obscure the fact that they’ve used a ghostwriter, and why responses to ghostwritten works often reflect uneasiness with the practice.
“You should be ashamed,” read one social media post, written in response to Millie Bobby Brown’s 2023 debut novel, which she co-wrote with a ghostwriter. “[The ghostwriter’s] name should be on the cover. She was the one who actually wrote the book.”
The discomfort goes both ways: “I feel so guilty and ashamed whenever I use a ghostwriter now because I feel people will think I’m lying,” an anonymous poster on Reddit admitted.
Both the criticism and self-flagellation imply that the act of claiming another person’s words can render these words deceitful, even if the words have been paid for and the content is true.
“I meant to try (to write the book myself),” Goldberg writes. “And when it turned out I couldn’t quite pull it off … I looked for help.”
Goldberg frames the assistance of ghostwriting as something she deserved after overcoming obstacles as a Black woman. But Goldberg also has financial resources available that others looking for writing assistance usually don’t. High-end ghostwriters collect in the mid-six figures for their services; Prince Harry’s ghostwriter, J.R. Moehringer, supposedly scored a $1 million advance.
Cue chatbots. Generative AI promises to be the ghostwriter for the masses, so much so that ghostwriter Josh Lisec explained to me how, in the future, ghostwriting will need to be marketed as a boutique service for elites if it is to survive.
Naming names
Whether you’re paying for a ghostwriter or using a free chatbot, “assistance” or “collaboration” on intellectual and artistic work is not automatically unethical.
Editors have long made a career out of helping authors shape their writing. Visual artists have long employed studio assistants. Television shows are written collaboratively in writers’ rooms.
And yet, accepting assistance on intellectual or artistic work can raise legitimate questions, particularly with regard to how that assistance is acknowledged and how much of it can be accepted while still calling a project “ours.”
In the late 19th century, for example, one sculptor went to court to rebut a claim that his assistant – whom the press referred to as a “ghost” – had completed sculptures for which the sculptor took credit. The judge announced that an artist could accept, with integrity, a certain amount of mechanical assistance. But he added that there was a threshold when artistic assistance became “dishonest.” The judge made the accused sculptor craft a bust in real time to prove his skill.
French sculptor Auguste Rodin observes his assistants as they make plaster casts of his works. Corbis/Getty Images
Similarly, most educators find it more ethical when their students turn to ChatGPT for editing assistance but much less so when they use it to generate a document from scratch.
The same policies that govern appropriate AI use also come up in ghostwriting contracts. The ghostwriter typically signs a “warranty of originality,” promising the author that the work has been plagiarism-checked – via platforms such as iThenticate – and fact-checked.
When inaccuracies do crop up, ghostwriters often take the fall.
Former Department of Homeland Security Secretary Kristi Noem blamed her ghostwriter for indicating in her memoir that she had met North Korean dictator Kim Jong Un. Physician David Agus, who teaches at the University of Southern California Keck School of Medicine, held his ghostwriter responsible for the many instances of plagiarism that were identified in his popular science books.
Ghostwriters willingly provide assistance and accept responsibility for the originality of what they write. Scholars have permission to use generative AI, provided they properly cite its use.
And yet when Vanderbilt administrators advertised that their email had been written with the assistance of ChatGPT, students and faculty pushed back.
University policies and book contracts may offer veils of legitimacy and shields from legal liability. But in the end, readers still seem to want the words they’re reading to come from the mind of the person whose name is on the byline.
Emily Hodgson Anderson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Iain Boyd, Director of the Center for National Security Initiatives and Professor of Aerospace Engineering Sciences, University of Colorado Boulder
Iran launched two missiles, possibly modified versions of this Khorramshahr ballistic missile, at the island of Diego Garcia. Iranian Defense Ministry via AP
Iran fired two ballistic missiles on March 20, 2026, at the Indian Ocean island of Diego Garcia, which hosts a strategically important joint U.S.-U.K. military base, according to U.S., U.K. and Israeli officials. One missile broke apart during flight, and the other appears to have been destroyed by U.S. missile defenses.
Diego Garcia is about 2,500 miles (4,000 kilometers) from Iran – roughly twice the maximum range Iran has declared for its ballistic missiles. Parts of Western Europe, Asia and Africa lie within a 2,500-mile (4,000-km) radius of Iran, raising concerns about the vulnerability of these areas.
However, there’s no evidence that Iran has developed a new type of missile or that it can otherwise hit targets at the longer range. Iran most likely modified an existing type of missile, but increasing a missile’s range poses significant challenges.
Ballistic missile basics
A ballistic missile is launched on a rocket and, after separating from it, flies mostly under the influence of gravity to its destination. The name refers to the characteristic arc of projectiles whose trajectories are largely shaped by gravity. The range of these missiles is determined by the size of the rocket.
Short-range ballistic missiles can fly about 300 to 600 miles (500 to 1,000 km) and can be launched from mobile trucks. They are used for destroying key defensive infrastructure such as radars.
Medium-range ballistic missiles have ranges of about 600 to 1,800 miles (1,000 to 3,000 km). They are used to attack more strategic targets such as command and control centers where military leaders coordinate operations. Intermediate-range ballistic missiles operate over about 1,800 to 3,400 miles (3,000 to 5,500 km), putting much larger geographical regions at risk.
Intercontinental ballistic missiles, or ICBMs, have ranges of about 3,400 to 6,200 miles (5,500 to 10,000 km) or more, making it possible to strike targets over an enormous area. These very long-range weapons require multiple rocket stages. They fly very high, exiting the atmosphere into space, before arcing back toward Earth.
At the height of the Cold War, both the Soviet Union and the United States had thousands of ICBMs armed with nuclear warheads aimed at each other. Each weapon could obliterate an entire city, and nuclear-armed ICBMs were the basis of mutually assured destruction, in which both sides were deterred from ever using them.
The ranges of Iran’s short-range ballistic missiles – up to 500 miles (800 km) – are insufficient for striking Israel directly, because the closest distance between the two countries is about 550 miles (900 km). However, Iranian-backed militias have deployed these weapons in neighboring countries, such as Lebanon and Syria, and have launched them from there in attacks against Israel.
Iran has also developed medium-range ballistic missiles such as the Shahab-3, Sejjil and Khorramshahr. These missiles have ranges of up to 1,250 miles (2,000 km), which means they can reach Israel directly from Iran.
Harder to go farther
Scaling up from short range to medium range to intermediate requires larger and larger rockets, which presents a number of increasingly difficult technical challenges. Larger rockets create more dynamic vibrations that the missile structure and all its components must survive. This requires an advanced manufacturing and testing infrastructure.
The size of the rocket also determines how much payload the missile can deliver. This challenge is very well-illustrated by the enormous Saturn V rocket that took astronauts to the Moon. Of the total launch mass, less than 2% was delivered to the lunar surface, with propellant taking up almost all the remaining mass.
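The exponential penalty behind these payload numbers can be sketched with the ideal (Tsiolkovsky) rocket equation, a simplified model that ignores drag and gravity losses:

```latex
% Velocity gained, \Delta v, depends on exhaust velocity v_e and
% the ratio of launch mass m_0 to burnout mass m_f.
\Delta v = v_e \ln\frac{m_0}{m_f}
\quad\Longrightarrow\quad
\frac{m_0}{m_f} = e^{\Delta v / v_e}
```

Because the speed a missile needs grows with range, the launch mass required for a fixed payload grows exponentially: with an illustrative exhaust velocity of 2.5 km/s, doubling the required velocity gain from 3 km/s to 6 km/s raises the mass ratio from roughly e^1.2 ≈ 3.3 to e^2.4 ≈ 11. These figures are a back-of-the-envelope illustration, not the parameters of any specific missile.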
ICBMs also have a small payload mass, and this in part explains why militaries more often load them with nuclear warheads than conventional chemical explosives. Pound for pound, nuclear warheads produce much larger effects. It is usually not worth the very high cost of sending an ICBM many thousands of miles just to blow up a single building.
Finally, maintaining control of the missile and hitting a target with sufficient accuracy becomes increasingly difficult as range is extended. Missile navigation systems based on gyroscopes have slight errors that increase with time, and GPS-guided missiles can be jammed.
Limits on Iran’s reach
Iran has successfully launched satellites into space using two-stage rockets, so it is perhaps not too surprising that it has been able to build on those successes to achieve longer ranges for its missiles. The simplest modification to extend a missile’s range is to reduce its payload.
In the Iranian attack on Diego Garcia, however, one of the missiles failed in flight and the other appeared to have been destroyed by U.S. defenses. The missile failure may indicate that Iran is attempting to operate these systems at distances they are not reliably capable of.
The apparent ability of the U.S. to defend against the second missile suggests that the Iranian intermediate-range ballistic missiles do not pose a significant military threat. This conclusion is further supported by the earlier high-volume attack by Iran in December 2025 when it launched hundreds of missiles and drones in a concerted raid against Israel. Almost all were shot down by a combination of Israeli and U.S. defenses.
Surprising but not so threatening
Ultimately, while Iran’s long-range attack on Diego Garcia caught the world off guard, it was likely intended more for its psychological and political effects than for posing a real military threat.
It is worth noting that an additional challenge with fielding intermediate-range ballistic missiles is the cost, which scales with the size of the rocket required. A two-stage rocket that can fly 2,500 miles (4,000 km) is probably one of the most expensive weapons that Iran possesses: It is therefore unlikely to have many of them. When launched in small salvos, these missiles are highly susceptible to the sophisticated air defense systems of the U.S. and its allies.
Still, the attack has certainly gotten the attention of the world and may increase pressure for diplomatic approaches to end the conflict with Iran quickly.
Iain Boyd receives funding from the U.S. Department of Defense and Lockheed Martin Corporation.
Source: The Conversation – USA (3) – By Eranda Jayawickreme, Professor of Psychology & Senior Research Fellow, Program for Leadership and Character, Wake Forest University
Traditional dancers perform in front of the Buddhist Temple of the Tooth, celebrating the Buddhist festival of Esala Perahera, in Kandy, Sri Lanka, on Aug. 8, 2025. Ishara S. Kodikara/AFP via Getty Images
I grew up in Sri Lanka. Much of my adolescence was spent in Kandy, a city built around a lake, set amid the lush tea plantations of the hill country. Its northern shore houses the Temple of the Tooth, one of Buddhism’s most sacred sites. Each year, it came alive with drummers, dancers and elephants parading through the streets in a “perahera,” or procession, honoring the Buddha’s relic.
But Buddhism was only one part of Kandy’s mosaic of religious life. I went to a high school where students from different religious and ethnic backgrounds got along easily. Within walking distance stood Buddhist temples, Christian churches, brightly colored Hindu temples, or “kovils,” and Muslim mosques whose call to prayer echoed across the city multiple times a day. Religious observances filled the calendar; Sri Lanka has more holidays than almost any other country.
Our own home was a glimpse into the island’s diversity. I attended both churches and temples with ease. My mother regularly visited a Hindu kovil with a close friend – though she was Catholic and my father was Buddhist. Her family had emigrated from Kerala, the southwestern tip of India, at the turn of the 20th century. His was Sinhalese, Sri Lanka’s largest ethnic group.
I grew up during Sri Lanka’s civil war, which consumed the country from 1983 to 2009. The brutal conflict was fought between the Sinhalese-majority government and the Liberation Tigers of Tamil Eelam, a separatist group fighting to create an independent state for the Tamil minority. An estimated 80,000-100,000 people lost their lives, and the war divided the country along religious and ethnic lines. Meanwhile, a separate insurrection led by the Janatha Vimukthi Peramuna, a Marxist political party, tore through the southern part of the country in the late 1980s, killing tens of thousands of people.
As a child, I did not possess the vocabulary to describe my own personal experience during this tumultuous time. All I knew was that some people withdrew into their own groups and vilified Sri Lankans who were different from them. Others worked hard to maintain relationships. Ordinary people in extraordinary circumstances could still choose connection over anger.
Those experiences sparked enduring interest in a question that animates my work as a personality psychologist. What allows people to live together across deep religious differences, without sliding into hostility or dehumanization? What helps them commit to pluralism?
Over time, I have come to believe that pluralism requires more than laws and institutions, although such structures are important. It is a moral commitment: a virtue that we each have a responsibility to cultivate.
What pluralism is
The term “pluralism” is often used loosely. Sometimes it simply refers to diversity: people of many religions or ethnicities living in one society.
This can look quite ordinary: a Buddhist teacher attending a Christian colleague’s church wedding out of respect, or a Muslim shopkeeper and a Buddhist neighbor debating over tea, disagreeing sharply, but chatting again the following day. Many of the shopkeepers my family relied on every week were either Tamil or Muslim. One of my tutors – a Muslim man who had worked for the Sri Lankan foreign service in his youth – would sit with me over lessons and then linger to talk with me about politics, culture and the country.
Pluralism lives in these repeated, small acts: decisions to sustain relationships with people whose deepest convictions differ from your own. And it begins with tolerance.
True tolerance cannot exist without disapproval. If I fully agree with your beliefs, I do not need to tolerate them. Tolerance begins when you encounter a view or practice that you find mistaken, troubling or even morally wrong and choose not to interfere with it – because you recognize coercion is not the appropriate response.
Pluralism moves beyond tolerance. It’s not just permitting someone’s beliefs; it’s trying to understand them and getting to know them. This is not the absence of conviction. It is the determination to live out one’s deepest convictions within a shared civic space, and to treat other people not as a threat but as key contributors to the community.
It can help to think about pluralism as a continuum. At one end is hate: “I do not accept your existence.” Next is indifference: “I do not care what you believe.” Indifference is followed by tolerance as patience or forbearance: “I disapprove, but I will not interfere.”
The deeper form of tolerance is based in respect: “I affirm your humanity, even while disagreeing.” Finally, the last space on the spectrum is what scholars label relational or covenantal pluralism: “I’m committed to our connection, even though we disagree.”
Historically, religious conflict often centered on theological disputes: questions about doctrine, salvation or authority. Enlightenment thinkers such as John Locke, Immanuel Kant and Jean-Jacques Rousseau grappled with a shared question: How can diverse societies hold together in the face of such differences?
One answer was that societies needed some form of shared civic framework to bind citizens. Two centuries later, the sociologist Robert Bellah argued that Americans had developed just such a framework: a “civil religion” of shared symbols, narratives and moral commitments – such as the American flag, the Constitution and Memorial Day – that transcended particular faiths while sustaining a sense of common purpose.
Often, though, religious pluralism is less about theological differences themselves. Instead, conflict frequently erupts over social and political differences emerging from foundational values and identities.
Sri Lanka provides vivid examples of this disagreement. Article 9 of the country’s constitution grants Buddhism the “foremost place” among religions. Many religious minorities feel that provision writes a hierarchy into law, granting special privileges to the majority religion.
Or think about the consequences of the devastating 2019 Easter bombings – coordinated attacks on churches and hotels in three Sri Lankan cities by members of the Islamist militant group National Thowheeth Jama’ath.
A relative of a victim of the Easter bombings prays at their burial site in Negombo, Sri Lanka, on April 28, 2019. AP Photo/Manish Swarup
The resulting wave of anti-Muslim sentiment was not really driven by theological differences but by questions about identity, trust and political power. Social media misinformation and opportunistic political rhetoric cast Muslims as outsiders threatening a Sinhala-Buddhist national identity. The question at stake was not which religion was true but who “truly” belonged to the nation.
If societies cannot sustain engagement across differences, shared civic life becomes impossible. This challenge, in my view, is not only institutional but also personal: What habits of mind allow religious pluralism to flourish?
Psychology of disagreement
On a personal level, pluralism begins in a moment of objection. You hear a belief that conflicts with your own. You see a religious symbol you find troubling. You run into a policy grounded in values that you reject. The first reaction is often intuitive and emotional: irritation, aversion, anger, discomfort. Moral psychology suggests that such reactions feel automatic, confirming our sense that our view is the obvious truth.
What matters is what happens next. Some people quickly dismiss ideas they don’t like, shutting down curiosity. Others pause to reflect: asking why they reacted as they did, what the other person might value, and whether broader principles like freedom of conscience or fairness should guide their response.
This is a hard standard to live up to and one which I’ve struggled with myself. In the wake of the Easter bombings, I found myself growing impatient with Sri Lankans who continued to defend the actions of the government, even as it was detaining about 2,000 Muslims, often on thin evidence; banning women’s religious head coverings; and pardoning the ultranationalist monk most associated with anti-Muslim mob violence. I sometimes caught myself doing exactly what I study, reducing complex people to the worst version of their position. I stopped asking what they were trying to protect or what fears were driving their stance.
It took deliberate effort to step back and try to understand their perspective charitably, even while continuing to disagree. I had to reflect on the fact that for Sinhalese Buddhists carrying the memory of decades of Tamil separatist violence, the government’s response in the wake of the bombings could seem like a way to take the country’s security seriously. The tragedy was that this fear of violence was directed at an entire community, rather than the fringe actors who had committed the crime.
A Muslim woman takes part in a remembrance ceremony in front of St. Anthony’s Church in Colombo, Sri Lanka, on May 21, 2019, a month after a series of deadly Easter Sunday blasts. Ishara S. Kodikara/AFP via Getty Images
Reflection does not guarantee tolerance; we may still conclude that a belief is too harmful to accept. But it could also lead to a “principled allowance,” which is what makes tolerance possible: deciding that others have a right to hold or express views we dislike.
From there, the path can diverge again. Some people settle for a minimal “live-and-let-live” coexistence, while others move toward deeper dialogue and cooperation.
In other words, pluralism is not a single decision. It’s a series of steps to uphold a relationship, shaped by virtues such as humility, empathy, patience, fairness and courage. We can strongly disagree with someone but still ask: What does this belief mean to them?
That said, I still wrestle with where the boundaries of pluralism lie. What about when someone’s convictions lead to clear harm to vulnerable people? I do not have a clean answer. Over the years, though, I’ve come to believe that the difficulty of the question is not a reason to abandon the commitment. Committing to pluralism is a sign of character – one that can be strengthened by practicing particular virtues.
Which virtues support pluralism?
One is intellectual humility: recognizing the limits of our knowledge. It does not mean abandoning conviction. It means acknowledging the possibility that we’re wrong.
Studies suggest that intellectual humility is associated with openness to opposing viewpoints, attempting to understand how another person sees the world. When combined with curiosity, it moves beyond strategic tolerance toward fostering genuine relationships.
Another key virtue is empathy – but a specific kind of empathy. As an emotion, empathy can be biased; it may pull us toward people who look like us, feel close to us, or whose suffering resonates with our own experience. Another form of empathy, though, is perspective-taking: trying to understand another person’s thoughts, feelings or point of view. Studies have found that perspective-taking can reduce prejudice against people with different views.
Similarly, the virtue of curiosity can help reframe disagreement. Instead of seeing difference as a threat to our own identity, it becomes an opportunity to learn. Higher levels of curiosity have been found both to increase people’s motivation to learn and to reduce their desire to distance themselves from people with different views.
Pluralism is challenging when emotions run high. That means another virtue it requires is self-regulation, the ability to reflect before reacting. Without it, moral disagreement can quickly descend into condemnation.
Tamil war survivors pray for family members during a commemoration ceremony in Mullaitivu, Sri Lanka, on May 18, 2024. Buddhika Weerasinghe/Getty Images
Finally, pluralism takes courage. People sometimes confuse pluralism with moral relativism: the view that right and wrong are just matters of opinion, with no universal moral foundation. Pluralism doesn’t mean giving up your values, but it requires bravery to discuss them openly with people who strongly disagree.
It is still early, but the emerging picture is consistent with what I observed as a child: that the people around me who maintained friendships across ethnic and religious lines were not people without convictions. They were people who had cultivated specific habits of mind that made that pluralism possible, despite blowback from others within their own community.
Putting it into practice
One practical way to build these habits is to practice what some researchers call an “ideological Turing test.” The rule is straightforward: Before you criticize someone’s position, you first have to explain it so accurately and charitably that they would recognize themselves in your summary. They would say, “Yes, that’s what I believe.”
Doing this well is hard. You have to get curious about what the other person is actually trying to protect, what they fear, what trade-offs they’re willing to live with, and what experiences might have shaped their perspective in the first place. This exercise quietly changes the aim of the conversation: Instead of trying to defeat the other person, you try to understand them.
The process also tends to trigger intellectual humility, because when we make a serious attempt to represent opposing views fairly, we may notice faults in our own thinking. None of this requires agreement, but it does reduce our tendency to caricature the other side.
Pluralism can also be strengthened by reframing our sense of “we.” In polarized environments, “we” tends to shrink until it names only the people who pray, vote and live exactly like us. Pluralism pushes in the opposite direction: It asks us to include fellow citizens whose deepest convictions diverge from our own. Community is a shared civic fate – the responsibilities, institutions and hopes we share, despite enduring disagreement.
Many times over the years, I have thought of a story my father told me, a vivid example of “we.” In 1983, Tamil militants killed 13 government soldiers, and anti-Tamil riots swept across the country. Sinhalese mobs attacked Tamil homes, businesses and neighborhoods in what became known as Black July – days of violence orchestrated by the government that killed thousands of Tamils and displaced many more. The riots are widely regarded as the spark that turned simmering tensions into a full-scale civil war.
A woman holds a portrait of her missing relatives during a protest by Tamils demanding justice for their loved ones near mass graves in Jaffna, Sri Lanka, on July 26, 2025. AFP via Getty Images
My grandparents and uncle were living in Kandy. When violence reached their area, they hid Tamil neighbors in their home, sheltering them from the mobs outside. My father said it was a split-second decision, motivated by the recognition that the people next door were their neighbors rather than members of a different ethnic and religious group.
Their actions required courage and a moral clarity that cut against the chaos of the moment. This clarity doesn’t appear out of nowhere; it emerges from habits practiced long before the moment of crisis arrives.
To build that courage in ourselves, we can also build habits of praise, noticing and naming when others are respectful to people across a divide. Virtues grow where they are socially reinforced. Each person can build accountability by committing with a friend or colleague to one concrete practice of pluralism: asking clarifying questions before responding, summarizing an opposing view before critiquing it, or pausing before posting an incendiary comment online.
Thinking back to my childhood, I remember the evening in 1993 when neighbors gathered outside after news that Sri Lanka’s president at the time, Ranasinghe Premadasa, had been assassinated. We could hear faraway fireworks lit by others who were rejoicing in his passing. And yet we stood together quietly.
The silence of the people around us did not erase our differences; the sound of the fireworks in the distance was a callous reminder of the disagreements that did exist. But to me, our neighbors’ silence affirmed something deeper: that our disagreements did not cancel our shared humanity.
In an era when religious and moral differences often feel like threats to identity, cultivating an individual ethic of pluralism may be one of the most critical civic tasks before us. Pluralism is not who we are by default. But it can be who we become – slowly, deliberately and together.
Eranda Jayawickreme receives funding from the Templeton Religion Trust (grant ID TRT-2024-33487). He is a member of the Labour Party (UK).
Source: The Conversation – USA – By Alejandro Hortal-Sánchez, Visiting Assistant Professor of Philosophy, Wake Forest University; University of North Carolina – Greensboro
A classic example of a nudge is making the healthy choices easier to grab in a cafeteria. Maskot via Getty Images
Twelve-year-old Jaysen Carr died in July 2025. While he swam in Lake Murray, a reservoir a few miles from Columbia, South Carolina, Naegleria fowleri – a rare amoeba found in warm fresh water – entered through his nose, causing a rapidly fatal brain infection.
Each year in the United States, drowning causes roughly 4,500 deaths, while infections from brain-eating amoebas typically number only two or three. Yet the vividness of these rare deaths powerfully shapes how people perceive and respond to risk. After a 2025 amoeba-related death made headlines in Iowa, for example, open-water swimmers began questioning whether lakes were safe, even as health officials emphasized how rare such infections remain.
Is it irrational to avoid swimming in lakes on hot summer days? How rational is it to fear flying? How many people worry about contaminants in their drinking water yet never think twice about skipping sunscreen, despite skin cancer being the most common, and largely preventable, cancer in the United States?
These reactions raise a deeper question: What does it mean to call a response “rational” or “irrational”? These are the kinds of ideas I explore in my research on behavioral public policy: How do the assumptions scientists make about human rationality shape the tools governments use to improve social welfare?
When mistakes aren’t really mistakes
Behavioral economists, following Daniel Kahneman, emphasize how heuristics – the mental shortcuts or rules of thumb people use to make quick decisions – produce systematic biases, or predictable errors in judgment. From this perspective, the biases born of these shortcuts lead people to make choices that do not serve their own interests or stated preferences.
Evolutionary psychologists such as Gerd Gigerenzer instead see those same shortcuts as adaptive responses to uncertainty. Rather than errors, they’re efficient strategies shaped by the environments in which human reasoning actually evolved.
The two perspectives disagree about what counts as rational – and that disagreement matters for policy.
Consider a few familiar examples. Frame the same medical procedure as having a 90% survival rate rather than a 10% mortality rate and patients respond very differently. Set one option as the default – whether in organ donation, retirement savings or privacy settings – and most people stick with it simply because opting out takes effort.
From a behavioral economics perspective, these are clear cases of bias: judgments shaped by framing, vividness or inertia rather than by careful deliberation.
From an evolutionary perspective, however, the picture changes. In complex environments with limited time, information and attention, relying on defaults or whatever feels most vivid or familiar can be an efficient way to decide without becoming overwhelmed. What looks like a mistake when judged against idealized models of rational choice may instead be a sensible response to real-world uncertainty.
This perspective helps explain why small changes in choice environments – nudges such as placing salad bars directly in cafeteria serving lines or listing vegetarian options first on menus – can significantly shift behavior without forcing anyone to choose differently. In other words, nudges work precisely because they align with, not fight against, the shortcuts people already use, making the desired behavior the path of least resistance.
Behavioral economists defend nudges as tools for correcting cognitive biases. Gigerenzer criticizes them as ethically problematic and argues that public policy should emphasize education over subtle choice manipulation.
If human rationality is seen as deeply flawed, nudges appear attractive because they make better decisions easier without demanding reflection.
If, instead, rationality is viewed as adaptive and teachable, policy should focus on strengthening people’s capacity to learn, adapt and decide for themselves.
Rationality isn’t just one thing
From bestselling books such as behavioral economist Dan Ariely’s “Predictably Irrational” to the worldwide expansion of behavioral “nudge” units in government, many contemporary developments suggest that people are poor decision-makers. Struggles with retirement savings, health, weight loss and environmental protection seem to confirm that view.
And yet, as a species, humans have been extraordinarily successful – adapting to diverse environments, building complex societies and accumulating knowledge across generations.
My claim is that this apparent contradiction dissolves once you recognize that rationality is not a single thing. Human beings can be both rational and irrational, depending on the scientific lens in use. From a behavioral economics perspective, many decisions appear biased and suboptimal. From an ecological or evolutionary perspective, those same decisions can look adaptive, efficient and sensible given the environments in which they are made.
At this point, the disagreement is not merely empirical but conceptual. People often assume that “rationality” names a single property of human behavior, when in fact its meaning depends on the scientific framework being applied.
Consider love. In neuroscience, love appears as patterns of brain activity and hormones. In psychology, it is studied through attachment and emotion. In sociology, it takes the form of social bonds and norms.
None of these accounts is wrong – but none captures love in full. I suggest rationality works in much the same way.
The danger arises when one perspective is treated as the whole story. Reducing love entirely to brain chemistry, or rationality entirely to cognitive biases, treats a partial explanation as a complete one. Scientific disciplines illuminate different aspects of complex phenomena, but none has a monopoly on their meaning.
Forgetting this carries a cost: We risk drawing overly narrow conclusions – about human behavior, intelligence or public policy – by mistaking the limits of a single framework for the limits of human rationality itself.
Seen this way, fear of rare brain-eating amoebas, of flying, or of tap water is not simply a failure of reason. Such reactions may appear irrational under one standard yet reflect a form of rationality adapted to uncertainty, vivid impressions and limited information.
What ultimately matters is not labeling people as rational or irrational, but being explicit about which conception of rationality is at work – and why. That choice, in turn, shapes whether public policy aims to nudge behavior, educate citizens or redesign environments so that human reasoning can operate at its best.
Alejandro Hortal-Sánchez does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.