Source: The Conversation – UK – By Vincent Gauci, Professorial Fellow, School of Geography, Earth and Environmental Sciences, University of Birmingham
Satellites circling the Earth have many different functions, including navigation, communications and Earth observation. About 8%-10% of all active satellites are military or “dual use”, serving intelligence or reconnaissance functions as spy satellites.
But it was a climate satellite serving as both spy and “name and shame” police officer in the sky that recently caught the world’s attention when it went quiet.
MethaneSat was developed to spot emission hot spots or plumes of invisible methane pollution from space. Built by the US non-profit the Environmental Defense Fund, with Nasa’s support, it tracked methane leaks from oil and gas sites, farms and landfills across the globe.
These are among the biggest human-caused emission sources. But methane emissions are traditionally hard to spot because they come from so many relatively small point sources or plumes.
This specialist observation satellite was developed and deployed because methane behaves differently from other greenhouse gases. Methane is a potent greenhouse gas: over a 20-year period, it has more than 80 times the warming power of carbon dioxide.
Methane also has a short lifetime. Whereas carbon dioxide stays in the atmosphere for more than 100 years, relying on plant uptake for its removal and conversion into other forms of carbon, methane is broken down in the atmosphere by molecules known as hydroxyl radicals. These are nicknamed “the atmosphere’s detergent” because they effectively remove methane from the atmosphere in less than ten years.
A gas flare at an oil refinery – one of many pinpoint sources of methane emissions. hkhtt hj/Shutterstock
This combination of short lifetime and high global warming potential (a measure of the climate strength of the gas relative to carbon dioxide) makes methane both a problem and an ideal target for reduction. In fact, growth in atmospheric methane is occurring at such a rate that it is placing us dangerously off track from meeting our Paris agreement obligations to stay within 1.5°C of climate warming by 2050 and 2°C by 2100.
Eyes in the sky
But how can we achieve these reductions and what was the role of MethaneSat in seeking to meet this objective?
There are two ways atmospheric methane concentrations can be reduced. A recent and more challenging proposition is that methane is actively removed from the atmosphere.
This is difficult because it relies on technological advances that are at their earliest stages (although growing more trees can go some way to achieving this). The other, more realistic, approach is to reduce emissions and then let atmospheric chemistry do the work of removing the excess methane already in the atmosphere.
The global methane pledge was announced in 2021 at the UN climate summit, Cop26, in Glasgow. This aimed to reduce human-caused methane emissions by 30% on 2020 levels by 2030. More than 150 countries have now signed up to this pledge. If successful, it could reduce warming by up to 0.2°C by 2050. That’s why MethaneSat was so useful.
MethaneSat was fitted with a hyperspectral sensor – which can record sunlight reflected off Earth in hundreds of narrow colour bands across the spectrum, far beyond what our eyes can see – capable of picking up minute concentrations of methane in the air.
This sensor allowed the satellite to spot individual plumes of methane, giving it a crucial role in identifying problem areas. Because these emissions come from dispersed, individual point sources, the satellite was invaluable for intervening in leaks: it permitted identification of those responsible, so they could be held to account and the problem addressed.
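To give a sense of the logic involved (though not of MethaneSat’s actual retrieval, which relies on full spectral inversion of those colour bands), here is a toy sketch of plume-spotting: flag grid cells whose methane column sits well above the regional background. All numbers are invented for illustration.

```python
# Toy plume-spotting sketch - illustrative only, not MethaneSat's algorithm.
import numpy as np

def flag_plumes(column_ppb, threshold_ppb=50.0):
    """Flag cells enhanced well above the regional background."""
    background = np.median(column_ppb)   # crude background estimate
    return column_ppb - background > threshold_ppb

scene = np.full((5, 5), 1900.0)          # ambient methane column, ppb
scene[2, 2] = 2100.0                     # a leak-like hot spot
scene[2, 3] = 2000.0
print(flag_plumes(scene))                # True only at the enhanced cells
```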
No single instrument can now do what MethaneSat did with freely available data. It had high precision, high spatial resolution and, critically, global coverage, and it was particularly useful for identifying plumes in nations that lack the resources for the kind of regional surveys with aircraft-mounted systems that fill the gap in developed regions.
Now that MethaneSat is no longer operational, there are some other tools to identify small anthropogenic emissions sources, but they tend to be regionally focused like the aircraft measurements mentioned.
Other satellites gather similar data but that data sits behind commercial paywalls, whereas MethaneSat data was freely available. Collectively, these drawbacks mean that it’s just going to be that much harder to spot the emissions MethaneSat was so good at tracking.
Vincent Gauci receives funding from the NERC, Spark Climate Solutions, the JABBS Foundation and has received funding from the Royal Society, Defra and the AXA Research Fund.
Zonal pricing would have categorised Britain into distinct zones, each with wholesale electricity prices that reflect how much power is generated locally, and how much demand there is for it. It would have raised prices in areas with lots of demand but low generation, like London, and lowered them where supply outstrips demand, such as in the turbine-rich Scottish Highlands.
This might have caused an immediate increase in the energy bills of already vulnerable households in some high-demand, low-generation areas, such as Tower Hamlets in London and Blackpool in north-west England.
But the idea was to encourage the construction of renewable energy to meet high demand in higher-priced zones, and prompt big electricity consumers to move to where electricity is cheaper. It was also intended to ease the need for new infrastructure to transmit electricity over long distances, like pylons. Australia, Norway and several EU nations already use this method.
The ultimate goal of zonal pricing was to make the price of electricity more accurately reflect generation and transmission costs. However, one thing has significantly inflated electricity prices in recent years, which this pricing method wouldn’t have addressed on its own: gas.
Gas is expensive, even more so since Russia’s invasion of Ukraine. Britain’s electricity system operator brings power plants onto the system to meet demand in order of the lowest to highest marginal costs.
The point at which supply meets demand forms the wholesale price of electricity. Renewable sources, like wind and solar, have zero or very low marginal costs. But most of the time the wholesale price is set by gas plants, because they can readily fill a gap in supply but have high and erratic marginal costs (largely tied to what they pay for fuel).
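That price-setting mechanism, known as merit-order dispatch, can be captured in a few lines. Here is a minimal sketch, using invented plants, capacities and costs rather than real market data:

```python
# Merit-order price formation sketch with invented numbers - not real GB data.

def clearing_price(plants, demand_mw):
    """Dispatch plants cheapest-first; the last plant needed sets the price."""
    supplied = 0.0
    for name, capacity_mw, marginal_cost in sorted(plants, key=lambda p: p[2]):
        supplied += capacity_mw
        if supplied >= demand_mw:
            return marginal_cost   # the marginal plant sets the wholesale price
    raise ValueError("demand exceeds total capacity")

plants = [
    ("wind",    12_000, 0.0),   # near-zero marginal cost
    ("solar",    4_000, 0.0),
    ("nuclear",  5_000, 10.0),  # illustrative GBP/MWh figures
    ("gas",     20_000, 95.0),  # high, fuel-driven marginal cost
]

print(clearing_price(plants, 15_000))  # 0.0  - renewables alone cover demand
print(clearing_price(plants, 30_000))  # 95.0 - gas is needed, so gas sets the price
```

The sketch shows why gas so often sets the price: it is the plant most often needed to close the final gap between supply and demand.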
We need another, cheaper technology to set the wholesale price of electricity. Batteries, which can store electricity over several hours, and options capable of storing energy for longer, such as compressed air and low-carbon hydrogen, could be just the thing.
The idea is simple: batteries can be charged at times when there is a lot of surplus electricity generation (on a bright, windy day, for example) and discharged at times of peak demand (or when the sun doesn’t shine and the wind doesn’t blow). This would mean grid operators (and ultimately, consumers) would not have to pay gas plants to fire up whenever renewable generation falls short.
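That charging logic is simple enough to sketch. Below is a minimal illustration with invented hourly figures, counting how much gas generation a battery avoids:

```python
# Battery dispatch sketch with invented hourly numbers - illustration only.

def gas_needed(renewables_mw, demand_mw, battery_mwh):
    """Greedy dispatch: store surplus, discharge at shortfall, gas fills the rest."""
    stored, gas = 0.0, 0.0
    for gen, load in zip(renewables_mw, demand_mw):
        if gen >= load:                       # surplus: charge the battery
            stored = min(battery_mwh, stored + (gen - load))
        else:                                 # shortfall: discharge, then gas
            shortfall = load - gen
            discharge = min(stored, shortfall)
            stored -= discharge
            gas += shortfall - discharge
    return gas

# A windy morning followed by a calm evening peak:
print(gas_needed([30, 28, 5, 3], [20, 18, 25, 27], battery_mwh=15))  # 29.0
print(gas_needed([30, 28, 5, 3], [20, 18, 25, 27], battery_mwh=0))   # 44.0
```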
Unfortunately, batteries comprised just 6% of Britain’s total electricity capacity in 2024. Investment in energy storage has lagged behind what the government forecasts is necessary to meet its 2030 clean power goals, but it is at least increasing.
Research shows that the more money is invested in batteries, the further the associated costs fall. If used instead of gas to stabilise the grid, energy storage could significantly lower the wholesale cost of the UK’s energy over time and, with the right balance of policies, household bills too. This would require subsidies to cover some of the cost of making and installing batteries, and planning mandates to build new renewables alongside new batteries.
Affordable and fair
The government could also try alternatives to zonal pricing. Wholesale electricity prices could reflect the “strike” price in renewable energy contracts. This is the price at which developers have agreed to build clean electricity generation projects, like wind farms. This would mean that gas no longer sets the wholesale price, but stable, predictable prices agreed years in advance, which would help to regulate the retail costs consumers pay.
Solar arrays installed on farmland in Devon, southern England. Pjhpix/Shutterstock
These types of reforms can help set efficient energy prices, which the government usually talks about as the price needed to encourage investment in new energy technologies. But just because prices are efficient, it doesn’t mean they’re fair. Some households struggle to afford their energy bills even when markets are working efficiently. So, when prices change to encourage cleaner energy, it can hit them harder.
The government should implement new policies and expand eligibility for existing measures to take the burden off energy-poor households. These include social tariffs, which offer discounted rates to vulnerable consumers, and discounts for blocks of electricity use when renewables are generating a lot of it.
This support, combined with increasing investment in energy storage and renewables, will lower the wholesale price of electricity over time – and make energy more affordable (and fair) for everyone.
Anupama Sen has previously received funding from the Quadrature Climate Foundation and Children’s Investment Fund Foundation.
Cassandra Etter-Wenzel and Sam Fankhauser do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Blindness, pneumonia, severe diarrhoea and even death – measles virus infections, especially in children, can have devastating consequences. Fortunately, we have a safe and effective defence. Measles vaccines are estimated to have averted more than 60 million deaths between 2000 and 2023.
But there’s more at stake than just measles itself. Emerging research suggests that the measles vaccination may offer surprising additional health benefits. Children who receive the vaccine have been shown to have a significantly lower risk of infections from diseases unrelated to measles.
One explanation for this broader benefit is the idea of “measles amnesia.” This refers to the ability of the measles virus to erase parts of the body’s immune memory.
Our immune system contains various cells that protect us from infections. Some produce antibodies that neutralise viruses, while others detect and destroy infected cells. Immune memory allows the body to “remember” past infections and mount faster responses in the future.
However, measles infection may reduce the number and diversity of these memory cells – leaving children vulnerable to a wide range of diseases they had previously developed immunity to. In other words, the virus doesn’t just make children ill in the short term, it may also undo years of immune protection.
In one study, researchers found that between 11% and 73% of antibodies targeting other diseases were lost after a measles infection in unvaccinated children. This immune depletion was not observed in children who had received the vaccine, suggesting that vaccination protects against this damaging effect.
This broad loss of immune protection may explain why measles outbreaks are often followed by spikes in other infectious diseases. Ongoing studies are exploring the impact of measles amnesia in regions such as West Africa, where measles and other infections remain widespread.
A vaccine that does more?
Another theory for the vaccine’s broader benefit is known as the “non-specific effect”. Unlike measles amnesia, which explains how the virus weakens immunity, the non-specific effect suggests that the measles vaccine actively strengthens the immune system against a wide range of pathogens.
Recent research has shown that measles vaccination may enhance the function of certain immune cells, making them more effective at fighting off other diseases. Some scientists believe this effect, rather than protection against amnesia alone, could be the primary reason why vaccinated children have better overall health outcomes.
The measles vaccine is a live attenuated vaccine, which means it uses a weakened version of the virus to stimulate a strong immune response. Live vaccines, including the BCG vaccine for tuberculosis, are known to provide broad immune training effects, which may explain this non-specific protection.
Forgotten the dangers
In the 1960s, before widespread vaccination, measles caused around 2.6 million deaths per year. It’s hard to imagine today, but that’s partly the problem.
As measles became rare, society began to forget how serious it is. We forgot how contagious it is (one infected person can spread the virus to up to 90% of nearby unvaccinated people) and we forgot how effective vaccination is (two doses provide more than 90% long-term protection).
And in some circles, this fading memory has been replaced by something more dangerous: mistrust. Misinformation, vaccine myths, and anti-vaccine rhetoric are spreading, just like the virus itself.
So, whether the additional protection offered by the vaccine is due to prevention of immune amnesia, a non-specific immune boost, or both, the takeaway is the same: Vaccinate children against measles. Because when we protect them from measles, we may also be protecting them from so much more.
Antony Black does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Is it true that male animals are dominant over females? Previous studies have often found male-biased power in primates and other mammals.
A new study, investigating physical encounters between members of the same species across 121 primate species (around a quarter of all primate species), found that half of all aggressive contests were between males and females. But males won these contests in only 17% of primate populations, with females dominating in 13% – making it almost as likely for females to dominate males.
The remaining 70% of primate populations showed no clear-cut dominance of one sex over the other. This study may have shown different results to previous research because it assessed individual contests rather than categorising species based on their social structure and physical attributes.
The new study found male dominance, where males have a greater ability to influence the behaviour of the opposite sex, to be prevalent in primate species where the males are much larger than the females. This enables males to gain dominance through physical force or coercion. It was also widespread in species where males have weapons and mate with lots of females.
This is typical of African and Asian monkeys and the great apes, such as gorillas. Weighing in at around 200kg, a silverback male can be twice the size of the females within his troop. Male gorillas also have large canine teeth that can seriously injure or even kill other gorillas.
According to the new study, female power was seen in primate species where females are scarce, where individuals have a single, exclusive sexual partner, and where males and females are similar in size and lack bodily weapons. These are all factors that give females more choice over who to mate with.
Female dominance was also seen in species where fighting with a male was less risky for the dependent offspring of females. For example, some primates “park” their young on their own in nests while foraging, rather than carrying them around. If a mother is holding her baby when she’s attacked, she may submit to protect her young.
Finally, matriarchal societies were common in species that live primarily in trees, which makes it easier to flee an attacker.
Female-dominated societies were most common among lorises, galagos and lemurs. So, contrary to the film Madagascar, where King Julien rules the lemurs, females are, in fact, in charge. In ring-tailed lemurs, females control access to food and mates, and sit at the top of a dominance hierarchy in which males are often at the bottom.
This is also true of bonobos, the closest relatives of humans. Although male bonobos are larger, females form coalitions to overcome the physical power of the males and force them into submission. Such solidarity has also been seen in humans.
Think of how the suffragettes campaigned for women’s rights to vote in the UK. Or more recently, how women demanded new safety measures after Sarah Everard was murdered by Metropolitan Police officer Wayne Couzens in 2021.
Although female dominance has been documented less often in the wider animal kingdom, there are some examples that defy expectations. Spotted hyenas have a matriarchal society where females dominate the clans. They even have a pseudo-penis that they erect to indicate submission to more dominant individuals.
Naked mole rats have a queen that gives birth to all of the young while her offspring find food and defend the nest. The males are subordinate to the queen, but so too are the other females. In fact, the queen bullies the other members of her colony so much that the females are all rendered sterile through stress.
But what about the 70% of primate species that were found to show no dominant sex bias in the new study? These were largely the South American monkeys such as marmosets, tamarins and capuchins, that are generally small, live in trees, are social and omnivorous.
They also tended to have a prehensile tail that helps them grasp things. The ecology of these species falls between that of the male- and female-dominated species, with size differences and weapons neither extreme nor absent, mating systems neither polygamous nor monogamous, and females neither abundant nor rare.
The absence of a definitive sex bias in dominance in the majority of primate species may reflect the rarity of contests between males and females, or the fact that males and females were equally likely to win. Nevertheless, dominance varied within species. For example, the percentage of intersexual contests won by female patas monkeys ranged from 0% to 61%, depending on the population studied.
What does this mean for humans?
Human traits are not skewed towards those of the male-dominated societies seen in other primates. We may not live in trees, but neither do human males have natural weapons. Males are not always bigger than females, females do not tend to outnumber males, and our sexual habits are varied.
Humans are actually more aligned to the 70% of species that show no clear distinction in sex biases, where species of either sex can become dominant. Let’s see which way evolution takes us.
Louise Gentle works for Nottingham Trent University.
This dairy farm in California’s Central Valley has installed solar panels on a portion of its land. George Rose/Getty Images
Imagine that you own a small, 20-acre farm in California’s Central Valley. You and your family have cultivated this land for decades, but drought, increasing costs and decreasing water availability are making each year more difficult.
Now imagine that a solar-electricity developer approaches you and presents three options:
You can lease the developer 10 acres of otherwise productive cropland, on which the developer will build an array of solar panels and sell electricity to the local power company.
You can select 1 or 2 acres of your land on which to build and operate your own solar array, using some electricity for your farm and selling the rest to the utility.
Or you can keep going as you have been, hoping your farm can somehow survive.
Thousands of farmers across the country, including in the Central Valley, are choosing one of the first two options. A 2022 survey by the U.S. Department of Agriculture found that roughly 117,000 U.S. farm operations have some type of solar device. Our own work has identified over 6,500 solar arrays currently located on U.S. farmland.
Our study of nearly 1,000 solar arrays built on 10,000 acres of the Central Valley over the past two decades found that solar power and farming are complementing each other in farmers’ business operations. As a result, farmers are making and saving more money while using less water – helping them keep their land and livelihood.
A hotter, drier and more built-up future
Perhaps nowhere in the U.S. is farmland more valuable or more productive than in California’s Central Valley. The region grows a vast array of crops, including nearly all of the nation’s production of almonds, olives and sweet rice. Using less than 1% of all farmland in the country, the Central Valley supplies a quarter of the nation’s food, including 40% of its fruits, nuts and other fresh foods.
The food, fuel and fiber that these farms produce are a bedrock of the nation’s economy, food system and way of life.
But decades of intense cultivation, urban development and climate change are squeezing farmers. Water is limited, and getting more so: A state law passed in 2014 requires farmers to further reduce their water usage by the mid-2040s.
The trade-offs of installing solar on agricultural land
When the solar arrays we studied were installed, California state solar energy policy and incentives gave farm landowners new ways to diversify their income by either leasing their land for solar arrays or building their own.
There was an obvious trade-off: Turning land used for crops to land used for solar usually means losing agricultural production. We estimated that over the 25-year life of the solar arrays, this land would have produced enough food to feed 86,000 people a year, assuming they eat 2,000 calories a day.
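As a rough sanity check on that estimate, the implied per-acre yield can be worked out directly. The result below is our own back-of-envelope derivation, not a figure from the study:

```python
# Sanity check of the feeding estimate above; derivation only, not study data.
people, kcal_per_day, acres = 86_000, 2_000, 10_000
kcal_per_year = people * kcal_per_day * 365
print(kcal_per_year / acres)   # ~6.3 million kcal per acre per year,
                               # plausible for productive irrigated cropland
```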
There was an obvious benefit, too, of clean energy: These arrays produced enough renewable electricity to power 470,000 U.S. households every year.
But the result we were hoping to identify and measure was the economic effect of shifting that land from agricultural farming to solar farming. We found that farmers who installed solar were dramatically better off than those who did not.
They were better off in two ways, the first being financial. All the farmers, whether they owned their own arrays or leased their land to others, saved money on seeds, fertilizer and other costs associated with growing and harvesting crops. They also earned money from leasing the land, offsetting farm energy bills, and selling their excess electricity.
Farmers who owned their own arrays had to pay for the panels, equipment, installation and maintenance. But even after covering those costs, their savings and earnings added up to profits of US$50,000 per acre every year – 25 times what they would have earned by planting that acre.
Farmers who leased their land made much less money but still avoided costs for irrigation water and operations on that part of their farm, gaining $1,100 per acre per year – with no up-front costs.
The farmers also conserved water, which in turn supported compliance with the state’s Sustainable Groundwater Management Act water use reduction requirements. Most of the solar arrays were installed on land that had previously been irrigated. We calculated that turning off irrigation on this land saved enough water every year to supply about 27 million people with drinking water or irrigate 7,500 acres of orchards. Following solar array installation, some farmers also fallowed surrounding land, perhaps enabled by the new stable income stream, which further reduced water use.
Irrigation is key to cropland productivity in California’s Central Valley. Covering some land with solar panels eliminates the need for irrigation of that area, saving water for other uses elsewhere. Citizen of the Planet/UCG/Universal Images Group via Getty Images
Changes to food and energy production
Farmers in the Central Valley and elsewhere are now cultivating both food and energy. This shift can offer long-term security for farmland owners, particularly for those who install and run their own arrays.
Recent estimates suggest that converting between 1.1% and 2.4% of the country’s farmland to solar arrays would, along with other clean energy sources, generate enough electricity to eliminate the nation’s need for fossil fuel power plants.
Though many crops are part of a global market that can adjust to changes in supply, losing this farmland could affect the availability of some crops. Fortunately, farmers and landowners are finding new ways to protect farmland and food security while supporting clean energy.
Farms are much more than the land they occupy and the goods they produce. Farms are run by people with families, whose well-being depends on essential and variable resources such as water, fertilizer, fuel, electricity and crop sales. Farmers often borrow money during the planting season in hopes of making enough at harvest time to pay off the debt and keep a little profit.
Installing solar on their land can give farmers a diversified income, help them save water, and reduce the risk of bad years. That can make solar an asset to farming, not a threat to the food supply.
Jacob Stid works for Michigan State University. Funding for this work came from the US Department of Agriculture’s National Institute of Food and Agriculture program and the Department of Earth and Environmental Sciences at Michigan State University. He also receives funding from the Foundation for Food and Agricultural Research.
Annick Anctil receives funding from NSF and USDA.
Anthony Kendall receives funding from the USDA, NASA, the NSF, and the Foundation for Food and Agricultural Research. He is an Assistant Professor at Michigan State University, and serves on the nonprofit board of the FLOW Water Advocates.
Younger Americans are more likely to express belief in witchcraft and luck, as our new research shows.
As sociologists who research the social dynamics of religion in the United States, we conducted a nationally representative survey in 2021. Our survey posed dozens of questions to 2,000 Americans over the age of 18 on a wide range of beliefs in supernatural phenomena – everything from belief in the devil to belief in the magical power of crystals.
Our statistical analyses found that supernatural beliefs in the United States tend to group into four types.
The first represents what many consider “traditional religious beliefs.” These include beliefs in God, the existence of angels and demons, and belief in the soul and its journey beyond this lifetime.
A second represents belief in “spiritual and mental forces,” some of which are associated with either paranormal or new age beliefs. These include communicating with the dead, predicting the future, or believing that one’s soul can travel through space or time.
A third group represents belief in “witches and witchcraft.” This was measured on our survey with questions about the existence of “black magic” and whether it was “possible to cast spells on people.” The fourth encompasses beliefs about luck and superstition.
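The statistical method behind this grouping is not spelled out here; exploratory factor analysis is one standard technique for finding such clusters of survey items. A minimal sketch of that approach, run on random stand-in data rather than the actual survey responses, might look like this:

```python
# Factor-analysis sketch on random stand-in data - the grouping method here
# is our assumption; the study's exact analysis is not specified above.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# 2,000 respondents x 24 Likert-scale items (1-5), invented for illustration
responses = rng.integers(1, 6, size=(2000, 24)).astype(float)

fa = FactorAnalysis(n_components=4, random_state=0).fit(responses)

# Items that load strongly on the same factor would form one "type" of belief.
loadings = fa.components_.T        # shape: (n_items, n_factors) = (24, 4)
print(loadings.shape)
```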
Our analysis finds that higher education and higher income are associated with lower levels of all four types of supernatural belief. Those with a bachelor’s degree or higher, for instance, score below average on all four types of belief, while those with less education score higher than average on all four.
Looking at race and ethnicity, we found that Latino or Hispanic individuals were more likely than white individuals to express belief in the “witches and witchcraft” form of supernatural belief. About 50% of Latino or Hispanic individuals in our survey, for example, strongly agreed that “witches exist.” This compares with about 37% of white individuals.
Comparing gender differences, we find that women are more likely than men to believe in the “spiritual and mental forces” forms of supernatural belief. For instance, about 31% of women in our survey agreed that “it is possible to communicate with the dead” compared with about 22% of men.
Why it matters
Our research addresses two key questions: first, whether people who hold one type of supernatural belief are also more likely to hold other types; and second, how different types of supernatural belief vary across key demographic groups, such as educational levels, racial and ethnic groups, and gender.
Answering these questions can be surprisingly difficult. Most scientific surveys of the U.S. public include, at best, only one or two questions about religious beliefs; rarely do they include questions about other types of supernatural beliefs, such as belief in paranormal or superstitious forces. This could lead to an incomplete understanding of how supernatural beliefs and practices are changing in the United States.
An increasing number of Americans are leaving organized religion. However, it is not clear that supernatural beliefs have or will follow the same trajectory – especially beliefs that are not explicitly connected to those religious identities. For example, someone can identify as nonreligious but believe that the crystal they wear will provide them with supernatural benefits.
Moreover, recognizing that supernatural beliefs can include more than traditionally religious supernatural beliefs may be vital for better understanding other social issues. Research has found, for example, that belief in paranormal phenomena is associated with lower trust in science and medicine.
What’s next
Our survey provides some insight into the nature and patterns of supernatural belief in the U.S. at one point in time, but it does not tell us how such beliefs are changing over time.
We would like to see future surveys – both ours and those of other social scientists – ask more diverse questions about belief in supernatural beings and forces, allowing changes in those beliefs to be assessed.
Christopher P. Scheitle receives funding from the National Science Foundation and the John Templeton Foundation. The research discussed in this article was supported by a grant from the Science and Religion: Identity and Belief Formation grant initiative spearheaded by the Religion and Public Life Program at Rice University and the University of California-San Diego and provided by the Templeton Religion Trust via The Issachar Fund.
Bernard DiGregorio receives funding from the National Science Foundation. The research discussed in this article was funded by a grant from the Science and Religion: Identity and Belief Formation grant initiative spearheaded by the Religion and Public Life Program at Rice University and the University of California-San Diego and provided by the Templeton Religion Trust via The Issachar Fund.
Katie E. Corcoran receives funding from the National Science Foundation, the John Templeton Foundation, and the Patient-Centered Outcomes Research Institute. The research discussed in this article was supported by a grant from the Science and Religion: Identity and Belief Formation grant initiative spearheaded by the Religion and Public Life Program at Rice University and the University of California-San Diego and provided by the Templeton Religion Trust via The Issachar Fund.
Source: The Conversation – USA (2) – By Elise Silva, Director of Policy Research at the Institute for Cyber Law, Policy, and Security, University of Pittsburgh
Artificial intelligence has taken off on campus, changing relationships between students and professors and among students themselves. Photo by Annie Spratt on Unsplash
The advent of generative AI has elicited waves of frustration and worry across academia for all the reasons one might expect: Early studies are showing that artificial intelligence tools can dilute critical thinking and undermine problem-solving skills. And there are many reports that students are using chatbots to cheat on assignments.
But how do students feel about AI? And how is it affecting their relationships with peers, instructors and their coursework?
I am part of a group of University of Pittsburgh researchers with a shared interest in AI and undergraduate education. While there is a growing body of research exploring how generative AI is affecting higher education, there is one group that we worry is underrepresented in this literature, yet perhaps uniquely qualified to talk about the issue: our students.
Our team ran a series of focus groups with 95 students across our campuses in the spring of 2025 and found that whether students and faculty are actively using AI or not, it is having significant interpersonal, emotional effects on learning and trust in the classroom. While AI products such as ChatGPT, Gemini or Claude are, of course, affecting how students learn, their emergence is also changing their relationships with their professors and with one another.
‘It’s not going to judge you’
Most of our focus group participants had used AI in the academic setting – when faced with a time crunch, when they perceive something to be “busy work,” or when they are “stuck” and worry that they can’t complete a task on their own. We found that most students don’t start a project using AI, but many are willing to turn to it at some point.
Many students described positive experiences using AI to help them study or answer questions, or give them feedback on papers. Some even described using AI instead of a professor, tutor or teaching assistant. Others found a chatbot less intimidating than attending office hours where professors might be “demeaning.” In the words of one interviewee: “With ChatGPT you can ask as many questions as you want and it’s not going to judge you.”
But by using it, you may be judged. While some were excited about using AI, many students voiced mild feelings of guilt or shame about their AI use, whether due to environmental or ethical concerns or to fear of coming across as lazy. Some even expressed a feeling of helplessness, or a sense of inevitability, about AI in their futures.
Anxiety, distrust and avoidance
While many students expressed a sense that faculty members are, as one participant put it, “very anti-ChatGPT,” they also lamented the fact that the rules around acceptable AI use were not sufficiently clear. As one urban planning major put it: “I feel uncertain of what the expectations are,” with her peer chiming in, “We’re not on the same page with students and teachers or even individually. No one really is.”
Students also described feelings of distrust and frustration toward peers they saw as overly reliant on AI. Some talked about asking classmates for help, only to find that they “just used ChatGPT” and hadn’t learned the material. Others pointed to group projects, where AI use was described as “a giant red flag” that made them “think less” of their peers.
These experiences feel unfair and uncomfortable for students. They can report their classmates for academic integrity violations – and enter yet another zone in which distrust mounts – or they can try to work with them, sometimes with resentment. “It ends up being more work for me,” a political science major said, “because it’s not only me doing my work by myself, it’s me double checking yours.”
Student-teacher relationships are a key part of a good college experience. What if students avoid professors and rely instead on always-available chatbots? U.S. Department of Education
We observed distrust in both student-to-teacher and student-to-student relationships. Learners shared fears of being left behind if other students in their classes used chatbots to get better grades. This resulted in emotional distance and wariness among students. Indeed, our findings reflect other reports indicating that the mere possibility a student might have used a generative AI tool is now undercutting trust across the classroom. Students are as anxious about baseless accusations of AI use as they are about being caught using it.
Students described feeling anxious, confused and distrustful, and sometimes even avoiding peers or learning interactions. As educators, this worries us. We know that academic engagement – a key marker of student success – comes not only from studying the course material, but also from positive engagement with classmates and instructors alike.
AI is affecting relationships
Indeed, research has shown that faculty-student relationships are an important indicator of student success. Peer-to-peer relationships are essential too. If students are sidestepping important mentoring relationships with professors or meaningful learning experiences with peers due to discomfort over ambiguous or shifting norms around the use of AI technology, institutions of higher education could imagine alternative pathways for connection. Residential campuses could double down on in-person courses and connections; faculty could be incentivized to encourage students to visit during office hours. Faculty-led research, mentoring and campus events where faculty and students mix in an informal fashion could also make a difference.
We hope our research can also flip the script and disrupt tropes about students who use AI as “cheaters.” Instead, it tells a more complex story of students being thrust into a reality they didn’t ask for, with few clear guidelines and little control.
As generative AI continues to pervade everyday life, and institutions of higher education continue to search for solutions, our focus groups reflect the importance of listening to students and considering novel ways to help students feel more comfortable connecting with peers and faculty. Understanding these evolving interpersonal dynamics matters because how we relate to technology is increasingly affecting how we relate to one another. Given our experiences in dialogue with them, it is clear that students are more than ready to talk about this issue and its impact on their futures.
Acknowledgment: Thank you to the full team from the University of Pittsburgh Oakland, Greensburg, Bradford and Johnstown campuses, including Annette Vee, Patrick Manning, Jessica FitzPatrick, Jessica Ghilani, Catherine Kula, Patty Wharton-Michael, Jialei Jiang, Sean DiLeonardi, Birney Young, Mark DiMauro, Jeff Aziz, and Gayle Rogers.
Elise Silva does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Seventy years after it opened, on 17 July 1955 in Anaheim, California, Disneyland stands today more than ever as the prototype of an industry of a new kind, whose ramifications – cultural globalisation, “performative” management practices, “post-modern” entertainment and urbanism – keep on spreading. A look back at a phenomenon that went on to revolutionise leisure.
September 1959. Nikita Khrushchev, on a state visit to the United States, is denied a visit to Disneyland on security grounds. Beyond angering the Soviet dignitary, this minor diplomatic incident reinforced the park’s symbolic function in post-war America: from a temporary escape from the distressing realities of the cold war, it became in effect a sanctuary closed to communism and its emissaries. Such was the political and economic influence of a company – and, with it, an entire industry – otherwise intent on cultivating a certain carefreeness in its public.
With its “lands”, arranged in a fan around its fairy-tale castle and carefully themed around the film genres then dominating the box office, Disneyland, inaugurated on 17 July 1955 in Anaheim (California), gave the sector its prototype as much as its emblem.
While Disneyland was, of course, the first to summon sets and popular imaginaries borrowed from film, other parks before it had displayed the same taste for “illusionist” landscaping, such as English landscape gardens, whose “natural” appearance rests on a consummate art of artifice. And in their ambition to supply visitors with amusement and wonder, Disneyland and its avatars are the heirs of the Tivoli gardens which, opened in France in formerly private parks made public with the Revolution, offered the crowds entertainments until then reserved for the aristocracy: parades and shows, fireworks and games, but also marvels of technology such as balloon flights, “katchelis” (or “devil’s wheels”) and other “aerial promenades” (early iterations of the Ferris wheel and the roller coaster).
The genealogy of theme parks also links them to panoramas (and their cousins, dioramas), to zoological gardens (particularly those built by the German Carl Hagenbeck), and even to colonial exhibitions – all immersive devices suited to motionless travel, designed to quench the thirst for faraway places of a nineteenth-century Europe enamoured of exploration and conquest.
Opening a new era of leisure, the industrial age soon put its means at the service of mass amusement: the Industrial Revolution supplied the first amusement parks (including Coney Island’s Luna Park [1903] and Dreamland [1904] in New York) with their publics as well as their technologies – when the wonders of technology were not themselves the very object of the spectacle, as at the world’s fairs.
Theming
As the term “theme park” suggests, it is theming that, by ensuring the coherence and exoticism of each of its lands, gave Disneyland the organising principle of an industry yet to be born – setting it apart from the mere “amusement parks” of the day. Better still, theming summons a cinematic imaginary that identifies each land with a canonical fictional universe: adventure, the western, science fiction and, of course, the studios’ animated films.
For the duration of the visit, the themes substitute an imaginary elsewhere for the “here” and “now” of the real world, laying the cornerstone of an art of storytelling in space: the park’s exotic landscapes all but cast visitors in their own role as tourists, inviting them to replay the script of a journey into unknown lands.
Of all its collective and social dimensions, it is the park’s supposedly disciplining character that has most occupied critics – who readily see in it the very expression of capitalist hegemony. Privileged sites of false consciousness and markers par excellence of the “post-modern” condition, theme parks are said to substitute the copy for the original, the better to mystify visitors and spur them into frenzied consumption.
Parks are indeed vehicles for grand narratives that give them an undeniable ideological colouring. Colonialism is in a sense the natural corollary of the quest for exoticism that parks set out to satisfy through supposedly wild and unexplored landscapes: tropical forests, the American West, the cosmos. Disney parks, for their part, celebrate the free market as the engine of social and technical progress, erasing all trace of conflict (even in Frontierland, inspired by the western, a genre otherwise famed for its violence) and delivering a linear, consensual vision of history.
Finally, in the image of the post-war residential suburbs that saw their birth, the “family” order of parks aimed primarily at the middle classes made them, in the United States, both a relay for a certain social conformism and an instrument of urban segregation, helped along by paid admission and out-of-the-way locations.
Misbehaviour
Yet Disneyland, like its predecessors, lends itself to plenty of misbehaviour, even indulging some visitors’ anarchic leanings as much as their licentious pleasures: the working-class crowds of Coney Island were already being invited to smash the china of a bourgeois interior in a giant crockery-smashing game, while “tunnels of love” let bodies draw closer, far from the chaperones’ gaze. As at carnival, though, this temporary inversion of hierarchies and values may only ever act as a “safety valve” in the service of the established order, defusing through play any stirring of protest.
Still, far from being passive, the public sometimes derails the expected script: in more or less elaborate stagings, some take mischievous pleasure in wearing blasé expressions, or in scandalously baring their chests, for the “photo finish” shots meant to catch them mid-plunge.
Likewise, the family atmosphere and apparent heteronormativity of the Disney parks have marked them out as sites of activism for LGBTQIA+ visibility, through “gay days” that began as unofficial gatherings and are now scheduled with the company’s blessing.
A “cultural Chernobyl”
A symbol of globalisation, Disneyland saw its arrival in France decried as a “cultural Chernobyl”, while the Puy du Fou exports its expertise – including to authoritarian regimes – without stirring any particular emotion. Some examples also defy the conventional narrative of a globalisation steered from the United States: born in 1983 at the initiative of the Japanese company that owns it, and the first international offshoot of a theme park, Tokyo Disneyland is in truth a case of cultural import, not export.
While Disney has been directly involved in building entire towns (such as Celebration in Florida or Val d’Europe in Seine-et-Marne, where the company has become one of the most powerful relays of New Urbanism), some municipalities in the People’s Republic of China have turned themselves into stage sets, taking on, theme-park style, the appearance of faraway cities such as Paris or Hallstatt.
Even the management practices of theme parks are spreading through service industries (restaurants, hotels, healthcare and so on), following a logic of Disneyisation.
Their most striking component is surely “performative labour”, which encourages employees to treat their work as a theatrical performance and to summon particular emotions the better to fit the demands of their role. Between play and work, this time, theme parks once again blur the boundaries.
Thibaut Clément does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research institution.
Faced with the Russian threat and American disengagement, some European Union countries want to move towards a more autonomous European defence. They also seem to be taking seriously the French proposal to extend nuclear deterrence to the continent. Yet European countries have faced these prospects before, in the post-war years. What happened then? What lessons does that history hold? Gilles Richard, emeritus professor of contemporary history at the Université Rennes 2, answers our questions.
Despite de Gaulle’s proposal, Germany (then the FRG), like the rest of Europe, chose American nuclear protection. Germany did not want to depend on Gaullist France and wished to keep its independence so as to pursue its overriding objective: reunification. That reunification depended on the agreement of the United States and the Soviet Union, which it was important not to antagonise. After reunification in 1990, Germany decided to rebuild the former GDR economically – investing colossal sums – without also having to shoulder excessively heavy military spending. It therefore maintained very close diplomatic relations with the United States.
Today the German chancellor, Friedrich Merz, says he favours discussions with Paris on creating an independent European nuclear deterrent. France and Germany are, in a way, rediscovering the relevance of the Gaullist proposal. De Gaulle had understood that the North Atlantic Treaty Organization (Nato) was not perfectly reliable. Under article 3 of the North Atlantic Treaty, signed in April 1949, the states of the Alliance owe one another mutual assistance. But when Nato and its integrated command were set up the following year, with the Korean war in full swing, it was agreed that the United States would intervene only after a vote in Congress. This meant that assistance was not truly automatic. While the Europeans benefited from good relations with their transatlantic allies during the cold war, nothing was ever set in stone. De Gaulle was aware of this, and in 1959 he proposed a collegial leadership of Nato bringing together the United States, the United Kingdom and France. It would have prevented the Organization from depending on Washington alone, at a time when the presidency of Eisenhower, who was well disposed towards the Europeans, was due to end in 1960.
Today Emmanuel Macron is offering French nuclear protection to the Europeans, but he has made clear that he will not delegate his power. European nuclear deterrence provided by France would therefore depend on the ultimate choice of the French president. So what would happen if, say, Marine Le Pen were elected? Could the European allies count on her to protect them against Putin? If the French president keeps sole control of the nuclear arsenal, Europe remains in limbo. This demonstrates, if proof were needed, that the European Union is not a state organised as such but merely a “federation of states”, in Paul Magnette’s phrase, and one not even governed by a constitution. Faced with the problem of its defence, the Union is thus thrown squarely back on its ambiguous political nature. Defence, the “sovereign power” par excellence, forcefully raises the great political question on which the whole future of the European Union hangs: when will it give itself a fully fledged constitution, defining the respective legislative, executive and judicial powers of the Union and its member states? A constitution that would, of course, have to be drawn up collectively and ratified by all the peoples of the Union.
As we can see, creating a common defence raises the fundamental problem that Europeans have never been able, or willing, to resolve: that of a common political power capable, among other things, of directing a common defence. From the outset, “European construction” has proceeded by diplomatic treaties added one to another, ruling out the possibility of building a democratic European state. That is what is once again at stake in the question of defence.
“Should France share its ‘nuclear umbrella’ with Europe?”, 28 minutes, Arte-YouTube/28 minutes – Arte (March 2025).
Beyond nuclear deterrence, some European countries want to move towards the idea of a common defence. Here too there is a historical precedent: the European Defence Community, which never came to fruition. What lessons can be drawn from that episode?
G. R.: In June 1950, when negotiations to create a European Coal and Steel Community (ECSC) had barely begun, the Korean war broke out. The United States seized on it to impose the rearmament of the FRG, which had come into being in the spring of 1949. Since 1947, French governments had resisted German rearmament by every means, dreading it above all else – probably as much as the Soviet threat. Robert Schuman and Jean Monnet had in fact conceived the ECSC project in the spirit of a compromise with the United States: the FRG would rebuild its economic capacity, starting with its heavy industry (the basis of any arms industry), but within a European framework that would keep it strictly in check and prevent it from serving a rebirth of “German militarism”, as they said in Paris.
The Korean war upset all the French calculations, because the United States demanded the rearmament of Germany, which stood on the front line facing the Soviet bloc and held the main industrial potential in Europe. This was the condition they set for creating a western general staff and sending sizeable military forces to the continent – American soldiers had left European soil in 1947, leaving only relatively small units in the FRG.
At the conference of the twelve defence ministers of the Nato member countries in September 1950, Jules Moch, then the French defence minister, found himself totally isolated, and France was threatened with having to leave the Organization just as the Korean war raised the spectre of a third world war. France had to give way, while trying to find a formula that would minimise the risk posed by the rearmament of its much-feared neighbour.
It was then that Jean Monnet, a close associate of René Pleven, the serving président du Conseil (head of the French government), cobbled together in six days a plan designed to satisfy the United States without recreating a fully fledged German army. To that end he imagined, within the framework of the Europe of Six then taking shape (the ECSC, instituted in 1951), a force of 40 divisions that would mix national battalions – German, Italian, French and so on – under a common defence minister responsible only for logistics (equipment and mobilisation).
This Monnet plan, which became the “Pleven plan”, was adopted by the National Assembly on 24 October 1950. The British stood aside, having refused to give up a share of their sovereignty during the ECSC negotiations.
Yet the United States judged this plan unrealistic, and they secured a European army built around German divisions, which they considered indispensable if it was to be effective in the field in the event of conflict. Moreover, this European army was placed under American command within Nato. The result was a redefinition of the initial Pleven plan, which became the European Defence Community (EDC). The revised treaty nevertheless provided, in its article 33 – Jean Monnet and the advocates of a federal Europe insisted on it – for the eventual establishment of a common political power to direct the European army.
Then began a long obstacle course, so to speak, to have this treaty, born in the urgency of the Korean war, signed by governments and then ratified by parliaments. But a majority of the French did not want a European army with German divisions alongside French ones. Remember that Adenauer’s governments included various former Nazis, starting with Hans Maria Globke, head of the federal chancellery and Adenauer’s principal adviser. More broadly, the trauma of the Occupation remained vivid in every mind, only five years after the Liberation. Finally, the context in which the EDC was born itself evolved quite quickly. From 1951, after the victories of North Korea backed by 800,000 Chinese “volunteers”, the western powers re-established the front on the 38th parallel and the fighting became residual. Then, in March 1953, Stalin died and a new phase of the cold war began, soon dubbed “peaceful coexistence” by Khrushchev.
The Pinay government signed the treaty instituting the EDC in May 1952, but its ratification by parliament was endlessly postponed in the international context just described, and under the growing, converging pressure of the communists and the Gaullists, who rejected any idea of military and political integration into a supranational body. They found many supporters in other parties, notably among the socialists and the radicals. Only the Christian democrats of the MRP and the great majority of the moderates (CNIP) defended the project to the end.
By the time Pierre Mendès France decided to lance this ever-swelling boil, the die was cast. In August 1954 the National Assembly, by voting a preliminary question, rejected consideration of the treaty instituting the EDC, which was definitively buried. There was thus neither a European army nor the outline of a European government.
European construction resumed in 1955 with the Messina conference, but confined to the economic sphere alone and without any democratic political power (treaty of Rome, March 1957). That is where we still stand. A Europe of defence is no doubt necessary, but it is not credible without a political Europe.
How can we move towards a Europe of defence?
G. R.: As a historian, I don't have the solutions! As a citizen, however, I can say that there can be no European army without a European state: a parliament that votes common military budgets; a ministry that issues orders and drives weapons-production programmes built on common standards; a government that can, if need be, decide to go to war. Yet the European Commission and the Strasbourg Parliament do not constitute a constitutionally organised state with the legitimacy to assume these functions. For the time being, the European Union is not capable of ensuring the security of its inhabitants.
The real question, in the end, is political. Are we prepared to enter into a process of building a democratic European federal state? Are we prepared to build a "United States of Europe", as the federalist activists of the 1950s wished? And if so, how should we proceed? With which states? Just France and Germany to begin with? With a few others (Benelux, Spain, Italy…)? But then how do we remain in solidarity with the states bordering Russia (the Baltic states first and foremost), which are terribly worried about their future and regard only the US "nuclear umbrella" as reliable, even though it is becoming less and less so?
Europeans are paying the price for 75 years of "European construction" built on an essentially economic and technocratic basis – the market, competition, flows. Today they find themselves with their backs to the wall. Nothing solid can be built without putting democracy at the heart of the project of uniting Europe's nations.
Gilles Richard does not work for, consult for, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research institution.
Source: The Conversation – France – By Diana Pérez-Arechaederra, Associate Professor of Organizational Psychology, ESCP Business School
In the 2000s, when I worked as a psychologist in long-term elderly care and primary healthcare services, many of the patients I saw were living with chronic or complex conditions. These situations required that patients trust care providers, consistently adhere to treatments and, often, receive care over an extended period of time.
But what stood out to me were the differences in how that care was delivered. Some practitioners took the time to explain things clearly, asked questions that showed genuine concern, or invited patients into a conversation about their treatment. I also noticed how differently patients responded when none of that happened.
The quality of communication – the level of respect, attention and clarity – often made the difference between patients’ cooperation and resistance, between their motivation and withdrawal.
These observations led me to systematically investigate the psychological processes involved in how patients perceive fairness in healthcare.
What I found, in collaboration with colleagues, is that this "soft" dimension of care – how people perceive their treatment, how information is shared with them, and how much time and space they are given to take part in the process – has very real effects on behaviour. Patients' perception of respect – what we call interactional fairness – often hinges on whether they are given the chance to ask questions, make sense of information, weigh different options and even participate in making decisions. For patients to follow a practitioner's recommendations, they need to feel informed, heard, respected and involved – not just treated. Our research distinguishes two dimensions of this fairness:
Interactional justice – the sense of being treated with dignity, attentiveness and respect
Informational justice – the perception that shared information is clear, complete, timely and relevant
We surveyed over 850 patients in Spain and the United States who had visited a healthcare provider in the previous six months. We asked them how they experienced their interactions with health professionals, how much they trusted those professionals, how satisfied they were with the service, whether they followed medical advice, and whether they intended to return to the same provider.
What we saw was a clear pattern. Patients who perceived fairness – being treated with respect and given clear and appropriate information – were more likely to trust their healthcare provider. That trust, in turn, shaped whether they felt able to engage with treatment and sustain their relationship with (or, in the language of our study, their “loyalty” to) the healthcare service or physician. What we call informational fairness had a particularly strong direct link to adherence to treatments or clinical advice, showing its importance for understanding patient behaviour.
In healthcare, patients are navigating uncertainty, vulnerability, and long-term relationships with systems and providers. Their ability to understand, participate in and trust that process is integral to care.
Insights across borders
Despite the structural and institutional differences between Spain, with its predominantly public healthcare system, and the United States, where healthcare is largely organised through the private sector, our goal was to identify common patterns in how patients interpret and engage with services. Specifically, we sought to understand whether similar cognitive and emotional processes shape the patient experience, regardless of the broader healthcare system in place.
Using path analysis models, we assessed the relationships between patients’ perceptions of fairness and their resulting levels of trust and satisfaction, and then, the relationship between those perceptions and patients’ adherence and loyalty to the service. While patients in the United States exhibited slightly stronger associations between perceived fairness and both trust and satisfaction, the overall nature of the relationships was highly consistent across both countries.
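To make the method concrete, here is a minimal sketch of how a path model with this structure could be specified in Python using the open-source semopy library. The file and variable names are hypothetical illustrations, not those of our actual dataset, and the model is a simplified version of the one described above.

```python
import pandas as pd
from semopy import Model

# Hypothetical survey data: one row per respondent, with composite
# Likert-scale scores for each construct. Column names are illustrative.
data = pd.read_csv("patient_survey.csv")

# Path model mirroring the structure described in the text:
# fairness perceptions -> trust and satisfaction -> adherence and loyalty,
# plus a direct path from informational fairness to adherence.
model_desc = """
trust ~ interactional_fairness + informational_fairness
satisfaction ~ interactional_fairness + informational_fairness
adherence ~ trust + satisfaction + informational_fairness
loyalty ~ trust + satisfaction
"""

model = Model(model_desc)
model.fit(data)

# Path coefficients, standard errors and p-values for each arrow in the model
print(model.inspect())
```

In a cross-country comparison such as ours, the same model would be fitted separately to the Spanish and US subsamples (or as a multi-group model) so that the strength of each path can be compared across healthcare systems.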
These findings suggest that despite differences in how care is delivered and financed, patients in both countries respond to their healthcare interactions in fundamentally similar ways. This matters for healthcare providers and policymakers across diverse settings who are aiming to enhance patient-centred care.
Recognizing patients as agents
At the heart of this is an ethical question: Are patients treated as agents in their own care, or simply as objects of intervention?
Medicine is not a closed, flawless system. It is a developing field of research being translated into practice, and its shortcomings are shaped by social and structural biases, as well as by the fact that patients are not always offered all of the options they should be. In areas such as women's health, chronic pain, mental health and rare diseases, patients often offer insights that clinical protocols miss. When their lived experience is ignored or dismissed, we lose opportunities for better diagnoses, more responsive and efficient care, and more sustainable treatment plans.
When I was working in elderly care, I remember a resident who was very upset because his parenteral treatment (an injection) had been changed to an enteral one (a drink). No one had informed him of the change. When I asked him why he was so unhappy, he said: "I much preferred the injections because the clinician who came to administer them was very nice to me. We were friends. Now, I'll never see her again."
I’m not sure whether continuing with the parenteral administration was even possible, but what was certain is that nobody asked him what he preferred. And that had an impact on him.
Listening to patients is not merely being polite: it is recognizing that they have information that professionals lack, and that the ethical foundation of healthcare depends not only on what medical professionals do to patients, but on how they work with them.
What can be done
Creating fairer care involves the following concrete practices, which come from our findings:
Designing information systems that support timely, accessible and patient-centred communication
Designing procedures and allocating enough time for professionals to conduct themselves in accordance with interactional and informational fairness principles
Training for professionals in relational and communication skills that foster patients’ perceptions of respect and dignity
Educating patients about what care can reasonably provide to help set appropriate expectations
Reframing patient participation so that patients are not just surveyed after the fact, but listened to and given agency throughout the care process
None of this is separate from clinical quality. On the contrary, it is what allows clinical care to work best and for all. When patients feel that they matter – that they are respected and informed – they are more likely to collaborate, follow through and return for more care if they need it. That would benefit patients, their practitioners, healthcare systems and society.
The scientific article referred to in this piece was funded by the Spanish Ministry of Science and Innovation and the Instituto de Salud Carlos III (ISCIII), whose project RD24/0005/0018 was co-funded by the European Union through the Recovery and Resilience Facility (MRR). The Network for Research on Chronicity, Primary Care and Health Promotion (RICAPPS) was involved in the development of RD24/0005/0018. Projects PI22/01677 and PI20/00321 were co-financed by the European Union. The government of Castilla y León also contributed to the funding of this study through research projects BioSan 2009 and BioSan 2011. These funders played no role in the study design, data analysis, results reporting or the decision to submit the manuscript for publication.