Climate models reveal how human activity may be locking the Southwest into permanent drought

Source: The Conversation – USA – By Pedro DiNezio, Associate Professor of Atmospheric and Ocean Sciences, University of Colorado Boulder

A worker moves irrigation tubes on a farm in Pinal County, Ariz. A two-decade drought has made water supplies harder to secure. Carolyn Cole/Los Angeles Times via Getty Images

A new wave of climate research is sounding a stark warning: Human activity may be driving drought more intensely – and more directly – than previously understood.

The southwestern United States has been in a historic megadrought for much of the past two decades, with reservoirs including Lakes Mead and Powell dipping to record lows and legal disputes erupting over rights to use water from the Colorado River.

This drought has been linked to the Pacific Decadal Oscillation, a climate pattern that swings between wet and dry phases every few decades. Since a phase change in the early 2000s, the region has endured a dry spell of epic proportions.

The PDO was thought to be a natural phenomenon, governed by unpredictable natural ocean and atmosphere fluctuations. But new research published in the journal Nature suggests that’s no longer the case.

Working with hundreds of climate model simulations, our team of atmosphere, earth and ocean scientists found that the PDO is now being strongly influenced by human factors and has been since the 1950s. It should have oscillated to a wetter phase by now, but instead it has been stuck. Our results suggest that drought could become the new normal for the region unless human-driven warming is halted.

The science of a drying world

For decades, scientists have relied on a basic physical principle to predict rainfall trends: Warmer air holds more moisture. In a warming world, this means wet areas are likely to get wetter, while dry regions become drier. As temperatures rise, more moisture is pulled from the soils of arid regions and transported away, intensifying droughts.
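
That principle has a standard quantitative form, the Clausius–Clapeyron relation, added here for context (the article itself does not spell it out):

    \frac{1}{e_s}\frac{de_s}{dT} \;=\; \frac{L_v}{R_v T^2} \;\approx\; 7\% \text{ per } ^\circ\mathrm{C}

Here e_s is the saturation vapor pressure, T the temperature, L_v the latent heat of vaporization and R_v the gas constant for water vapor. At Earth-surface temperatures, each degree Celsius of warming lets the air hold roughly 7% more water vapor, which is why rising temperatures draw more moisture out of already dry landscapes.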

While most climate models simulate this general pattern, they often underestimate its full extent, particularly over land areas.

Arizona Game and Fish Department workers pump water into a wildlife water catchment south of Tucson in July 2023. In normal years, the catchment receives enough rainwater, but years of drought have changed that.
Andrew Caballero-Reynolds/AFP via Getty Images

Yet countries are already experiencing drought as one of the most immediate and severe consequences of climate change. Understanding what’s ahead is essential, both to know how long these droughts will last and because severe droughts can have sweeping effects on ecosystems, economies and global food security.

Human fingerprints on megadroughts

Simulating rainfall is one of the greatest challenges in climate science. It depends on a complex interplay between large-scale wind patterns and small-scale processes such as cloud formation.

Until recently, climate models have not offered a clear picture of how rainfall patterns are likely to change in the near future as greenhouse gas emissions from vehicles, power plants and industries continue to heat up the planet. The models can diverge sharply in where, when and how precipitation will change. Even forecasts that average the results of several models differ when it comes to changes in rainfall patterns.

The techniques we deployed are helping to sharpen that picture for North America and across the tropics.

We looked back at the pattern of PDO phase changes over the past century using an exceptionally large ensemble of climate simulations. The massive number of simulations, more than 500, allowed us to isolate the human influences. This showed that the shifts in the PDO were driven by an interplay of increasing warming from greenhouse gas emissions and cooling from sun-blocking particles called aerosols that are associated with industrial pollution.
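
The logic of using such a large ensemble can be illustrated with a toy calculation (our own sketch with made-up numbers, not the study’s code or data). Each simulation contains the same externally forced trend plus its own random internal variability, so averaging across hundreds of members cancels the noise and exposes the forced signal:

    import numpy as np

    # Toy model: each ensemble member = shared forced trend + its own
    # internal variability. All values are arbitrary illustrative units.
    rng = np.random.default_rng(0)
    n_members, n_years = 500, 100

    forced_signal = 0.01 * np.arange(n_years)               # slow human-driven trend
    internal_noise = rng.normal(0.0, 0.5, (n_members, n_years))
    ensemble = forced_signal + internal_noise

    # Averaging over members shrinks the noise by ~1/sqrt(500),
    # exposing the forced component common to every simulation.
    forced_estimate = ensemble.mean(axis=0)
    print(abs(forced_estimate - forced_signal).max())       # small: noise has cancelled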

From the 1950s through the 1980s, we found that increasing aerosol emissions from rapid industrialization following World War II drove a positive trend in the PDO, making the Southwest rainier and less parched.

After the 1980s, we found that the combination of a sharp rise in greenhouse gas emissions from industries, power plants and vehicles and a reduction in aerosols as countries cleaned up their air pollution shifted the PDO into the negative, drought-generating trend that continues today.

This finding represents a paradigm shift in our scientific understanding of the PDO and a warning for the future. The current negative phase can no longer be seen as just a roll of the climate dice – it has been loaded by humans.

Our conclusion that global warming can drive the PDO into its negative, drought-inducing phase is also supported by geological records of past megadroughts. Around 6,000 years ago, during a period of high temperatures, evidence shows the emergence of a similar temperature pattern in the North Pacific and widespread drought across the Southwest.

Tropical drought risks underestimated

The past is also providing clues to future rainfall changes in the tropics and the risk of droughts in locations such as the Amazon.

One particularly instructive example comes from approximately 17,000 years ago. Geological evidence shows that there was a period of widespread rainfall shifts across the tropics coinciding with a major slowdown of ocean currents in the Atlantic.

These ocean currents, which play a crucial role in regulating global climate, naturally weakened or partially collapsed then, and they are expected to slow further this century at the current pace of global warming.

A recent study of that period, using computer models to analyze geologic evidence of Earth’s climate history, found much stronger drying in the Amazon basin than previously understood. It also showed similar patterns of aridification in Central America, West Africa and Indonesia.

The results suggest that rainfall could decline precipitously again. Even a modest slowdown of a major Atlantic Ocean current could dry out rainforests, threaten vulnerable ecosystems and upend livelihoods across the tropics.

What comes next

Drought is a growing problem, increasingly driven by human influence. Confronting it will require rethinking water management, agricultural policy and adaptation strategies. Doing that well depends on predicting drought with far greater confidence.

Climate research shows that better predictions are possible by using computer models in new ways and rigorously validating their performance against evidence from past climate shifts. The picture that emerges is sobering, revealing a much higher risk of drought across the world.

The Conversation

Pedro DiNezio receives funding from the U.S. National Science Foundation, National Oceanic and Atmospheric Administration, and WTW Research Network.

Timothy Shanahan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Climate models reveal how human activity may be locking the Southwest into permanent drought – https://theconversation.com/climate-models-reveal-how-human-activity-may-be-locking-the-southwest-into-permanent-drought-262837

Premier League: from red success to grey failure – how kit colours impact performance

Source: The Conversation – UK – By Zoe Wimshurst, Senior Lecturer of Sport Psychology, Health Sciences University

As the Premier League season kicks off, fans will debate their new kits almost as much as new signings. But could shirt colour actually give teams a performance edge? Science suggests it can.

One of the most studied colour effects in sport is that of red kits leading to greater success. In the Premier League era, more than half of all champions have worn red home kits, and a study looking at the 2004 Olympic Games found that in combat sports, where the colours of red and blue are randomly assigned, athletes wearing red were more likely to win.

These effects have also been shown in Rugby League and esports (video game competitions).

But why is this? It has been suggested that, from both a cultural and biological perspective, red is associated with dominance and aggression. Wearing red has been shown to boost players’ feelings of dominance, whereas an opponent who is wearing red is perceived as more threatening.

Research has also shown that taekwondo referees award more points to fighters in red than blue – even when digital manipulation allows them to view exactly the same fight with just the colours reversed. Studies on football players have also found that strikers score fewer goals when facing a goalkeeper wearing red.

There are other useful colours, too. The gold selected by Crystal Palace is a strong contender as it offers high visibility under both daylight and flood lights. Lighter colours which will offer a high contrast against the pitch, such as the whites chosen by Chelsea and Nottingham Forest, will also stand out.

Psychologists call these “colour singletons”, hues that are unique in the visual scene. Studies show that our attention is automatically drawn to them. Unusual colours that are unlikely to match those found on the pitch or advertising boarding will make players easier to detect at a glance.

Tottenham Hotspur players of the 2016–17 season wearing white.
wikipedia, CC BY-SA

Patterns matter too. High-contrast blocking or stripes can help separate a moving object from its background. Bournemouth’s striped away kit should be more visible than a plain mid-tone shirt. The contrast between the luminous top half of Fulham’s away shirt and the relatively dark shorts should also enhance detection.

Camouflage effect

Despite this evidence, not a single Premier League club has chosen red for an away kit this season. Instead, there are some novel choices such as lilac, cream and turquoise. A previous example of a novel kit choice not working so well came in 1996, when Manchester United’s infamous grey away kit was scrapped mid-game after the team went 3-0 down to Southampton.

The manager, Alex Ferguson, claimed players couldn’t see each other clearly. It wasn’t just an excuse: the grey was a near-perfect match for the concrete of the stadium and blended into the blur of the crowd.

Camouflage effects like this are well documented in biology. Indeed, animals depend on them to make detection by predators harder. In a stadium, muted greys or browns can do the same. Brentford’s new brown away kit risks a similar problem, especially in overcast conditions or with concrete-backed stands. Black kits can also fade into the background, particularly in low light conditions where there is reduced contrast.

This season, Tottenham Hotspur, Manchester City and Aston Villa have all selected black away shirts, which could make teammates harder to see.

Camouflage is not limited to dull colours. Newcastle’s green away kit, while bright, is likely to merge with the turf, particularly in players’ peripheral vision, where the human visual system is not designed to see colours clearly.

Another subtle visual trap is “countershading”, a dark-to-light gradient found in many animals that makes them less detectable. In football, a dark shirt with pale shorts could break up a player’s outline in bright sunlight. This is great for a deer avoiding predators, less helpful if you are trying to spot your striker in space.

So why don’t clubs use this science to select kits? The answer is most likely commercial. Away kits are as much about selling shirts as improving performance. Novelty colours create buzz, drive sales and help clubs stand out on the high street, even if they blend in on the pitch.

Colour is not just fashion. It is also linked to psychology, perception and physics. The right shade can make you unmissable, the wrong one can make you disappear. In elite sport, with such fine margins between success and failure, kit colour is an area which should not be overlooked.

The Conversation

Zoe Wimshurst is the owner of Performance Vision Ltd, a company specialising in visual training and consultancy services.

ref. Premier League: from red success to grey failure – how kit colours impact performance – https://theconversation.com/premier-league-from-red-success-to-grey-failure-how-kit-colours-impact-performance-263062

What does pocket money teach children? It can offer social as well as financial education

Source: The Conversation – UK – By Gaby Harris, Lecturer, Manchester Metropolitan University

A3pfamily/Shutterstock

If you’re a parent, the summer holidays and approaching new school year might have you questioning your children’s access to pocket money – how much they get, how much they’re spending and what they’re spending money on.

How pocket money is provided varies. So be reassured there is no right, wrong or normal way to give your kids money. For some households, it will be weekly small amounts simply for kids to use at their leisure. For others, it will include forms of payment for work done around the house.

According to recent data from NatWest, children get an average of £3.85 a week, and £9.13 if you factor in income for chores.

While around one in three households give regular allowances, many households give pocket money flexibly. Much of this flexibility depends on how much children contribute to the household.

The language used in recent years in reports on pocket money from banks such as NatWest and GoHenry describes “entrepreneurial”, “determined” and “industrious” children who are earning more and spending responsibly. NatWest claims children are learning “great money management” and “positive behaviours”.

This positions pocket money as more than just disposable income – as a learning opportunity. But it’s worth looking closely at what money teaches children, and what it is we want them to learn.

On the face of it, teaching children to be hardworking, and rewarding that hard work, sounds alright. But we need to consider this carefully in a time of work precarity, debt and declining welfare.

This kind of financial literacy encourages an individualised idea of what money is and how it is valued. The consequence of this is that inequalities in income and finances become linked with personal failures of “not working hard enough”, rather than systemic problems.

In reality, a lack of access to money is often not a reflection of how hard someone works, but of background, race, gender or disability.

Banks’ advice for parents also suggests that pocket money can be used to reward good behaviour. But what counts as good behaviour is up for debate. It likely varies between parents and children, so pocket money becomes a tool for enforcing what parents think good behaviour is.

Money has a social power that children understand. My research demonstrates how they can use this to negotiate with each other, interpret parents’ rules and, most importantly, rework them for their own purposes. I document the example of a teenage girl who knew her parents would give her more money if she went out with people they approved of. While the girl saw this as something she could negotiate for her own benefit, we must also ask what this teaches kids about coercion and control.

The risk is that parents will inadvertently encourage their children to associate money with control and a need to conform to access money. The effect of this can be far reaching.

Forthcoming research by my colleague at the London School of Economics, Liz Mann, explores how witnessing controlling behaviour over money in childhood may increase women’s desires for independence in adulthood, even if this leaves them economically disadvantaged in their relationships.

Building a better future

If we are going to make connections between money and behaviour, it would be far better to think about traits such as kindness, generosity, inclusivity. The evidence is there to suggest this is much more in line with how children think about and use money.

Children know the social power of money.
A3pfamily/Shutterstock

Children are very aware of their families’ financial situations and often adjust their spending around this. They are also savvy and communal in how they think about money. They create their own little economies based on sharing, borrowing and bartering with each other. These are valuable skills of responsibility, centred on sharing and caring.

NatWest’s recent report also suggests that, while kids might be feeling the cost-of-living squeeze every bit as much as adults, they remain steadfast in their generosity. They donate to causes important to them, including social, medical and environmental issues. Given the inclination for donations, there is scope to encourage a new generation of socially minded spenders.

This can include conversations with children on where their money comes from and where it goes when they spend it. Think about how their money can support small, local businesses, which sustain and develop local communities, rather than big business. Think, too, about their awareness of differences in household income, and use this as a tool to discuss inequality in income and wealth and the benefits of redistribution.

Rather than focusing on ideas of “good” behaviour, or that their own industriousness is all they need to sustain them, we should be taking the lead from kids and encouraging discussions of money in ways which can include topics of fairness, redistribution and ethical spending. That is the kind of social power pocket money should encourage.

The Conversation

Gaby Harris has received funding from the Economic and Social Research Council.

ref. What does pocket money teach children? It can offer social as well as financial education – https://theconversation.com/what-does-pocket-money-teach-children-it-can-offer-social-as-well-as-financial-education-262377

Jane Austen fight club: experts go head-to-head arguing for her best leading man

Source: The Conversation – UK – By James Vigus, Senior Lecturer in English, Queen Mary University of London

To mark the 250th anniversary of her birth, we’re pitting Jane Austen’s much-loved novels against each other in a battle of wit, charm and romance. Seven leading Austen experts have made their case for her ultimate leading man, but the winner is down to you. Cast your vote in the poll at the end of the article, and let us know the reason for your choice in the comments. It’s breeches at dawn.

Edward Ferrars, Sense and Sensibility

Championed by James Vigus, senior lecturer in English, Queen Mary University of London

Edward Ferrars, supposedly “idle and depressed”, gets a bad press. Even Elinor, who loves him, struggles to decipher his reserve. The explanation – his secret engagement to scheming Lucy Steele – seems discreditable. Yet among Sense and Sensibility’s showy, inadequate men, reticent Edward (alongside Colonel Brandon) is a hero.

Unlike Willoughby, who jilts Marianne to marry for money, Edward dutifully sticks with Lucy, wanting her to avoid penury. Significantly, Elinor approves. Edward has an “open affectionate heart”, an inwardness that contrasts with Willoughby’s more superficial “open affectionate manners”. And his “saucy” teasing of Marianne’s fashionable love of picturesque landscapes elicits her first-name-terms affection for him.

Edward, though, is serious – a Christian stoic like Elinor. Resistant to family pressure, he “always preferred” the church, an understated vocation. No orator, Edward speaks plainly: “I am grown neither humble nor penitent by what has passed. – I am grown very happy.” This happiness, the moral luck of gaining Elinor and a clergyman’s living, is credible because it’s deserved.

Henry Tilney, Northanger Abbey

Championed by Sarah Annes Brown, professor of English literature, Anglia Ruskin University

There are many reasons why I love Jane Austen, but the charm of her leading men isn’t high on the list. In Austen’s novels, a witty and charming male should be approached with extreme caution. He is likely to prove an unsuitable suitor who must be rejected in favour of someone worthier – and duller.

But Northanger Abbey’s Henry Tilney is the exception. This is particularly true of the earlier part of the novel. There, he teases Catherine by imagining how she’ll describe her first meeting with him at the Lower Rooms in Bath in her diary.

He then goes on to gossip about ladies’ fashions with chaperone Mrs Allen. She asks for his opinion on Catherine’s own gown: “It is very pretty, madam,” said he, gravely examining it; “but I do not think it will wash well; I am afraid it will fray.”

It is very difficult to imagine Mr Darcy concerning himself with such trifles.

Admittedly, Henry becomes a bit more finger-wagging in the second half of the novel – but then, he has been saddled with Austen’s silliest heroine.


This article is part of a series commemorating the 250th anniversary of Jane Austen’s birth. Despite having published only six books, she is one of the best-known authors in history. These articles explore the legacy and life of this incredible writer.


Colonel Brandon, Sense and Sensibility

Championed by Michael Meeuwis, associate professor of literature, University of Warwick

Austen wrote Colonel Brandon’s background to reflect the violence and seductions of the 18th-century novel. He nearly elopes with his brother’s wife Eliza, then he rescues Eliza and her daughter (also named Eliza) after seduction by someone else. Finally, he fights a duel with Willoughby over Eliza junior.

Here, Austen suggests that women in the 18th-century novel were generally so interchangeable they didn’t even need separate names. Sense and Sensibility’s heroine, Elinor, is magnificently unimpressed by his story. She “sighed over the fancied necessity of this; but to a man and a soldier she presumed not to censure it.”

Such wry commentary is only possible in a novel where quieter life prevails – and Brandon becomes a romantic hero of that world too. In marrying him, Marianne gains access to his library, where she may read – and perhaps even write – the kinds of books where women have names.

Edmund Bertram, Mansfield Park

Championed by Jane E. Wright, senior lecturer in English literature, University of Bristol

Edmund Bertram, the older cousin of Austen’s heroine, Fanny Price, in Mansfield Park, isn’t as dashing, wildly rich, or immediately appealing as some of Austen’s other leading men. A second son with a compromised inheritance, he is a matter-of-fact character training to be a clergyman. He also exhibits misjudgment in falling in love (or infatuation) with the unsuitable Mary Crawford.

However, in addition to his seriousness about the church and responsibility in managing his father’s estate, he is the only one of Austen’s leading men who – against his family’s unkindness – is not only consistently caring towards the leading lady, but both notices her intelligence and takes trouble to support it.

In the fluctuations of the novel’s plot, he and Fanny offer care, caution, and comfort to each other, so that, in some respects, they might be said to come to their eventual marriage on slightly more equal terms.

Fitzwilliam Darcy, Pride and Prejudice

Championed by Penny Bradshaw, associate professor of English literature, University of Cumbria

On one level, Mr Darcy needs no championing. Cultural evidence (from branded tea-towels and other merchandise, to multiple portrayals on screen) suggests that he remains the most popular of Austen’s heroes.

His “fine, tall person” and “handsome features” are clearly important factors here, but his chilly reserve and initial dismissal of Elizabeth Bennet as merely “tolerable” do not immediately endear him to the reader.

The source of Darcy’s very great appeal lies partly in the fact that he begins to love Elizabeth in spite of his own prejudices, and partly in what he loves her for: while he undoubtedly admires Lizzie’s appearance (including her “fine eyes”), his admiration extends to qualities which, at the time, were hardly typical of the fictional heroines of romance.

Lizzie bears little resemblance to the usually rather passive and often victimised heroines encountered in countless popular novels of the late-18th and early-19th century. Crucially, Darcy is drawn to the “liveliness” of Lizzie’s mind and as a hero he therefore validates a new kind of heroine: a woman whose wit and intelligence is as much a part of her attraction as physical appearance.

Captain Wentworth, Persuasion

Championed by Emrys D. Jones, senior lecturer in 18th-century literature and culture, King’s College London

Frederick Wentworth isn’t meant to be admired from a distance like certain other Austen love interests. At various points in Persuasion, his thoughts are relayed to us through the free indirect discourse that more usually channels the inner lives of Austen’s heroines. And then, in the extraordinary penultimate chapter of the novel, we get his longing and his frustration straight from the source, in probably the most beautiful love letter in the history of literary fiction.

“Tell me not that I am too late,” he implores Anne Elliot. Notwithstanding his illustrious naval career, Wentworth is more vulnerable in that moment than any of the leading men before him. He writes of his soul being pierced, of his feelings overpowering him, using language that would, anywhere else in Austen, be mocked as excessive or indulgent. Wentworth carries it off, and in doing so proves that he’s a different kind of hero.

George Knightley, Emma

Championed by Christine Hawkins, teaching associate in school of the arts, Queen Mary University of London

George Knightley is underappreciated. “A sensible man about seven or eight and thirty” of a “cheerful manner”, he is often undemonstrative, unshowy and cool. Not the classic dreamboat. But Knightley shows his worth through his honesty, trustworthiness and reliability.

Unlike the ostentatious Darcy, Knightley doesn’t offend and alienate everyone he meets. He is thoughtful and kind to others, championing the derided farmer Mr Martin, covering Harriet’s social embarrassment, and soothing the wounded feelings of Miss Bates. Knightley shows his sense of social responsibility. He is intelligent, practical and grounded.

Knightley is also Emma’s devoted lover: “I have not a fault to find with her … I love to look at her”. He sees her best qualities. But crucially, he questions her behaviour when he must (“I will tell you truths”) offering guidance and support when she acts wrongfully. Knightley is a secure, confident man, and his happy union with Emma is based on what every woman surely wants – equality and respect.

Now the experts have made their case, it’s your turn to decide which of Austen’s seven leading men is her best. Click the image below to vote in our poll, and see if other readers agree with you.

This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Jane Austen fight club: experts go head-to-head arguing for her best leading man – https://theconversation.com/jane-austen-fight-club-experts-go-head-to-head-arguing-for-her-best-leading-man-252756

Skin cancer: is HPV also a potential cause?

Source: The Conversation – UK – By Sarah Allinson, Professor, Department of Biomedical and Life Sciences, Lancaster University

HPV is a common group of viruses that can infect skin and other parts of the body. Anusorn Nakdee/Shutterstock

Skin cancer is typically caused by damage to the skin’s cells from ultraviolet radiation. But a recent case study has just shed light on another potential cause: human papillomavirus.

The report, which was published in the New England Journal of Medicine, focused on the case of a 34-year-old woman who had been diagnosed with over 40 squamous cell carcinomas (SCCs) – the second most common type of skin cancer.

The woman also had many wart-like growths in her mouth and on her skin. These were attributed to a human papillomavirus (HPV) infection.

Human papillomavirus is a common group of viruses that can infect skin and other parts of the body. HPV does not cause any problems or symptoms in most people, but in some cases it can cause warts and is even linked to certain types of cancer – such as cervical cancer.

The woman in the latest report was referred by her doctor to the team of researchers who conducted the case study. She had already undergone multiple surgeries and rounds of immunotherapy to remove a large squamous cell carcinoma that repeatedly grew back on her forehead. The patient’s doctor believed this might be due to a condition that made it more difficult for her immune cells to fight off the tumours.

The researchers performed a genetic analysis on this recurrent tumour to understand why it continued to grow back. Under normal circumstances, SCC tumours have a genetic signature that shows their mutations were caused by ultraviolet radiation. These mutations usually drive their growth.

However, this patient’s cancer didn’t have these signature mutations. Instead, the researchers found that the HPV living on her skin had integrated itself into the DNA of the tumour on her forehead. It seemed that the virus was actually driving the cancerous growth.

There are more than 200 different types of HPV, only a few of which have been associated with cancer. HPV19, which infects skin, had not previously been linked to cancer. But in this case, it had gone rogue and caused the carcinoma.

Unique case

It should be said that this case study is unique. Many factors combined to make it possible for the HPV infection to drive the recurrent growth of skin cancer.

The patient had a long history of health problems beginning in early childhood. This had brought her to the attention of researchers who were studying people who had problems with their immune system. A 2017 case report on her revealed that she had inherited mutations in two genes that play a role in immune function.

One of the mutated genes was ZAP70, which is involved in the normal function of a type of immune cell called a T-cell. This cell plays an essential role in helping the body successfully fight infections.

T-cells play a role in protecting the body against cancer and other pathogens.
ART-ur/ Shutterstock

Inherited changes in ZAP70 that prevent it from working were previously known to cause a condition called severe combined immunodeficiency. This condition is usually diagnosed in infancy and, if not treated with a stem cell transplant, leads to death within the first couple of years of life. Being in her late 20s at that time, the woman became the oldest patient ever to be diagnosed with a ZAP70 immune condition.

The second mutated gene, RNF168, is involved in repairing damage to DNA.

The new team decided to investigate whether it was the unique combination of mutations in both genes that was allowing the HPV infection to cause cancer. However, they concluded that the mutated RNF168 gene was a red herring.

The research team found that the patient’s RNF168 mutation was relatively common in the wider American population and wasn’t linked to any health issues. Further investigation of her cells also revealed that her DNA repair processes were functioning normally.

They then moved on to the ZAP70 gene. Here they found that although the patient’s ZAP70 gene was mutated, it still partly worked. This explained why she hadn’t succumbed to severe combined immunodeficiency in childhood. However, the mutation still made her immune system less effective. So because her T-cell response wasn’t fully functional, her body was unable to recognise and eliminate HPV-infected cells.

After receiving a stem cell transplant that replaced her immune cells with fully functioning ones from a donor, the woman made a complete recovery. The new T-cells were able to recognise and destroy the HPV-infected cells, including the skin cancer. Hopefully she will now remain cancer-free for years to come.

Immune health and cancer

This story highlights how important our immune system is in protecting us against cancer. Without it, even innocuous viruses that usually harmlessly co-exist on our skin can drive the formation of aggressive cancers.

It also demonstrates how modern genomic technology is transforming our understanding of disease. Without genetic sequencing, doctors would still be none the wiser about why this unfortunate woman had so many aggressive skin tumours.

But this study also raises questions about whether HPV-driven skin cancer could be a wider, previously unrecognised problem. The authors suggest that in the future, patients with aggressive and recurrent squamous cell carcinomas should be profiled for T-cell function and the presence of HPV infections. Like the woman in this story, they too might benefit from immune boosting therapies to treat their cancers.

The Conversation

Sarah Allinson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Skin cancer: is HPV also a potential cause? – https://theconversation.com/skin-cancer-is-hpv-also-a-potential-cause-262450

How poisoned data can trick AI − and how to stop it

Source: The Conversation – USA – By M. Hadi Amini, Associate Professor of Computing and Information Sciences, Florida International University

Data poisoning can make an AI system dangerous to use, potentially posing threats such as chemically poisoning a food or water supply. ArtemisDiana/iStock via Getty Images

Imagine a busy train station. Cameras monitor everything, from how clean the platforms are to whether a docking bay is empty or occupied. These cameras feed into an AI system that helps manage station operations and sends signals to incoming trains, letting them know when they can enter the station.

The quality of the information that the AI offers depends on the quality of the data it learns from. If everything is happening as it should, the systems in the station will provide adequate service.

But if someone tries to interfere with those systems by tampering with their training data – either the initial data used to build the system or data the system collects as it’s operating to improve – trouble could ensue.

An attacker could use a red laser to trick the cameras that determine when a train is coming. Each time the laser flashes, the system incorrectly labels the docking bay as “occupied,” because the laser resembles a brake light on a train. Before long, the AI might interpret this as a valid signal and begin to respond accordingly, delaying other incoming trains on the false rationale that all tracks are occupied. An attack like this, which corrupts information about the status of train tracks, could even have fatal consequences.

We are computer scientists who study machine learning, and we research how to defend against this type of attack.

Data poisoning explained

This scenario, where attackers intentionally feed wrong or misleading data into an automated system, is known as data poisoning. Over time, the AI begins to learn the wrong patterns, leading it to take actions based on bad data. This can lead to dangerous outcomes.

In the train station example, suppose a sophisticated attacker wants to disrupt public transportation while also gathering intelligence. For 30 days, they use a red laser to trick the cameras. Left undetected, such attacks can slowly corrupt an entire system, opening the way for worse outcomes such as backdoor attacks into secure systems, data leaks and even espionage. While data poisoning in physical infrastructure is rare, it is already a significant concern in online systems, especially those powered by large language models trained on social media and web content.

A famous example of data poisoning in the field of computer science came in 2016, when Microsoft debuted a chatbot known as Tay. Within hours of its public release, malicious users online began feeding the bot reams of inappropriate comments. Tay soon began parroting the same inappropriate terms as users on X (then Twitter), and horrifying millions of onlookers. Within 24 hours, Microsoft had disabled the tool and issued a public apology soon after.

The social media data poisoning of the Microsoft Tay model underlines the vast distance that lies between artificial and actual human intelligence. It also highlights the degree to which data poisoning can make or break a technology and its intended use.

Data poisoning might not be entirely preventable. But there are commonsense measures that can help guard against it, such as placing limits on data processing volume and vetting data inputs against a strict checklist to keep control of the training process. Mechanisms that can help to detect poisonous attacks before they become too powerful are also critical for reducing their effects.
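
As an illustration of what such vetting could look like in practice, here is a minimal sketch in Python. Every field name, label and limit in it is a hypothetical placeholder, not part of any real pipeline: records must pass a checklist and a per-source volume cap before they can influence training.

    from collections import defaultdict

    MAX_PER_SOURCE = 100   # assumed daily volume limit per data source

    def passes_checklist(sample: dict) -> bool:
        """Vet one candidate training record against a strict checklist."""
        return (
            sample.get("label") in {"occupied", "empty"}      # only expected labels
            and 0.0 <= sample.get("confidence", -1.0) <= 1.0  # sane sensor confidence
            and sample.get("source_id") is not None           # traceable origin
        )

    def ingest(stream):
        """Admit samples to training only if vetted and under the volume cap."""
        counts, accepted = defaultdict(int), []
        for sample in stream:
            if not passes_checklist(sample):
                continue                      # reject malformed or suspicious records
            src = sample["source_id"]
            if counts[src] >= MAX_PER_SOURCE:
                continue                      # one source cannot flood the training set
            counts[src] += 1
            accepted.append(sample)
        return accepted

    demo = [{"label": "occupied", "confidence": 0.93, "source_id": "cam-07"},
            {"label": "???", "confidence": 7.0, "source_id": None}]
    print(len(ingest(demo)))   # 1: the malformed record is rejected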

Fighting back with the blockchain

At Florida International University’s solid lab, we are working to defend against data poisoning attacks by focusing on decentralized approaches to building technology. One such approach, known as federated learning, allows AI models to learn from decentralized data sources without collecting raw data in one place. Centralized systems have a single point of failure, but decentralized ones cannot be brought down by way of a single target.

Federated learning offers a valuable layer of protection, because poisoned data from one device doesn’t immediately affect the model as a whole. However, damage can still occur if the process the model uses to aggregate data is compromised.
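
One way researchers try to harden that aggregation step is with robust statistics, such as a coordinate-wise trimmed mean, so that a few extreme poisoned updates cannot drag the combined model. The sketch below is our own minimal illustration of that general idea, with made-up numbers; it is not the specific mechanism of any deployed system.

    import numpy as np

    def trimmed_mean(updates: np.ndarray, trim: int = 1) -> np.ndarray:
        """Average client updates after dropping the `trim` largest and
        smallest values in each coordinate."""
        return np.sort(updates, axis=0)[trim:-trim].mean(axis=0)

    rng = np.random.default_rng(42)
    honest = rng.normal([0.1, -0.2, 0.05], 0.01, size=(9, 3))   # 9 honest clients
    poisoned = np.array([[5.0, 5.0, 5.0]])                      # 1 attacker's extreme update
    updates = np.vstack([honest, poisoned])

    print(updates.mean(axis=0))   # plain mean: pulled far toward the attacker
    print(trimmed_mean(updates))  # trimmed mean: stays near the honest consensus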

This is where another more popular potential solution – blockchain – comes into play. A blockchain is a shared, unalterable digital ledger for recording transactions and tracking assets. Blockchains provide secure and transparent records of how data and updates to AI models are shared and verified.

By using automated consensus mechanisms, AI systems with blockchain-protected training can validate updates more reliably and help identify the kinds of anomalies that sometimes indicate data poisoning before it spreads.

Blockchains also have a time-stamped structure that allows practitioners to trace poisoned inputs back to their origins, making it easier to reverse damage and strengthen future defenses. Blockchains are also interoperable – in other words, they can “talk” to each other. This means that if one network detects a poisoned data pattern, it can send a warning to others.
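
To make the ledger idea concrete, here is a toy hash chain in Python (a deliberately simplified illustration; a real blockchain adds distributed consensus, signatures and much more). Each record of a model update embeds the hash of the previous record, so quietly rewriting history becomes detectable:

    import hashlib, json, time

    def block_hash(block: dict) -> str:
        body = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def make_block(prev_hash: str, update: dict) -> dict:
        block = {"timestamp": time.time(), "update": update, "prev_hash": prev_hash}
        block["hash"] = block_hash(block)
        return block

    def verify(chain: list) -> bool:
        """Recompute every hash and check each link to its predecessor."""
        prev = "0" * 64
        for block in chain:
            if block["prev_hash"] != prev or block_hash(block) != block["hash"]:
                return False
            prev = block["hash"]
        return True

    chain = [make_block("0" * 64, {"client": "cam-07", "update_norm": 0.12})]
    chain.append(make_block(chain[-1]["hash"], {"client": "cam-12", "update_norm": 0.09}))

    print(verify(chain))                        # True: ledger is intact
    chain[0]["update"]["update_norm"] = 9.9     # attacker quietly rewrites a record...
    print(verify(chain))                        # False: tampering breaks the hashes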

At solid lab, we have built a new tool that leverages both federated learning and blockchain as a bulwark against data poisoning. Other solutions are coming from researchers who are using prescreening filters to vet data before it reaches the training process, or simply training their machine learning systems to be extra sensitive to potential cyberattacks.

Ultimately, AI systems that rely on data from the real world will always be vulnerable to manipulation. Whether it’s a red laser pointer or misleading social media content, the threat is real. Using defense tools such as federated learning and blockchain can help researchers and developers build more resilient, accountable AI systems that can detect when they’re being deceived and alert system administrators to intervene.

The Conversation

M. Hadi Amini has received funding for researching security of transportation systems from U.S. Department of Transportation. Opinions expressed represent his personal or professional opinions and do not represent or reflect the position of Florida International University.

This work was partly supported by the National Center for Transportation Cybersecurity and Resiliency (TraCR). Any opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of TraCR, and the U.S. Government assumes no liability for the contents or use thereof.

Ervin Moore has received funding for researching security of transportation systems from U.S. Department of Transportation. Opinions expressed represent his personal or professional opinions and do not represent or reflect the position of Florida International University.

This work was partly supported by the National Center for Transportation Cybersecurity and Resiliency (TraCR). Any opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of TraCR, and the U.S. Government assumes no liability for the contents or use thereof.

ref. How poisoned data can trick AI − and how to stop it – https://theconversation.com/how-poisoned-data-can-trick-ai-and-how-to-stop-it-256423

Spiderweb silks and architectures reveal millions of years of evolutionary ingenuity

Source: The Conversation – USA – By Ella Kellner, Ph.D. Student in Biological Sciences, University of North Carolina – Charlotte

An orchard orb weaver spider rests in the center of her web. Daniela Duncan/Moment via Getty Images

Have you ever walked face-first into a spiderweb while on a hike? Or swept away cobwebs in your garage?

You may recognize the orb web as the classic Halloween decoration or cobwebs as close neighbors with your dust bunnies. These are just two among the many types of spiderweb architectures, each with a unique structure specially attuned to the spider’s environment and the web’s intended job.

While many spiders use their webs to catch prey, they have also evolved unusual ways to use their silk, from wrapping their eggs to acting as safety lines that catch them when they fall.

As a materials scientist who studies spiders and their silks, I am curious about the relationship between spiderweb architecture and the strength of the silks spiders use. How do the design of a web and the properties of the silk used affect a spider’s ability to catch its next meal?

Webs’ ancient origins

Spider silk has a long evolutionary history. Researchers believe that it first evolved around 400 million years ago. These ancestral spiders used silk to line their burrows, protect their vulnerable eggs and create sensory paths and guidelines as they navigated their environment.

To understand what ancient spiderwebs could have looked like, scientists look to the lampshade spider. This spider lives in rock outcroppings in the Appalachian and Rocky mountains. It is a living relative of some of the most ancient spiders to ever make webs, and it hasn’t changed much at all since web-building first evolved.

A lampshade spider in its distinctive web between rocks.
Tyler Brown, CC BY-SA

Aptly named for its web shape, the lampshade spider makes a web with a narrow base that widens outward. These webs fill the cracks between rocks where the spider can be camouflaged against the rough surface. It’s hard for a prospective meal to traverse this rugged landscape without being ensnared.

Web diversity

Today, all spider species produce silk. Each species creates its own specific web architecture that is uniquely suited to the type of prey it eats and the environment it lives in.

Take the orb web, for example. These are aerial, two-dimensional webs featuring a distinctive spiral. They mostly catch flying or jumping prey, such as flies and grasshoppers. Orb webs are found in open areas, such as on treelines, in tall grasses or between your tomato plants.

A black widow spider builds three-dimensional cobwebs.
Karen Sloane-Williams/500Px Plus via Getty Images

Compare that to the cobweb, a structure that is most often seen by the baseboards in your home. While the term cobweb is commonly used to refer to any dusty, abandoned spiderweb, it is actually a specific web shape typically designed by spiders in the family Theridiidae. This spiderweb has a complex, three-dimensional architecture. Lines of silk extend downwards from the 3D tangle and are held affixed to the ground under high tension. These lines act as a sticky, spring-loaded booby trap to capture crawling prey such as ants and beetles. When an insect makes contact with the glue at the base of the line, the silk detaches from the ground, sometimes with enough force to lift the meal into the air.

Watch a redback spider build the high-tension lines of a cobweb and ensnare unsuspecting ants.

Web weirdos

Imagine you are an unsuspecting beetle, navigating your way between strands of grass when you come upon a tightly woven silken floor. As you begin to walk across the mat, you see eight eyes peeking out of a silken funnel – just before you’re quickly snatched up as a meal.

Spiders such as funnel-web weavers construct thick silk mats on the ground that they use as an extension of their sensory systems. The spider waits patiently in its funnel-shaped retreat. Prey that come in contact with the web create vibrations that alert the spider a tasty treat is walking across the welcome mat and it’s time to pounce.

A funnel-web spider peeks out of its web in the ground.
sandra standbridge/Moment via Getty Images

Jumping spiders are another unusual web spinner. They are well known for their varied colorations, elaborate courtship dances and being some of the most charismatic arachnids. Their cuteness has made them popular, thanks to Lucas the Spider, an adorable cartoon jumping spider animated by Joshua Slice. With two huge front eyes giving them depth perception, these spiders are fantastic hunters, capable of jumping in any direction to navigate their environment and hunt.

But what happens when they misjudge a jump, or worse, need to escape a predator? Jumpers use their silk as a safety tether to anchor themselves to surfaces before leaping through the air. If the jump goes wrong, they can climb back up their tether, allowing them to try again. Not only does this safety line of silk give them a chance for a redo, it also helps with making the jump. The tether helps them control the direction and speed of their jump in midair. By changing how fast they release the silk, they can land exactly where they want to.

A jumping spider uses a safety tether of silk as it makes a risky jump.
Fresnelwiki/Wikimedia Commons, CC BY-SA

To weave a web

All webs, from the orb web to the seemingly chaotic cobweb, are built through a series of basic, distinct steps.

Orb-weaving spiders usually start with a proto-web. Scientists think this initial construction is an exploratory stage, when the spider assesses the space available and finds anchor points for its silk. Once the spider is ready to build its main web, it will use the proto-web as a scaffold to create the frame, spokes and spiral that will help with absorbing energy and capturing prey. These structures are vital for ensuring that their next meal won’t rip right through the web, especially insects such as dragonflies that have an average cruising speed of 10 mph. When complete, the orb weaver will return to the center of the web to wait for its next meal.

The diversity in a spider’s web can’t all be achieved with one material. In fact, spiders can create up to seven types of silk, and orb weavers make them all. Each silk type has different material and mechanical properties, serving a specific use within the spider’s life. All spider silk is created in the silk glands, and each different type of silk is created by its own specialized gland.

A European garden spider builds a two-dimensional orb web.
Massimiliano Finzi/Moment via Getty Images

Orb weavers rely on the stiff nature of the strongest fibers in their arsenal for framing webs and as a safety line. Conversely, the capture spiral of the orb web is made with extremely stretchy silk. When a prey item gets caught in the spiral, the impact pulls on the silk lines. These fibers stretch to dissipate the energy to ensure the prey doesn’t just tear through the web.

Spider glue is a modified silk type with adhesive properties and the only part of the spiderweb that is actually sticky. This gluey silk, located on the capture spiral, helps make sure that the prey stays stuck in the web long enough for the spider to deliver a venomous bite.

To wrap up

Spiders and their webs are incredibly varied. Each spider species has adapted to live within its environmental niche and capture certain types of prey. Next time you see a spiderweb, take a moment to observe it rather than brushing it away or squishing the spider inside.

Notice the differences in web structure, and see whether you can spot the glue droplets. Look for the way that the spider is sitting in its web. Is it currently eating, or are there discarded remains of the insects it has prevented from wandering into your home?

Observing these arachnid architects can reveal a lot about design, architecture and innovation.

The Conversation

Ella Kellner does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Spiderweb silks and architectures reveal millions of years of evolutionary ingenuity – https://theconversation.com/spiderweb-silks-and-architectures-reveal-millions-of-years-of-evolutionary-ingenuity-261928

How to improve the monitoring of chemical contaminants in the human body

Source: The Conversation – France – By Chang He, Professor of environmental sciences, The University of Queensland

From pesticides in our food to hormone disruptors in our kitchen pans, modern life is saturated with chemicals, exposing us to unknown long-term health impacts.

One of the surest routes to quantifying these impacts is the scientific method of biomonitoring, which consists of measuring the concentration of chemicals in biological specimens such as blood, hair or breastmilk. These measurable indicators are known as biomarkers.

Currently, very few biomarkers are available to assess the impact of chemicals on human health, even though 10 million new substances are developed and introduced to the market each year.

My research aims to bridge this gap by identifying new biomarkers of chemicals of emerging concern in order to assess their health effects.

What makes a good biomarker

One of the difficulties of biomonitoring is that once absorbed in our bodies, chemical pollutants are typically processed into one or more breakdown substances, known as metabolites. As a result, many chemicals go under the radar.

In order to understand what happens to a chemical once it has entered a living organism, researchers can use various techniques, including approaches based on computer modelling (in silico models), tests carried out on cell cultures (in vitro approaches), and animal tests (in vivo) to identify potential biomarkers.

The challenge is to find biomarkers that allow us to draw a link between contamination by a toxic chemical and the potential health effects. These biomarkers may be the toxic product itself or the metabolites left in its wake.

But what is a “good” biomarker? In order to be effective in human biomonitoring, it must meet several criteria.

First, it should directly reflect the type of chemical to which people are exposed. This means it must be a direct product of the chemical and help pinpoint the level of exposure to it.

Second, a good biomarker should be stable enough to be detectable in the body for a sufficient period without further metabolization. This stability ensures that the biomarker can be measured reliably in biological samples, thus providing an accurate assessment of exposure levels.

Third, a good biomarker should enable precise evaluation. It must be specific to the chemical of interest without interference from other substances. This specificity is critical for accurately interpreting biomonitoring data and making informed decisions about health risks and regulatory measures.


Two examples of ‘bad’ biomarkers

One example of a “bad” biomarker involves the diester metabolites of organophosphate esters. These compounds are high-production-volume chemicals widely used in household products as flame retardants and plasticizers, and are suspected to have adverse effects on the environment and human health.

Recent findings showed that organophosphate esters and their diester metabolites coexist in the environment. Because people can be exposed to the diesters directly, rather than only producing them by metabolizing the parent compounds, using diesters as biomarkers to estimate human contamination by organophosphate esters leads to an overestimation.

Using an inappropriate biomarker may also lead to an underestimation of the concentration of a compound. An example relates to chlorinated paraffins, persistent organic pollutants that are also used as flame retardants in household products. In biomonitoring, researchers measure the original form of chlorinated paraffins because of their assumed persistence in humans. However, their levels in human samples are much lower than those in the environment, which suggests that human biomonitoring underestimates them.

Recently, my team has found the potential for biodegradation of chlorinated paraffins. This could explain the difference between measurements taken in the environment and those taken in living organisms. We are currently working on the identification of appropriate biomarkers of these chemicals.

Current limitations in human biomonitoring

Despite the critical importance of biomarkers, several limitations hinder their effective use in human biomonitoring.

A significant challenge is the limited number of human biomarkers available compared to the vast number of chemicals we are exposed to daily. Existing biomonitoring programmes designed to assess contamination in humans are only capable of tracking a few hundred biomarkers at best, a small fraction of the tens of thousands of markers that environmental monitoring programmes use to report pollution.

Moreover, humans are exposed to a cocktail of chemicals daily, which can compound their adverse effects and complicates the assessment of cumulative exposure. The pathways of exposure, such as inhalation, ingestion and dermal contact, add another layer of complexity.

Another limitation of current biomarkers is the reliance on extrapolation from in vitro and in vivo models to human contexts. While these models provide valuable insights, they do not always accurately reflect human metabolism and exposure scenarios, leading to uncertainties in risk assessment and management.

To address these challenges, my research aims to establish a workflow for the systematic identification and quantification of chemical biomarkers. The goal is to improve the accuracy and applicability of biomonitoring in terms of human health.

Innovative approaches in biomarker research

We aim to develop a framework for biomarker identification that could be used to ensure that newly identified biomarkers are relevant, stable and specific.

This framework includes advanced sampling methods, state-of-the-art analytical techniques, and robust systems for data interpretation. For instance, by combining advanced chromatographic techniques, which enable the various components of a biological sample to be separated very efficiently, with highly accurate methods of analysis (high-resolution mass spectrometry), we can detect and quantify biomarkers with greater sensitivity and specificity.

This allows for the identification of previously undetectable or poorly understood biomarkers, expanding the scope of human biomonitoring.
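
To make that idea of specificity concrete, here is a minimal sketch – not the team’s actual pipeline – of how high-resolution mass spectrometry data might be screened against candidate biomarkers: each measured peak is matched to a theoretical mass within a tight parts-per-million window. The compound names and mass values below are illustrative assumptions, not data from the research.

```python
# A minimal sketch, not the researchers' actual workflow: screening
# high-resolution mass-spectrometry (HRMS) peaks against a list of
# candidate biomarker masses. All names and values are illustrative.

PPM_TOLERANCE = 5.0  # a typical HRMS mass-accuracy window, in parts per million

# Hypothetical candidate biomarkers with monoisotopic masses (Da).
candidates = {
    "diphenyl phosphate (OPE diester)": 250.0395,
    "hypothetical CP metabolite": 398.9123,
}

def ppm_error(measured: float, theoretical: float) -> float:
    """Mass error of a measured peak relative to a theoretical mass, in ppm."""
    return (measured - theoretical) / theoretical * 1e6

def annotate_peaks(peaks: list[float]) -> list[tuple[float, str, float]]:
    """Return (measured_mass, candidate_name, ppm_error) for every peak
    that falls within the tolerance window of a candidate biomarker."""
    hits = []
    for mz in peaks:
        for name, mass in candidates.items():
            err = ppm_error(mz, mass)
            if abs(err) <= PPM_TOLERANCE:
                hits.append((mz, name, err))
    return hits

# Example: two peaks from a simulated chromatographic run; only the first
# matches a candidate, illustrating how tight mass windows filter noise.
for mz, name, err in annotate_peaks([250.0398, 312.1507]):
    print(f"{mz:.4f} Da matched {name} ({err:+.1f} ppm)")
```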

Additionally, the development of standardized protocols for sample collection and analysis ensures consistency and reliability across different studies and monitoring programmes, which is crucial for comparing data and drawing meaningful conclusions about exposure trends and health risks.

This multidisciplinary approach should provide a more comprehensive understanding of human exposure to hazardous chemicals. The resulting data could form a basis for improving prevention and adapting regulations to limit harmful exposure.


Created in 2007 to help accelerate and share scientific knowledge on key societal issues, the AXA Research Fund has supported nearly 700 projects around the world conducted by researchers in 38 countries. To learn more, visit the website of the AXA Research Fund or follow @AXAResearchFund on X.

The Conversation

Chang He received funding from the AXA Research Fund.

ref. How to improve the monitoring of chemical contaminants in the human body – https://theconversation.com/how-to-improve-the-monitoring-of-chemical-contaminants-in-the-human-body-233255

4 out of 5 US troops surveyed understand the duty to disobey illegal orders

Source: The Conversation – USA – By Charli Carpenter, Professor of political science, UMass Amherst

National Guard members arrive at the Guard’s headquarters at D.C. Armory on Aug. 12, 2025 in Washington. Anna Moneymaker/Getty Images

With his Aug. 11, 2025, announcement that he was sending the National Guard – along with federal law enforcement – into Washington, D.C. to fight crime, President Donald Trump edged U.S. troops closer to the kind of military-civilian confrontations that can cross ethical and legal lines.

Indeed, since Trump returned to office, many of his actions have alarmed international human rights observers. His administration has deported immigrants without due process, held detainees in inhumane conditions, threatened the forcible removal of Palestinians from the Gaza Strip and deployed both the National Guard and federal military troops to Los Angeles to quell largely peaceful protests.

When a sitting commander in chief authorizes acts like these, which many assert are clear violations of the law, men and women in uniform face an ethical dilemma: How should they respond to an order they believe is illegal?

The question may already be affecting troop morale. “The moral injuries of this operation, I think, will be enduring,” a National Guard member who had been deployed to quell public unrest over immigration arrests in Los Angeles told The New York Times. “This is not what the military of our country was designed to do, at all.”

Troops who are ordered to do something illegal are put in a bind – so much so that some argue that troops themselves are harmed when given such orders. They are not trained in legal nuances, and they are conditioned to obey. Yet if they obey “manifestly unlawful” orders, they can be prosecuted. Some analysts fear that U.S. troops are ill-equipped to recognize this threshold.

We are scholars of international relations and international law. We conducted survey research at the University of Massachusetts Amherst’s Human Security Lab and discovered that many service members do understand the distinction between legal and illegal orders, the duty to disobey certain orders, and when they should do so.

A man in a blue jacket, white shirt and red tie at a lectern, speaking.
President Donald Trump, flanked by Secretary of Defense Pete Hegseth and Attorney General Pam Bondi, announced at a White House news conference on Aug. 11, 2025, that he was deploying the National Guard to assist in restoring law and order in Washington.
Hu Yousong/Xinhua via Getty Images

Compelled to disobey

U.S. service members take an oath to uphold the Constitution. In addition, under Article 92 of the Uniform Code of Military Justice and the U.S. Manual for Courts-Martial, service members must obey lawful orders and disobey unlawful orders. Unlawful orders are those that clearly violate the U.S. Constitution, international human rights standards or the Geneva Conventions.

Service members who follow an illegal order can be held liable and court-martialed or subject to prosecution by international tribunals. Following orders from a superior is no defense.

Our poll, fielded between June 13 and June 30, 2025, shows that service members understand these rules. Of the 818 active-duty troops we surveyed, just 9% stated that they would “obey any order,” only 9% “didn’t know,” and only 2% had “no comment” – leaving roughly four out of five who recognized limits on their obligation to obey.

When asked to describe unlawful orders in their own words, about 25% of respondents wrote about their duty to disobey orders that were “obviously wrong,” “obviously criminal” or “obviously unconstitutional.”

Another 8% spoke of immoral orders. One respondent wrote that “orders that clearly break international law, such as targeting non-combatants, are not just illegal — they’re immoral. As military personnel, we have a duty to uphold the law and refuse commands that betray that duty.”

Just over 40% of respondents listed specific examples of orders they would feel compelled to disobey.

The most common unprompted response, cited by 26% of those surveyed, was “harming civilians,” while another 15% of respondents gave a variety of other examples of violations of duty and law, such as “torturing prisoners” and “harming U.S. troops.”

One wrote that “an order would be obviously unlawful if it involved harming civilians, using torture, targeting people based on identity, or punishing others without legal process.”

An illustration of responses such as 'I'd disobey if illegal' and 'I'd disobey if immoral.'
A tag cloud of responses to UMass-Amherst’s Human Security Lab survey of active-duty service members about when they would disobey an order from a superior.
UMass-Amherst’s Human Security Lab, CC BY

Soldiers, not lawyers

But the open-ended answers pointed to another struggle troops face: Some no longer trust U.S. law as useful guidance.

Writing in their own words about how they would recognize an illegal order, more troops cited international law than U.S. law as their standard of illegality.

Others implied that acts that are illegal under international law might become legal in the U.S.

“Trump will issue illegal orders,” wrote one respondent. “The new laws will allow it,” wrote another. A third wrote, “We are not required to obey such laws.”

Several emphasized the U.S. political situation directly in their remarks, stating they’d disobey “oppression or harming U.S. civilians that clearly goes against the Constitution” or an order for “use of the military to carry out deportations.”

Still, the percentage of respondents who said they would disobey specific orders – such as torture – is lower than the percentage of respondents who recognized the responsibility to disobey in general.

This is not surprising: Troops are trained to obey and face numerous social, psychological and institutional pressures to do so. By contrast, most troops receive relatively little training in the laws of war or human rights law.

Political scientists have found, however, that having information on international law affects attitudes about the use of force among the general public. It can also affect decision-making by military personnel.

This finding was also borne out in our survey.

When we explicitly reminded troops that shooting civilians was a violation of international law, their willingness to disobey increased 8 percentage points.

Drawing the line

As my research with another scholar showed in 2020, even thinking about law and morality can make a difference in opposition to certain war crimes.

The preliminary results from our survey led to a similar conclusion. Troops who answered questions on “manifestly unlawful orders” before they were asked questions on specific scenarios were much more likely to say they would refuse those specific illegal orders.

When asked if they would follow an order to drop a nuclear bomb on a civilian city, for example, 69% of troops who received that question first said they would obey the order.

But when the respondents were asked to think about and comment on the duty to disobey unlawful orders before being asked if they would follow the order to bomb, the percentage who would obey the order dropped 13 points to 56%.
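
The arithmetic behind that framing effect is simple, but it is worth distinguishing percentage points from relative change – a quick sketch using only the percentages reported above:

```python
# A back-of-the-envelope check on the framing effect the survey reports.
# The two percentages come from the article; the arithmetic is the only
# thing added here.

obey_first = 69.0   # % who would obey when asked about the bombing order first
obey_primed = 56.0  # % who would obey after reflecting on unlawful orders

point_drop = obey_first - obey_primed            # 13 percentage points
relative_drop = point_drop / obey_first * 100    # ~19% fewer would obey

print(f"{point_drop:.0f} percentage points, a {relative_drop:.0f}% relative drop")
```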

While many troops said they might obey questionable orders, the large number who would not is remarkable.

Military culture makes disobedience difficult: Soldiers can be court-martialed for obeying an unlawful order, or for disobeying a lawful one.

Yet between one-third and half of the U.S. troops we surveyed would be willing to disobey if ordered to shoot or starve civilians, torture prisoners or drop a nuclear bomb on a city.

The service members described the methods they would use. Some would confront their superiors directly. Others imagined indirect methods: asking questions, creating diversions, going AWOL, “becoming violently ill.”

Criminologist Eva Whitehead researched actual cases of troop disobedience of illegal orders and found that when some troops disobey – even indirectly – others can more easily find the courage to do the same.

Whitehead’s research showed that those who refuse to follow illegal or immoral orders are most effective when they stand up for their actions openly.

The initial results of our survey – coupled with a recent spike in calls to the GI Rights Hotline – suggest American men and women in uniform don’t want to obey unlawful orders.

Some are standing up loudly. Many are thinking ahead to what they might do if confronted with unlawful orders. And those we surveyed are looking for guidance from the Constitution and international law to determine where they may have to draw that line.

Zahra Marashi, an undergraduate research assistant at the University of Massachusetts Amherst, contributed to the research for this article.

The Conversation

Charli Carpenter directs Human Security Lab which has received funding from University of Massachusetts College of Social and Behavioral Sciences, the National Science Foundation, and the Lex International Fund of the Swiss Philanthropy Foundation.

Geraldine Santoso and Laura K Bradshaw-Tucker do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. 4 out of 5 US troops surveyed understand the duty to disobey illegal orders – https://theconversation.com/4-out-of-5-us-troops-surveyed-understand-the-duty-to-disobey-illegal-orders-261929

Where America’s CO2 emissions come from – what you need to know, in charts

Source: The Conversation – USA (2) – By Kenneth J. Davis, Professor of Atmospheric and Climate Science, Penn State

Vehicles, energy production and industry are the largest emissions sources in the U.S. David McNew/Getty Images

Earth’s atmosphere contains carbon dioxide, which is good for life on Earth – in moderation. Plants use CO2 as the source of the carbon they build into leaves and wood via photosynthesis. In combination with water vapor, CO2 insulates the Earth, keeping it from turning into a frozen world. Life as we know it on Earth would not exist without CO2 in the atmosphere.

Since the industrial revolution began, however, humans have been adding more and more carbon dioxide to the Earth’s atmosphere, and it has become a problem.

The atmospheric concentration of CO2 has risen by more than 50% since industries began burning coal and other fossil fuels in the late 1700s, reaching concentrations that haven’t been found in the Earth’s atmosphere in at least a million years. And the concentration continues to rise.

A line chart shows atmospheric carbon dioxide concentrations mostly stable for hundreds of years and then rising with the start of the industrial revolution, and accelerating their rise starting in the mid-1900s.

Chart from Scripps Institution of Oceanography at UC San Diego, CC BY

Excess CO2 drives global warming

Who cares? Everyone should.

More CO2 in the air means temperatures at the Earth’s surface rise. As temperature rises, the water cycle accelerates, leading to more floods and droughts. Glaciers melt, and warmer ocean water expands, raising sea levels.

We are living with an increasing frequency or intensity of wildfires, heat waves, flooding and hurricanes, all influenced by increasing CO2 concentrations in the atmosphere.

The ocean also absorbs some of that CO2, making the water increasingly acidic, which can harm species crucial to the marine food chain.

Where is this additional CO2 coming from?

The biggest source of additional CO2 is the combustion of fossil fuels – oil, natural gas and coal – to power vehicles, electricity generation and industries. Each of these fuels consists of hydrocarbons built by plants that grew on the Earth over the past few hundred million years.

These plants took CO2 out of the planet’s atmosphere, died, and their biomass was buried in water and sediments.

Today, humans are reversing hundreds of millions of years of carbon accumulation by digging these fuels out of the Earth and burning them to provide energy.

Let’s dig a little deeper.

Where do CO2 emissions come from in the US?

The Environmental Protection Agency has tracked U.S. greenhouse gas emissions for years.

The U.S. emitted 5,053 million metric tons of CO2 into the atmosphere in 2022, the last year for which a complete emissions inventory is available. We also emit other greenhouse gases, including methane, from natural gas production and animal agriculture, and nitrous oxide, created when microbes digest nitrogen fertilizer. But carbon dioxide is about 80% of all U.S. greenhouse gas emissions.

Of those 5,053 million metric tons of CO2 emitted by the U.S. in 2022, 93% came from the combustion of fossil fuels.

More specifically: about 35% of the CO2 emissions were from transportation, 30% from the generation of electric power, and 16%, 7% and 5% from on-site consumption of fossil fuels by industrial, residential and commercial buildings, respectively. Electric power generation served industrial, residential and commercial buildings roughly equally.
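
Those shares can be turned into absolute quantities with simple arithmetic. The sketch below uses only the 2022 total and the percentages quoted above; results are approximate:

```python
# Converting the EPA sector shares quoted above into absolute emissions.
# Total and percentages are from the article; only the arithmetic is added.

TOTAL_MT = 5053  # million metric tons of CO2 emitted by the U.S. in 2022

shares = {
    "transportation": 0.35,
    "electric power": 0.30,
    "industrial (on-site)": 0.16,
    "residential (on-site)": 0.07,
    "commercial (on-site)": 0.05,
}

for sector, share in shares.items():
    print(f"{sector:>22}: {share * TOTAL_MT:,.0f} Mt CO2")

# The shares sum to 0.93, matching the 93% attributed to fossil fuel combustion.
```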

What fossil fuels are being burned?

Transportation is dominated by petroleum products, or oil – think gasoline and diesel fuel.

Nationwide, power plants consume roughly equal fractions of coal and natural gas. Natural gas use has been rising and coal use falling in this sector, a trend driven by the rapid expansion of the U.S. shale gas industry.

U.S. forests are removing CO2 from the atmosphere, but not rapidly enough to offset human emissions. U.S. forests removed and stored about 920 million metric tons of CO2 in 2022.

How US CO2 emissions have changed

Emissions from the U.S. peaked around 2005 at 6,217 million metric tons of CO2. Since then, emissions have been decreasing slowly, largely driven by the replacement of coal by natural gas in electricity production.
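
Two quick calculations follow from the figures in this article: the relative decline since the 2005 peak, and net emissions once the forest sink described earlier is subtracted.

```python
# Simple arithmetic on the totals quoted in this article.

peak_2005 = 6217    # Mt CO2, U.S. peak emissions
total_2022 = 5053   # Mt CO2, U.S. emissions in 2022
forest_sink = 920   # Mt CO2 removed by U.S. forests in 2022

decline = (peak_2005 - total_2022) / peak_2005 * 100
net_2022 = total_2022 - forest_sink

print(f"Decline since 2005 peak: {decline:.0f}%")                    # ~19%
print(f"Net 2022 emissions after forest sink: {net_2022:,} Mt CO2")  # 4,133
```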

Some additional notable trends will impact the future:

First, the U.S. economy has become more energy efficient over time, increasing productivity while decreasing emissions.

Second, solar and wind energy generation, while still a modest fraction of total energy production, has grown steadily in recent years and emits essentially no CO2 into the atmosphere. If the nation increasingly relies on renewable energy sources and reduces burning of fossil fuels, it will dramatically reduce its CO2 emissions.

Solar and wind have become cheaper sources of new generating capacity than natural gas and coal, but the Trump administration is cutting federal support for renewable energy and doubling down on subsidies for fossil fuels. The growth of data centers is also expected to increase demand for electricity. How the U.S. meets that demand will shape national CO2 emissions in the years ahead.

How US emissions compare globally

The U.S. ranked second in CO2 emissions worldwide in 2022, behind China, which emitted about 12,000 million metric tons of CO2. China’s annual CO2 emissions surpassed U.S. emissions in 2005 or 2006.

Added up over time, however, the U.S. has emitted more CO2 into the atmosphere than any other nation, and we still emit more CO2 per person than most other industrialized nations. Chinese and European emissions are both roughly half of U.S. emissions on a per capita basis.
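
A rough per-capita check is consistent with that comparison. The population figures below are approximate 2022 values and are not from the article:

```python
# A rough per-capita comparison using the emissions totals in this article.
# Population figures are approximate 2022 values, added for illustration.

emissions_mt = {"United States": 5053, "China": 12000}   # Mt CO2, 2022
population_m = {"United States": 333, "China": 1412}     # millions, approximate

for country in emissions_mt:
    per_capita = emissions_mt[country] / population_m[country]  # tonnes per person
    print(f"{country}: ~{per_capita:.1f} t CO2 per person")

# ~15 t/person for the U.S. vs ~8.5 t/person for China -- consistent with the
# article's "roughly half" per-capita comparison.
```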

Greenhouse gases in the atmosphere mix evenly around the globe, so emissions from industrialized nations affect the climate in developing countries that have benefited very little from the energy created by burning fossil fuels.

The takeaway

There have been some promising downward trends in U.S. CO2 emissions and upward trends in renewable energy sources, but political winds and increasing energy demands threaten progress in reducing emissions.

Reducing emissions in all sectors is needed to slow and eventually stop the rise of atmospheric CO2 concentrations. The world has the technological means to make large reductions in emissions. CO2 emitted into the atmosphere today lingers in the atmosphere for hundreds to thousands of years. The decisions we make today will influence the Earth’s climate for a very long time.

The Conversation

Kenneth J. Davis does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Where America’s CO2 emissions come from – what you need to know, in charts – https://theconversation.com/where-americas-co-sub-2-sub-emissions-come-from-what-you-need-to-know-in-charts-258904