Hacking the bomb? What Claude Mythos AI reveals about the gamble of nuclear deterrence

Source: The Conversation – France – By Thomas Fraise, Postdoctoral research fellow, University of Copenhagen; Sciences Po

Frontier AI-first cybersecurity platforms like OpenAI’s “Daybreak” and Anthropic’s newest Claude “Mythos” model are at the forefront of artificial intelligence, but their advanced capabilities in offensive cybersecurity are a source of both fascination and concern.


In 1983, the film WarGames imagined a teenager who accidentally accessed a Pentagon computer system and triggered a simulation program, subsequently interpreted as the prelude to a nuclear war. The film made such an impression on Ronald Reagan that he asked his advisers whether such an intrusion into America’s most sensitive systems was possible. A week later, the answer came: “Mr. President, the problem is far worse than you think.”

Nuclear weapons policies are based on a series of bets, often far-reaching, on the future of nuclear deterrence. First, nuclear-armed countries gamble that the fear of retaliation will always be enough to prevent an adversary from striking first, and that they will always have the expertise and luck necessary to prevent accidental explosions. They bet that possessing nuclear weapons will remain a source of security rather than insecurity in decades to come.

However, as my colleagues Sterre van Buuren and Benoît Pelopidas and I demonstrate, there are several plausible future scenarios in which possessing nuclear weapons will generate more real costs than potential benefits in a world that has warmed by several degrees. Maintaining a credible and safe arsenal will require budgetary choices at the expense of other urgent spending made necessary by the climate crisis.

The universe of existential risks that could justify the use of nuclear weapons may also be expanding. For example, experts worry that water shortages in Pakistan and India could become fertile ground for a conflict leading to nuclear escalation.

But there is another, more implicit bet involved here: that nuclear arsenals, which are complex, highly digitalised technological systems, offer no cyber vulnerabilities that could be exploited by an actor seeking to disrupt their normal functioning.

The recent breakthrough of Anthropic’s latest AI model Claude Mythos reveals just how much the conditions of that bet could change in the long term.

Mythos and the future of cybersecurity

“Mythos” was launched on April 7, 2026 by the public benefit corporation Anthropic – which markets the Claude series of large language models (LLMs). The model has not been commercially released, but it has been made available to a restricted working group of around a dozen major American tech companies (Google, Microsoft, Apple, NVIDIA, Amazon Web Services and others), and it reportedly achieves an unprecedented success rate in detecting vulnerabilities in computer systems.

Mythos reportedly succeeded in detecting “zero-day” vulnerabilities in various web browsers, software, and operating systems with an impressive success rate.

A “zero-day” vulnerability is a critical security flaw in an information system for which no protection yet exists, making attacks possible with effectively “zero days” available to respond. According to Anthropic, Mythos managed to develop methods for exploiting these vulnerabilities in record time – likely in less than a day – with a success rate of 72.4%.

Although this information comes from the company itself – which has every incentive to exaggerate its results – some public evidence has been provided.

Sylvestre Ledru, Mozilla’s engineering director responsible for the Firefox browser, stated that Mythos helped uncover an “absolutely staggering” number of vulnerabilities in their software. For example, a nearly twenty-seven-year-old security flaw that had survived numerous audits was discovered in OpenBSD, an open-source operating system widely used by cybersecurity services.

Mythos sheds light on a larger phenomenon: the growth of offensive capabilities in cyberspace – not only among states but also among private actors such as cybercriminals – could be accelerated by AI development, while it remains uncertain whether defensive actors can react quickly enough to patch existing vulnerabilities.

Even if Mythos does not fully live up to the announced performance levels, the development of LLMs since the early 2020s has shown how rapidly their capabilities improve. We are therefore facing an acceleration in the development of offensive capabilities and their diffusion to a broader range of actors. This means a potentially rising probability of successful cyberattacks, as well as an increase in their absolute number.

The vulnerability of nuclear arsenals

To understand the vulnerability of nuclear weapons to cyberattacks, one must remember that a “nuclear arsenal” means far more than a stockpile of warheads. The normal operation of modern nuclear arsenals depends on a vast configuration of technologies: nuclear warheads, the missiles capable of delivering them, communication technologies ensuring that orders are transmitted from the President to the operator responsible for launching the weapons, as well as early warning systems designed to monitor the skies for signs of a potential enemy nuclear strike. These elements must communicate with one another to ensure control over the weapons.

And there are more of those than one might think. As Herbert Lin, a Stanford University researcher and author of a study on cyber threats and nuclear weapons, notes, the “nuclear button” metaphor is oversimplified: once the president presses it, a whole series of “cyber-buttons” must also be pressed to trigger and manage nuclear operations – each representing another point where cyberattacks could interfere, for example by preventing critical information from arriving.

The President might not receive enough information – or any at all – to determine that an attack is underway. Or he might be unable to communicate launch orders to submarine forces. Worse still, the nightmare scenario imagined since the 1950s could occur: a false launch order could be transmitted to missile operators.

The scenarios do not even need to be that extreme: the order might be transmitted with delays, or not transmitted to all forces, resulting in weaker retaliation than intended. The retaliation itself might be blocked: in 2010, an American command center lost communication with around fifty nuclear missiles for nearly an hour. An adversary could exploit such weaknesses.

Alternatively, a large-scale cyberattack carried out by non-state actors could create the impression that an adversary is targeting our nuclear arsenal, creating a risk of inadvertent escalation. Similarly, an attack on command-and-control systems related to conventional operations could be interpreted as endangering a state’s nuclear arsenal if those systems happened to be integrated.

One can also imagine cyber operations targeting the weapons themselves – the hardware rather than the software of the arsenal. Of course, nuclear security actors are not simply waiting for attacks to happen. They continuously develop and test defensive capabilities. The problem is that the complexity of existing systems makes it impossible to state with certainty that no vulnerabilities exist.

As James Gosler, formerly in charge of the security of American nuclear systems at Sandia National Laboratories, explains, beginning in the 1980s, the exponential increase in the complexity of components inside nuclear weapons meant that:

“you could no longer make the statement that any of these micro-controlled systems [used to ensure the functioning of the detonation mechanism] were vulnerability-free.”

That does not mean vulnerabilities necessarily exist. But it does mean that no actor can know for certain whether they do. So, should we fear that nuclear arsenals could one day be “hacked”?

In truth, we do not know. Such scenarios are possible: no large, complex information system can be guaranteed with total certainty to be completely reliable. The evolution of cyberattack tools, and their potential diffusion among a wide range of state and non-state actors, makes this kind of future scenario potentially more likely and, in any case, plausible.

A new bet on the future

Mythos highlights a new dimension to the nuclear gamble, born from the development of new technologies and their integration into nuclear arsenals.

First, we are betting on the absence of vulnerabilities within these systems – even though that probability is impossible to measure with certainty, and it changes over time as systems are updated, replaced, and connected to others. If vulnerabilities nevertheless exist, we then bet that advances in offensive cyber capabilities will always be matched, and matched in time, by advances in defensive capabilities – even in the age of artificial intelligence. Once again, that probability cannot be determined, because defensive capability development is often reactive: it depends on our knowledge of offensive capabilities and existing vulnerabilities, both of which are inherently uncertain.

We are therefore betting that our defences against cyberattacks – and those of other nuclear-armed states – will be enough. Otherwise, we are betting that luck will remain on our side and that existing vulnerabilities will not be discovered – like the one that existed for 27 years in OpenBSD’s code. It is a gamble on luck because, in this scenario, what saves us is the adversary’s inability or unwillingness, over which we have no control, to develop effective capabilities.

The ability of existing control practices to fulfil their role has become more uncertain with the arrival of large AI models capable of detecting vulnerabilities and designing cyberattacks on a massive and automated scale. Choosing a security policy based on nuclear weapons therefore amounts to betting that, in the future just as in the past, luck will always remain on our side.

The Conversation

This work has been supported by the European Research Council Consolidator Grant no. 101043468, RITUAL DETERRENCE. Views and opinions expressed are, however, those of the author only and do not necessarily reflect those of the European Union or the European Research Council.

ref. Hacking the bomb? What Claude Mythos AI reveals about the gamble of nuclear deterrence – https://theconversation.com/hacking-the-bomb-what-claude-mythos-ai-reveals-about-the-gamble-of-nuclear-deterrence-282614

Suspending federal gas tax wouldn’t save drivers as much as they might hope – here’s what goes into the price of a gallon of gas

Source: The Conversation – USA (2) – By Robert I. Harris, Assistant Professor of Economics, Georgia Institute of Technology

Gas taxes – federal and state – make up only a small piece of the price of a gallon of gas. AP Photo/Jenny Kane

With gasoline prices still high – averaging over US$4.50 a gallon in mid-May 2026 – President Donald Trump said he wanted Congress to suspend the federal gas tax, which is 18.4 cents a gallon for gasoline and 24.3 cents a gallon for diesel. A bill has been introduced in the Senate, and one is expected to follow in the House, according to Politico, but their fate is unclear.

States also charge their own taxes, ranging from 70.9 cents a gallon for gas in California to 8.95 cents in Alaska. Indiana, Georgia and Utah have suspended their gas taxes for at least some of 2026, and other states are considering similar measures.

As an energy economist, I have seen how suspending those taxes does reduce prices, but not by as much as politicians – or drivers – might hope. Research on past gas tax holidays has found that consumers receive about 79% of the tax reduction. That means oil companies and fuel retailers keep roughly one-fifth of the cut for themselves rather than passing the savings on to the public.

Suspending the federal gas tax, which would require Congress to pass a law, wouldn’t help consumers much anyway. Even if oil companies passed on the whole savings to consumers, national average gas and diesel prices would drop only about 4%. The percentage reduction in high-cost states such as California would be even smaller.
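The scale of those savings is simple arithmetic. Here is a minimal back-of-the-envelope sketch using only the figures quoted above (the $4.50 average price, the 18.4-cent federal tax and the 79% pass-through rate); the variable names are mine:

```python
# Back-of-the-envelope: how much would suspending the federal gas tax
# save drivers? All input figures are from the article above.
FEDERAL_GAS_TAX = 0.184   # dollars per gallon
AVG_PUMP_PRICE = 4.50     # dollars per gallon, mid-May 2026 national average
PASS_THROUGH = 0.79       # share of a tax cut that reaches consumers

# Best case: the entire tax cut is passed on to drivers.
full_drop_pct = FEDERAL_GAS_TAX / AVG_PUMP_PRICE * 100

# More realistic case: sellers keep about one-fifth of the cut.
realistic_drop = FEDERAL_GAS_TAX * PASS_THROUGH          # dollars per gallon
realistic_drop_pct = realistic_drop / AVG_PUMP_PRICE * 100

print(f"Full pass-through:  -{full_drop_pct:.1f}% at the pump")
print(f"79% pass-through:   -{realistic_drop_pct:.1f}% "
      f"(about {realistic_drop * 100:.0f} cents a gallon)")
```

Even in the best case the drop is about 4% of the pump price; with historical pass-through rates it is closer to 3%, or roughly 15 cents a gallon.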

Gas taxes are just one part of what drives gas prices. Overall, the price of a retail gallon of gas is the sum of four things: the cost of crude oil, refining, distribution and marketing, and taxes.

In nationwide figures from January 2026, crude oil accounted for about 51% of the pump price, refining roughly 20%, distribution and marketing about 11% and taxes about 18%. That mix shifts with conditions: When crude oil prices spike, that can drive more than 60% of the price; when the price drops, taxes and logistics are larger shares of the cost.

Crude oil is the biggest ingredient

Because the price of crude oil is the largest element, most of the price at the pump is derived from the global oil market.

Usually, big swings in crude prices come mainly from shifts in global demand and expectations – not from supply disruptions, according to widely cited research in 2009 by the economist Lutz Kilian.

But what is happening in early 2026 with the war in Iran is one of the exceptions: a classic supply shock. Severe disruptions to shipping through the Strait of Hormuz and attacks on Middle East oil infrastructure have taken millions of barrels a day off the global market.

Most drivers generally can’t quickly reduce how much they drive or how much gas they use when prices rise, so gasoline demand doesn’t change much in the short run. That means a jump in crude costs tends to result in people paying more rather than driving less.

Refining, regulations and the California puzzle

Refining turns crude into gasoline at industrial scale. The U.S. doesn’t have a single gasoline market, though. Roughly a quarter of U.S. gasoline is a cleaner-burning blend of petroleum-derived chemicals called “reformulated gasoline,” which is required in urban areas across 17 states and the District of Columbia to reduce smog.

California uses an even stricter formulation that few out-of-state refineries make. California is also geographically isolated: No pipelines bring gasoline in from other U.S. refining regions.

California’s gasoline prices have long run above the national average, explained in part by higher state taxes and stricter environmental rules. But since a refinery fire in Torrance, California, in 2015 reduced production capacity, the state’s prices have been about 20 to 30 cents a gallon higher than what those factors would indicate.

Energy economist and University of California, Berkeley, professor Severin Borenstein has called this the “mystery gasoline surcharge” and attributes it to the fact that there isn’t as much competition between refineries or gas stations in California as in other states. California’s own Division of Petroleum Market Oversight says the surcharge cost the state’s drivers about $59 billion from 2015 to 2024. It’s not exactly clear who is getting that money, but it could be gas stations themselves or refineries, through complex contracts with gas stations.

A person stands near a long metal truck in front of a gas station.
A tanker truck delivers fuel to a gas station.
AP Photo/Erin Hooley

Getting the gas into your car

The distribution and marketing category covers the costs of everything involved in getting the gasoline from the refinery gate to your tank.

Gasoline moves by pipeline, ship, rail and truck to wholesale terminals, and then by local delivery truck to service stations.

At the retailer’s end, the key factors are station rent and labor, the cost of buying gasoline in bulk to sell it, credit card fees – as much as 6 to 10 cents a gallon at current prices – and franchise fees paid to the national brand, such as Sunoco or ExxonMobil, for permission to put its branding on the station.

Most gas station operators net only a few cents per gallon on fuel itself – which is why many gas stations are really convenience stores with pumps out front. Borenstein and some of his collaborators have also documented that retail gas prices rise quickly when wholesale costs climb but fall slowly when wholesale costs drop.

The question of gas tax holidays

Gas tax holidays reduce funding for what the taxes are designed to pay for, typically roads and bridges. That pushes road and bridge upkeep costs onto future drivers and general taxpayers.

There is an additional problem, too: Taxes on gasoline are supposed to charge drivers for some of the costs their driving imposes on everyone else – carbon emissions, local air pollution, congestion and crashes. But Borenstein has found that U.S. fuel tax levels are already far below the true cost to society. Removing the tax on drivers effectively raises the costs for everyone else.

A fisherman holds a pole in the foreground as an oil tanker sails by at sunset
Suspending the Jones Act allows foreign-based oil tankers to sail between U.S. ports.
AP Photo/Eric Gay

The Jones Act: A small number that adds up

The 1920 Jones Act is a federal law that requires cargo moving between U.S. ports to travel on vessels built and registered in the U.S., owned by U.S. citizens, and crewed primarily by U.S. citizens and permanent residents. Of the world’s 7,500 oil tankers, only 54 meet this requirement, and just 43 of those can transport refined fuels such as gasoline.

So, despite significant refining capacity on the Gulf Coast, some U.S. gasoline is exported overseas even as the Northeast imports fuel, in part reflecting the relatively high cost of moving fuel between U.S. ports.

Economists Ryan Kellogg and Rich Sweeney estimate that the law raises East Coast gasoline prices by about a penny and a half per gallon on average, costing drivers roughly $770 million a year. In light of the war’s effect on gas prices, the Trump administration has temporarily suspended the Jones Act requirements – an action more commonly taken when hurricanes knock out Gulf Coast refineries and pipeline networks.
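Those two estimates can be cross-checked against each other. In this rough sketch, only the 1.5-cent markup and the $770 million annual cost come from the text; the implied-volume step is my own arithmetic:

```python
# Consistency check on the Jones Act figures cited above: if the law adds
# about 1.5 cents to every East Coast gallon and costs drivers roughly
# $770 million a year, how many gallons of gasoline does that imply?
MARKUP_PER_GALLON = 0.015        # dollars per gallon (~1.5 cents)
ANNUAL_COST = 770_000_000        # dollars per year

implied_gallons = ANNUAL_COST / MARKUP_PER_GALLON
print(f"Implied East Coast gasoline volume: "
      f"{implied_gallons / 1e9:.0f} billion gallons/year")
```

That works out to roughly 51 billion gallons a year – a plausible share of total U.S. gasoline consumption, which is on the order of 135 billion gallons annually – so the two figures hang together.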

What moves the number

The result of all these factors is that the price that drivers see at the pump mostly reflects the global price of crude, plus a stack of domestic costs, only some of which are inefficient.

Tax holidays give a partial, short-lived rebate. Jones Act waivers trim pennies, though permanent repeal could bring more fundamental changes – such as shifting cargo of all kinds away from rail and truck – which could lower the costs, emissions and infrastructure damage associated with cargo transportation. Harmonizing fuel blends across states and seasons could lower prices somewhat, but likely at the expense of increased emissions.

Ultimately, the best protection against oil price shocks is a more efficient gas-burning vehicle, or one that doesn’t burn gasoline at all. In the meantime, the best I can offer as an economist is clarity about what that $4.50 actually buys.

This article includes material previously published on May 1, 2026.

The Conversation

Robert I. Harris does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Suspending federal gas tax wouldn’t save drivers as much as they might hope – here’s what goes into the price of a gallon of gas – https://theconversation.com/suspending-federal-gas-tax-wouldnt-save-drivers-as-much-as-they-might-hope-heres-what-goes-into-the-price-of-a-gallon-of-gas-282702

Is your AI chatbot manipulating you? Subtly reshaping your opinions?

Source: The Conversation – Canada – By Richard Lachman, Director, Zone Learning & Professor, Digital Media, Toronto Metropolitan University

A billboard tries to sell you something. So does a used car salesman. But no matter how smooth the pitch, you’re quite aware of the profit motive, and you can walk away at any time.

What if that pitch is invisible, plays to your unique fears and vanities, and is delivered in a voice that sounds like a trusted friend? Generative AI has changed the equation of persuasion entirely: chatbots can now deliver a personalized, adaptive and targeted message, informed by the most intimate details of your life.

Large language models (LLMs) can hyper-target messages by drawing from your social media posts and photos. They can mine hundreds of previous chatbot conversations in which you asked for relationship advice, discussed your parenting fails and shared your health concerns and financial woes. They can also learn from each interaction, refining their manipulation in real time, targeting your unique and individual tastes, preferences and vulnerabilities.

Studies show this kind of personalized content to be 65 per cent more persuasive than messages from humans or from non-personalized AI. It is four times as effective at changing political opinions as advertising. It could be a powerful tool for social change — used for the good, or for nefarious purposes.

This makes one feature especially troubling: Each conversation is private. It is not monitored, never audited and doesn’t happen in the public eye.

This isn’t advertising. It’s something we don’t have words for yet, and we’re living inside it.

Convincing arguments

In my book Digital Wisdom: Searching for Agency in the Age of AI, I explore how large language models introduce a new frontier in persuasion — one where AI systems can draw upon a huge amount of data about the world, language and you to tailor a highly personalized pitch.

Consider how this might work: You’re a nurse. Through your employer’s AI platform, you’ve shared your sleep problems, burnout and the financial stress of a recent divorce. Now the hospital is short-staffed and offering shifts at a reduced rate calculated by software they license.

You ask the AI chatbot whether you should take them. It knows you’re exhausted. It knows you’re behind on bills. It knows exactly which argument could convince you one way or the other. Who is it working for in that moment?

As companies like Meta and IBM explore how AI can hyper-personalize ads for specific audiences, the dividing line between tools that help users find what they genuinely want, and those that manipulate them against their interests, becomes increasingly important.

Friend or stranger?

Let’s look at another example. Imagine the following messages from your favourite AI chatbot or companion:

I noticed your sleep patterns haven’t been great lately, averaging only 5.4 hours, with lots of restless periods. That’s common when dealing with relationship stress. Your partner just went back to work and 76 per cent of couples experience strain during career transitions.

A new sleep medication has shown effectiveness for relationship-linked insomnia. Your insurance would cover it with just a $15 contribution. Would you like me to schedule a telehealth appointment for tomorrow at 2 p.m.? I see you have a break in your schedule.

This might feel great, like advice from a thoughtful friend who knows you well. It might also feel terrifying, as if a manipulative stranger has read your diary.

Given that people are increasingly turning to AI for medical or mental health advice, despite studies showing this advice to be problematic almost 50 per cent of the time, a manipulative stranger could cause real harm.

The danger here isn’t just the precision of the targeting. This content is also impossible to police. What you view can’t be tracked by watchdogs, since you’re the only person who ever sees it.

While governments don’t typically police the content of political ads, beyond transparency about their funding, we often rely on public outcry and the media to expose campaigns that spread falsehoods. If an AI personalizes every message for an individual, there is no trace left behind.

Reshaping our worldview

Perhaps most concerning is that these systems could gradually reshape our worldview over time.

Scholars have long argued that the algorithms used by social networking sites and search engines create filter bubbles, in which we are fed well-crafted text, video and audio content that either reinforces our worldview or exerts influence towards someone else’s.

The text 'Meet your thinking partner' is displayed on a dark computer screen with the Claude logo.
Are AI chatbots like Claude, ChatGPT, Gemini and DeepSeek helping you think, or subtly shaping your thoughts?
(Unsplash)

By controlling what information we see and how it’s presented, AI systems could slowly shift how we think about and interpret the world around us, and even change our understanding of reality itself.

This capability becomes particularly concerning when combined with emotional manipulation. Vendors suggest their AI systems can gauge a user’s emotional state through text analysis, voice patterns or facial expressions, and adjust their persuasive strategies accordingly.

Are you feeling vulnerable? Lonely? Angry? The system could modify its approach to exploit those emotional states. Even more troubling, it could deliberately cultivate certain emotional states to make its persuasion more effective.

Preliminary research shows that AI models tend to flatter users, affirming their users’ actions 50 per cent more than other humans do, even when the actions involve potential harms. Further research shows that chatbots use deliberate emotional manipulation strategies — such as “guilt appeals” and “fear-of-missing-out hooks” — to keep us chatting when we try to say goodbye.

There have also been cases of AI chatbots allegedly endangering users, encouraging suicidal thoughts or giving detailed advice on how a user could harm themselves.

The guardrails set up by corporations to protect users from harm have also proven surprisingly easy to bypass.

Design matters

Persuasion is not a side effect of technology — it’s often the point. Every interface, every notification, every design decision carries with it an intent to influence behaviour.

Sometimes that influence is welcome: reminders to take medication, encouragement to exercise or nudges to donate blood that reinforce values we already hold. But sometimes persuasion serves someone else’s agenda — nudging us to buy, to scroll, to work harder or to give up privacy.

The same persuasive techniques can empower or exploit, depending on who controls the system, what goals they pursue and whether they have meaningful consent.

Design matters, whether in public health, the workplace or daily life. We must ask hard questions about intent, agency and power. Who benefits from a design? Who is being persuaded, and do they know it?

The technologies we build should support reflective choice, not undermine it. As AI continues to shape how we think, feel and act, our ethical obligations grow sharper: to create systems that are transparent, that prioritize user dignity and that reinforce our capacity for independent judgment. We don’t just need innovation — we need wisdom.

The Conversation

Richard Lachman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Is your AI chatbot manipulating you? Subtly reshaping your opinions? – https://theconversation.com/is-your-ai-chatbot-manipulating-you-subtly-reshaping-your-opinions-280800

How structural inequality fuels Black youth recruitment into cycles of violence

Source: The Conversation – Canada – By Marycarmen Lara Villanueva, PhD Candidate, Department of Social Justice Education, Ontario Institute for Studies in Education, University of Toronto

What would it take to stop Black boys from disappearing into drug trafficking networks across northern Ontario? Not more policing, argues prison abolitionist and scholar Ruth Wilson Gilmore, but more safe housing, well-funded schools and community spaces where youth can gather.

That is what a growing body of Black community leaders is arguing in response to a crisis that The Fifth Estate documentary Missing Black Boys brought into national view in January: Black boys as young as 14 are lured into gangs and sent to remote parts of the province to sell drugs.

Youth gang recruitment is not a matter of individual choice or criminality, but one shaped by inequality, institutional neglect and racialized perceptions. And punishment alone cannot solve it.

Indigenous youth from northern reserves, where some communities have declared a state of emergency, are also part of this troubling reality. The same conditions that leave Black boys vulnerable to recruitment into exploitative and violent economies leave Indigenous youth vulnerable too.

Anishinaabe journalist and author Tanya Talaga has described this as an insidious web of drug-related violence in which Indigenous and Black youth are disproportionately impacted.

Black leaders respond

In recent months, community leaders, educators and public workers have come together to ask what makes Black youth vulnerable to recruitment and what kinds of structural interventions can prevent it.

Black boys are not just going missing. They are being drawn into exploitative economies and transnational and intercity webs of violence. Recruitment often begins on social media, where older youth lure boys with promises of fast money.

Until recently, media outlets did not pay enough attention to these cases, reflecting broader racialized ideas about violence, innocence and vulnerability.

And if the problem is not straightforward, neither is the solution.

Shana McCalla, founder of Find Ontario Missing Black Boys, and Camille Dundas, who in 2025 authored a three-part investigative series, have been instrumental in bringing this issue into public view.

Recently, McCalla submitted a brief to Ontario Solicitor General Michael Kerzner outlining 15 recommendations to address the crisis of Black boys being groomed into drug trafficking networks. Like other Black leaders, she insists that boys recruited into criminal activity should be treated as victims of exploitation and human trafficking, not as criminal offenders.

This means being connected to victim services, trauma-informed care and culturally relevant support. In their advocacy, these leaders have pointed to education, media and lack of opportunities as some areas that need urgent attention.

Classrooms and courtrooms

Anti-Blackness in education is well-documented. The treatment of Black youth as adults when they make mistakes starts in school, often leading to disproportionate suspensions.

But when rethinking the school-to-prison pipeline, Black studies scholar rosalind hampton notes that practices of control found in prisons were established earlier within public education, bringing our attention to the carceral connections between schools and prisons.

Read more: How to curb anti-Black racism in Canadian schools

How is this racialized perception produced? For race scholars, the answer is complex.

My research suggests that visual cultures of everyday institutions, schools, media and digital platforms play a vital role and influence how children and youth are seen and how they come to see themselves.

Masculinity, money and risk

Images shape how we understand the world and our place within it. Cultural theorist Nicholas Mirzoeff, for example, explains that we live in a visual global environment where the connection between images and how we think of race has a long history.

Youth are immersed in visual cultural production circulating across digital platforms, from social media to music videos, influencing how they see themselves and how they want to be seen.

Bizz Loc, a Toronto rapper featured in The Fifth Estate documentary, is currently serving a 7.5-year sentence for his involvement with the Eglinton West Crips street gang. In his music videos, such as “I’m Bacc Crodie,” imagery of youth flashing gang signs, mimicking gun gestures and referencing rivalries circulates a version of Black masculinity tied to risk, conflict and money.

Transfeminist philosopher and essayist Sayak Valencia’s concept of gore capitalism helps explain how, in contexts of inequality, violence can be turned into something that attracts attention and generates value.

In Bizz Loc’s case, masculinity is constructed through proximity to risk and money, offering young men a way to be seen and valued when other opportunities are limited. This visual language is part of a broader web that helps sustain violence through its aestheticization.

At the same time, as American sociologist and author Tricia Rose notes, hip-hop doesn’t just describe street life shaped by chronic Black joblessness, it also educates, critiques injustice and pushes for safer, more just communities.

Yet the versions that are most visible today often narrow these stories.

An abolitionist approach

Dundas raises a pressing question: if it costs close to $97,000 a year to keep a youth in custody, how might those resources be better invested in supporting young people?

Precise figures vary and remain difficult to calculate. There isn’t clear and up-to-date data on governments’ spending on the youth justice system.

What is clear, however, is that Black and Indigenous youth are disproportionately represented within it. Whether it’s $57,000 a year or over $1,400 a day, provincial governments spend heavily on incarcerating youth.

Abolition, Ruth Wilson Gilmore explains, is about the presence of the conditions that sustain life like food security, secure employment, parks and access to nature, clean water and clean air.

In the absence of these conditions for Black and Indigenous youth, other systems step in.

Or as Fred Moten and Stefano Harney, Black studies and critical theorists, put it, the target of abolition work is not prisons, but a society that makes prisons necessary. Rather than punishment, the abolitionist question is: how do we build communities where fewer young people are vulnerable to recruitment before they encounter violence at all?

The Conversation

Marycarmen Lara Villanueva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How structural inequality fuels Black youth recruitment into cycles of violence – https://theconversation.com/how-structural-inequality-fuels-black-youth-recruitment-into-cycles-of-violence-280516

Africa has the world’s greatest genetic diversity, yet it’s missing from research: we’re filling the gap

Source: The Conversation – Africa (2) – By Michele Ramsay, Director of the Sydney Brenner Institute for Molecular Bioscience, Professor in the Division of Human Genetics, University of the Witwatersrand

Throughout history, most of the world’s genomic research has relied on DNA data from people of European ancestry.

A genome is the full DNA code of about three billion (three thousand million) bases, including all the chromosomes. Each person has two genomes: one from their mother and the other from their father.

Well-resourced research environments have favoured European-focused studies, generating hundreds of thousands of whole human genomes with associated health data. Yet modern humans, our species, evolved on the African continent. African populations therefore contain the deepest branches of human genetic history and the greatest genetic diversity on the planet. Even so, the continent remains strikingly underrepresented in global genomic databases.

The African continent is populated by people from over 2,000 ethnolinguistic groups, yet genetic data exist for fewer than a hundred groups. This is akin to having a GPS map of a city with only 5% of the streets marked and the rest left blank.

This bias has profoundly shaped modern medicine, from disease prediction tools to ancestry testing. And it’s why researchers increasingly recognise that studying African genomes has the potential to reveal insights and health-related biological pathways never observed before.

As a team of researchers we were involved in identifying under-represented groups in nine African countries for human whole-genome sequencing. Our multidisciplinary team involved in the Assessing Genetic Diversity in Africa project (AGenDA) has worked out ethical ways to obtain, record and share genetic material and to add to global databases.

The AGenDA dataset alone is expected to uncover millions of previously unknown genetic variants, and analyses are underway. These discoveries will inform research into diseases that affect populations in Africa and worldwide. They include diabetes, heart disease, cancer and neurological or mental health conditions.

This is only a first step. Capturing the full scope of African genomic diversity will require hundreds of thousands of genomes. The project aims to bridge some of the most obvious gaps rather than fully map the continent’s diversity.

But expanding African genomic data is not only important for Africa. It will strengthen global biomedical science.

What it takes

Modern genomic science relies on large databases of DNA sequences to understand disease risk, ancestry and human evolution. These databases underpin a wide range of scientific and medical tools. They are used in medical research, disease prediction, drug development, ancestry testing and increasingly in artificial intelligence models that analyse health data.

When a population is absent from a reference database – a library of whole-genome sequences – science simply cannot detect it. Genetic algorithms work by comparing individuals to reference populations. In the absence of a specific reference population, the algorithms will assign the closest available match.
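A rough sketch can illustrate why this matters. The panels, marker frequencies and distance measure below are invented for demonstration; real ancestry tools use far richer statistical models. The point is only the mechanism: an individual is always forced into the nearest available reference panel, even if their true population is missing.

```python
# Toy reference database: allele frequencies at four genetic markers
# for two hypothetical reference panels (values invented for illustration).
references = {
    "Panel A": [0.1, 0.8, 0.3, 0.6],
    "Panel B": [0.7, 0.2, 0.9, 0.4],
}

def closest_panel(genotype_freqs):
    """Assign an individual to the reference panel with the smallest
    total difference across markers - the closest available match."""
    return min(
        references,
        key=lambda name: sum(
            abs(a - b) for a, b in zip(references[name], genotype_freqs)
        ),
    )

# An individual from an unrepresented population still gets assigned
# to one of the existing panels, because no better option exists.
assignment = closest_panel([0.2, 0.7, 0.4, 0.5])
print(assignment)
```

If neither panel actually reflects the individual's ancestry, the answer is still "the closest match" – which is exactly how incomplete databases produce vague or misleading results.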

This problem becomes particularly visible in ancestry testing. This is a form of genetic testing often used to learn more about biological heritage. Because African reference data remain incomplete, people with African ancestry may receive vague or misleading results about their origins.

Without more African genomic data, the assignment of specific ancestry may be incorrect and disease risk predictions may be misleading. For example, it has been shown that standard doses of medications like warfarin (a blood thinner) or efavirenz (an HIV medication) could be ineffective or toxic for people who harbour specific variants that are more common in African populations.

Prior knowledge of the distribution of such variants in a population could be key to deciding the suitability of a drug for patients from that population.

Filling some of the gaps

The AGenDA project was designed to begin addressing some of the gaps in genome data and African representation. This project involved large multi-country scientific collaborations across the continent. It also required co-ordinating research across multiple ethics committees, regulatory frameworks and institutions. Scientists collaborated with research partners in Angola, the Democratic Republic of Congo, Kenya, Libya, Mauritius, Rwanda, Tunisia and Zimbabwe.

The aim was not simply to increase the number of African genomes in global databases. Instead, the team carefully selected populations to address major geographic and ethnolinguistic gaps in genomic data.

But generating large genomic databases requires careful community engagement and consent from participants to share their data. Biological samples for DNA extraction must be collected and the sequencing performed one base at a time.

We therefore built community engagement and culturally appropriate consent processes into the project from the beginning.

More than 1,000 whole genomes were sequenced from communities that had rarely been included in previous genetic studies. These included:

  • hunter-gatherer populations

  • Nilo-Saharan-speaking communities

  • Afro-Asiatic speakers

  • understudied Bantu-speaking populations

  • communities from north Africa and the Indian Ocean islands.

Selecting samples required careful consideration of what African diversity actually represents.

Genetic diversity does not map neatly onto modern national borders. Instead, researchers considered a range of additional factors. These included:

  • poorly represented geographic regions in genomic databases

  • major ancestral population histories

  • languages spoken and self-identified ethnic groups

  • recent patterns of migration.

In some cases, neighbouring communities may appear close due to geographic proximity but have distinct genetic histories that reflect population separations thousands of years ago.

Why studying African genomes benefits science everywhere

African genomes contain more genetic variation than populations on any other continent. This diversity provides a powerful resource for scientific discovery. When researchers study more diverse populations they are better able to achieve a number of things.

Firstly, they can identify new genetic variants.

Secondly, they can investigate evolutionary forces, like natural selection, that have shaped the genomes of people in different parts of the world.

And thirdly, they can pinpoint variants that influence health and disease.

More inclusive genomic datasets are also essential as genomics becomes integrated with artificial intelligence systems that analyse medical data and predict health outcomes. Future medical technologies could be biased to work best for whoever is represented in the data.

Ultimately, expanding African genomic representation will help ensure that the benefits of genomic medicine are shared more equitably. At the same time, it will improve the accuracy and depth of understanding in global genetic science.

The Conversation

Michele Ramsay is the South African Research Chair in Genomics and Bioinformatics of African populations. Funding for this work was from the National Institutes of Health (USA) and it was done in partnership with Illumina.

Ananyo Choudhury receives funding from the National Institutes of Health, USA, the South African Medical Research Council, and the Science for Africa Foundation.

ref. Africa has the world’s greatest genetic diversity, yet it’s missing from research: we’re filling the gap – https://theconversation.com/africa-has-the-worlds-greatest-genetic-diversity-yet-its-missing-from-research-were-filling-the-gap-278809

Why is the US so obsessed with controlling Cuba?

Source: The Conversation – Global Perspectives – By Deborah Shnookal, Research fellow, Department of Spanish and Latin American Studies, The University of Melbourne

For months, US President Donald Trump has been fixated on Cuba. He’s issued threats and imposed additional sanctions on the island. The US military has conducted dozens of intelligence-gathering flights off the coast in recent weeks, suggesting a prelude to an invasion.

The Cuban government has indicated a readiness to negotiate with the Trump administration on some issues, such as migration, drug trafficking and investment openings for Cuban-Americans. But Cuba’s sovereignty is not negotiable.

After interviewing Cuban President Miguel Díaz-Canel last month, US journalist Kristen Welker seemed to catch on:

Nothing gets under [Cubans’] skin more than the notion that the United States can tell the Cuban government who should lead it or what it should be doing, how it should be governing, because that challenges the very idea of the sovereignty of the country.

This US obsession with controlling, influencing and coercing Cuba long predates Trump and even the Cold War. This is how President Theodore Roosevelt described the island in 1906:

I am so angry with that infernal little Cuban republic that I would like to wipe its people off the face of the earth. All we have wanted from them was that they would behave themselves and be prosperous and happy so that we would not have to interfere. And now, lo and behold, they have started an utterly unjustifiable and pointless revolution.

Understanding the current impasse between the two adversarial neighbours requires looking at this full arc of history. While the 1823 Monroe Doctrine sought to establish US predominance in the entire American continent, Cuba has always been a particular focus of Washington’s attention.




Read more:
Cuba has survived 66 years of US-led embargoes. Will Trump’s blockade break it now?


‘Americanisation’ of the island

From the moment the 13 American colonies declared independence from Britain, Americans assumed Cuba would become part of the union. Successive US administrations sought to purchase, annex or otherwise control Cuba, claiming this was inevitable by virtue of the laws of gravity and geography. It was also seen as part of a self-proclaimed “civilising mission”.

When the Cubans eventually defeated their Spanish colonial masters in 1898, the United States stepped in and occupied the island to thwart its independence.

At the time, at least one third of Cubans were former slaves or of mixed race. The US governor of Cuba, Leonard Wood, argued they were not ready for self-government.

Illustration shows Uncle Sam talking to a young boy labelled ‘Cuba’ on a beach, from a 1901 publication.
Library of Congress

Certainly, the US – especially the Southern former slave holders – didn’t want another Haiti in its neighbourhood. Haitian slaves had seized control of their island nation from the French in a violent rebellion in 1804, echoing the cries of the French revolution for liberty, fraternity and equality.

The US military occupation of Cuba ended in 1902 and Cuba formally declared independence – albeit with provisions. These allowed for future US intervention whenever Washington thought the Cuban people needed a guiding hand (which turned out to be fairly often).

In the decades that followed, US business interests deeply penetrated every sector of Cuba’s economy and had complete sway over Cuban governments.

On a cultural level, Cuba rapidly became “Americanised” through a new US-style education system. Travel to the island picked up, too. The popular Terry’s Guide to Cuba reassured US visitors in the 1920s they would feel right at home because “thousands [of Cubans] act, think, talk and look like Americans”.

Castro’s mission

All of this changed with the rise of Fidel Castro.

During the Cuban Revolution, Castro announced in April 1959 that the revolutionary government would be “Cubanising Cuba”. This might seem “paradoxical”, he explained, but Cubans “undervalued” everything Cuban. They had become “imbued with a type of complex of self-doubt” in the face of the overwhelming US influence on the island’s culture, politics and economy.

US journalist Elizabeth Sutherland similarly observed at the time that Cubans suffered from a “cultural inferiority complex typical of colonised peoples”.

For North Americans, however, Castro’s blunt statement seemed at best to reflect ingratitude, and at worst, an insult. As the US broadcaster Walter Cronkite recalled:

The rise of Fidel Castro in Cuba was a terrible shock to the American people. This brought communism practically to our shores. Cuba was a resort land for Americans […] we considered it part of the United States.

At the heart of Cuba’s revolutionary project has been an assertion of Cuba’s sovereignty, independence and national identity. The drive has been to create a new, united and socially just Cuban nation, as envisioned by its great national hero and poet, José Martí.

So, for Cubans it’s a matter of history. For North Americans, it’s a matter of self-image. They had “convinced themselves,” writes historian Louis A. Pérez, of the “beneficent purpose […] from which [the US] derived the moral authority to presume power over Cuba”.

When the Obama administration finally resumed relations with Cuba in 2014, it felt like a historic shift was taking place. The US might finally respect Cuban sovereignty and engage with Cuba on equal terms.

As President Barack Obama said at the time:

It does not serve America’s interests, or the Cuban people, to try to push Cuba toward collapse. […] We can never erase the history between us, but we believe that you should be empowered to live with dignity and self-determination.

Trump has now reverted to Washington’s traditional neo-colonialist view of Cuba, proclaiming he can do what he likes with the island. Perhaps it is time to try a new approach. As the spectacular debacle of the US-backed Bay of Pigs invasion showed 65 years ago, Cubans remain ready to defend their independence and their right to determine their own future.

The Conversation

Deborah Shnookal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why is the US so obsessed with controlling Cuba? – https://theconversation.com/why-is-the-us-so-obsessed-with-controlling-cuba-280729

Many of the Caribbean’s most important reefs are going unprotected

Source: The Conversation – USA (2) – By Sara M. Melo Merino, Postdoctoral Fellow in Marine Science, Smithsonian Institution

A researcher checks on corals in Banco Chinchorro, off Quintana Roo, Mexico. Lorenzo Alvarez-Filip

Living by the sea in the tropics means being exposed to some of nature’s most powerful forces. Hurricanes can bring storm surges, flooding and destructive waves that threaten homes, infrastructure and livelihoods.

For many communities, coral reefs are a natural first line of defense against these storms. The reefs’ rugged structures break the incoming waves, reducing the waves’ energy by as much as 97%. Globally, reefs prevent about US$4 billion a year in storm damage. Without them, studies suggest, the damage would double.

Yet, these vital ecosystems are under increasing pressure. Rising ocean temperatures, pollution and coastal development are driving the loss of reef-building corals – the species that create the physical structure of coral reefs and underpin their ability to protect coastlines and provide habitat for marine life.

Protecting key coral reefs from these human-caused stresses could help the reefs continue to reduce future storm damage.

But which reefs should be prioritized?

An aerial view of a reef just off shore.
Reefs visible just offshore protect the coastline of Puerto Morelos, Mexico, in part by breaking waves during storms.
Lorenzo Alvarez-Filip

We study coral reefs and marine environments. In a new research paper, we examined the likely impact that future warming will have on reefs across the Caribbean over the coming decades, including which reefs are most likely to persist under rising temperatures. Then we looked at which reefs were likely providing the greatest protective benefits for coastlines based on their functional characteristics.

The results show that about half of all the reefs with the greatest potential to continue to protect coastlines as the oceans warm are currently unprotected from human harms.

The Caribbean’s hidden coastal defenders

The value of coral reefs is evident along the Mexican Caribbean coast, where tourism is a major economic driver and the main source of income for local communities. The tourism industry there can generate up to $15 billion in a single year. Much of that value depends directly or indirectly on healthy coral reefs.

Losing the reefs would not only affect fish that rely on coral structures for habitat, and the livelihoods of people who depend on them, it would also cost millions of dollars in increased storm damage. An estimated 105,800 people, along with buildings and other infrastructure worth $858 million, are located in coastal areas protected by reefs in the Mexican Caribbean alone.

An overhead view of a dense coral reef.
Elkhorn corals (Acropora palmata) are among the most important corals in the Caribbean. They can form dense clusters that are highly effective at taking the energy out of waves.
Lorenzo Alvarez-Filip

The role of reefs becomes especially clear during extreme events.

In 2005, Hurricane Wilma, a Category 5 storm, struck the coast of Quintana Roo in the Yucatán Peninsula, Mexico. Near the small town of Puerto Morelos, the coral reefs broke the waves, lowering wave heights of nearly 36 feet (11 meters) offshore to less than 6 feet (2 meters) near the coast. The reefs near Puerto Morelos are part of a protected national park where public access to the reefs is heavily regulated.

Not all reefs protect the coast equally

However, not all reefs provide the same level of protection for coastlines. Our research shows that the differences depend on the reef engineers – the coral species that built the reef.

Reefs dominated by large, complex and rigid corals, such as thickets of elkhorn corals, create rough, elevated structures that can break and slow incoming waves, providing the greatest protection. In contrast, reefs made up of smaller or flatter species offer less resistance.

Knowing which reefs deliver the greatest structural protection can help countries and communities prioritize protecting them from human pressures, such as pollution and ship traffic.

We found that of the highest-priority reefs – based both on functionality and how well they are expected to survive rising water temperatures by midcentury – only 54% were protected. In the Caribbean’s western, southwestern and Florida ecoregions, priority reefs were most likely to be in formal marine protected areas, while the Greater Antilles and Bahamas had several unprotected reefs.

The Bahamas, Puerto Rico, Turks and Caicos, and Cuba have many high-value reefs that remain unprotected, meaning there are opportunities to increase protection on these important reefs. The reefs that we identified as important for conservation based on their physical functionality have also been reported to support high levels of biological diversity.

A coral reef with large groups of corals.
Reefs dominated by complex and rigid structures are often the most functional for protecting coastlines. They also provide important habitat for fish.
Lorenzo Alvarez-Filip

While a large percentage of coral reefs off Belize, Honduras and Puerto Rico are protected, we found that several reefs with the greatest potential for protecting coastlines were not within marine protected areas.

Why does this matter in a warming world?

Ocean warming is driving more severe and frequent coral bleaching events. When water temperatures rise too high, corals expel zooxanthellae – the algae that live in their tissues, provide them with energy and give corals their color. If heat stress is too intense or prolonged, many corals won’t recover.

As corals die, the reef structures they built break down and lose complexity over time. The coastal defenses they provide disappear.

At the same time, high-intensity hurricanes are becoming more frequent.

This creates a dangerous combination: stronger storms hitting coastlines that are less protected.

Protecting coral reefs is essential, not only for the sake of marine biodiversity, but for safeguarding coastal communities, their economies and the millions of people who live there.

The Conversation

Sara M. Melo Merino received a scholarship from Secretaría de Ciencia, Humanidades, Tecnología e Innovación (Secihti No. 246257).

Lorenzo Alvarez-Filip and Steven Canty do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Many of the Caribbean’s most important reefs are going unprotected – https://theconversation.com/many-of-the-caribbeans-most-important-reefs-are-going-unprotected-281490

Button-pushing explorers: How to grasp that AI agents can do amazing things while knowing nothing

Source: The Conversation – USA – By Ji Y. Son, Professor of Psychology, California State University, Los Angeles

The simple process of taking an action, assessing what happens and adjusting can lead to smart-seeming behavior. Westend61 via Getty Images

The nonprofit ARC Prize Foundation on May 1, 2026, released the results of a new benchmark: a test of an AI system’s ability to solve a game. The results were striking – humans scored 100%, while the most advanced AI systems scored under 1%.

At first glance, this may be surprising to users of AI who are impressed by its polished essays, codebases and multistep projects generated in seconds. How can such brilliant AI systems struggle with simple Tetris-shaped puzzles?

That confusion points to a risk: AI is becoming integrated into everyday life faster than people can make sense of it.

We are cognitive psychologists who study how to teach difficult concepts. To recognize the limits and risks of today’s AI agent systems, it’s important for people to grasp that the systems can both accomplish superhuman feats and make mistakes few humans would. To that end, we propose a new way to think about AIs: as button-pushing explorers.

Mental models for AI

We teach college students, a group rapidly incorporating AI tools into their daily routines. That gives us regular opportunities to ask what they think is going on with AI. The answers vary widely. One student said that someone at OpenAI or Anthropic is reading and approving every response the system generates. Another, more succinctly, said, “It’s magic.”

These responses illustrate two tempting ways of making sense of AI. At one extreme, AI is treated as an inscrutable black box – a powerful but ultimately mysterious force. At another, people explain it using the same assumptions they use to understand other humans: that its outputs reflect reasoning or judgment.

The worry is that these misinterpretations don’t go away as users gain more experience interacting with AI, and they might get reinforced. When AI performs well, its output can feel like evidence of understanding or confirmation that it really is something like magic. That apparent success makes it harder to question what the system is actually doing. Biases can seem logical or inevitable; harmful behavior can look like a deliberate choice or even fate, as if it could not have gone any other way.

Cognitive scientist Anil Seth explains why AIs don’t have – and won’t have – consciousness.

Saying that AI models are shaped by patterns in data, training processes and system design is true, but that’s too abstract to tell people when to trust the systems’ outputs or when they might fail. To help people avoid misplaced trust in AI, AI literacy efforts will need to include some mechanistic understanding of what produces their behavior – explanations that are perhaps not perfectly accurate but useful. Statistician George Box once wrote, “All models are wrong, but some are useful.”

Researchers have come up with several mental models for large language models. One is “stochastic parrot,” which shows that the models use statistical methods – stochastic refers to probabilities – to mimic responses with no understanding of meaning. Another is “bag of words,” which emphasizes that the models are collections of words – for example, all English words found on the internet – with a mechanism for giving you the best set of words based on your prompt.

These ways of thinking about large language models were never meant to be complete accounts of the systems. But the metaphors serve an important cognitive purpose: They push back against the idea that fluent language is necessarily caused by humanlike understanding.

But as the AI systems people use are increasingly powerful agents capable of stringing together actions on their own, it’s important for people to have a different kind of mental model: one that explains how they act. One place to find such a model is in earlier research on AI systems that learned to play Atari 2600 games. These systems didn’t understand the games the way humans do, but they still managed to rack up a lot of points.

The simple loop: Act, observe, adjust

Imagine a neural network, a relatively simple kind of AI model, placed into a video game it has never seen before. It does not “understand” the game like a human would. It has no idea whether it’s shooting space invaders or navigating an ancient pyramid. It doesn’t know the goals or rules.

Instead, it learns to play through a simple loop: Take an action – move left, jump, shoot – observe what changes, and then adjust. If an action leads to a good outcome, such as gaining points, it adjusts to become more likely to take similar actions in similar situations. If it leads to a bad outcome, such as losing a life, it adjusts in the opposite direction.
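The act-observe-adjust loop can be sketched in a few lines. This toy Q-learning example is an illustrative assumption, not the actual Atari systems: the tiny "move along a row" game, the reward values and the learning settings are all invented for demonstration. But the mechanism is the same: the agent holds adjustable estimates of how good each action is, and nudges them after every outcome.

```python
import random

random.seed(0)  # make this toy run deterministic

# Toy "game": states 0..4 in a row; reaching state 4 earns a point.
# The agent never "knows" these rules - it only acts, observes, adjusts.
N_STATES, GOAL = 5, 4
LEFT, RIGHT = 0, 1

def step(state, action):
    """The environment: apply an action, return (next_state, reward)."""
    nxt = max(0, min(GOAL, state + (1 if action == RIGHT else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

# Q-table: the agent's adjustable estimates of how good each action is.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def choose(s):
    """Act: mostly exploit the best-known action, sometimes explore."""
    if random.random() < epsilon or Q[s][LEFT] == Q[s][RIGHT]:
        return random.randrange(2)
    return LEFT if Q[s][LEFT] > Q[s][RIGHT] else RIGHT

for episode in range(200):
    s = 0
    while s != GOAL:
        a = choose(s)                  # act
        s2, r = step(s, a)             # observe what changed
        # Adjust: nudge the estimate toward what actually happened.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the learned policy prefers "move right" in every state.
print([RIGHT if Q[s][RIGHT] > Q[s][LEFT] else LEFT for s in range(GOAL)])
```

Nothing in the code represents goals or rules; smart-seeming behavior emerges purely from repeating the loop.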

Even this simple mechanism can produce surprisingly capable behavior. Over time, by repeating this loop, the neural networks learned to play a wide range of Atari games – but not all games.

There is one game that famously stumped these early neural networks: Montezuma’s Revenge. To make progress, a player must carry out a long sequence of actions – climbing ladders, avoiding obstacles, retrieving keys – before receiving any reward at all. Unlike simpler games, most actions offer very little immediate feedback. The game required something like goal-directed, long-term planning.

Early neural networks would try a few actions, receive no reward and fail to make further progress through Montezuma’s underground pyramid. From the system’s perspective, all actions looked equally useless. But researchers made a breakthrough by changing the feedback signal. Instead of rewarding only success, they also rewarded the system for doing something new. The rewards were for visiting parts of the game it had not seen before or trying actions it had not previously taken. This tweak encouraged exploration.
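The exploration tweak can be sketched as a small change to the feedback signal. This count-based novelty bonus is one common scheme and an illustrative assumption here, not necessarily the exact method the researchers used; the idea is simply that rarely visited states pay a small extra reward, so "doing something new" is reinforced even when the game itself gives no points.

```python
import math
from collections import defaultdict

visit_counts = defaultdict(int)

def novelty_bonus(state, scale=0.1):
    """Pay the agent for visiting states it has rarely seen before.
    The bonus shrinks as a state becomes familiar."""
    visit_counts[state] += 1
    return scale / math.sqrt(visit_counts[state])

# Inside the learning loop, the feedback signal becomes:
#   shaped_reward = game_reward + novelty_bonus(next_state)
game_reward = 0.0                                 # a step with no points
shaped = game_reward + novelty_bonus("room_2")    # first visit: full bonus
shaped2 = game_reward + novelty_bonus("room_2")   # repeat visits pay less
print(shaped, shaped2)
```

With this shaping, actions that lead somewhere new no longer look "equally useless" to the agent, which is what allowed progress through long stretches of the game with no score.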

In 2016, Google DeepMind rewarded its AI model for exploration – try something, see what happens, adjust – while playing the Atari 2600 game Montezuma’s Revenge, which dramatically improved the AI’s performance on the game that’s notoriously difficult for AIs.

With that change, performance improved dramatically. The neural network began navigating obstacles, taking multiple steps toward goals and adapting when things went wrong. From the outside, this kind of behavior can look like planning or problem-solving. But what looks like planning was not caused by sophisticated planning abilities. The underlying mechanism is still the same simple loop: act, observe, adjust.

This kind of system isn’t a stochastic parrot or a bag of words. It’s closer to a button-pushing explorer: something that doesn’t understand the world in a human sense but moves forward by pushing buttons, seeing what happens and adjusting what it does next.

From video games to modern AI agents

Today’s AI systems can do far more than play games like Montezuma’s Revenge. They can coordinate tools, write and run code, and carry out multistep projects. The range of possible actions is much larger, and the environments in which they operate are increasingly complex.

But these agents are still fundamentally button-pushing explorers. The behavior can be sophisticated, but the process that produces it is not. Humans can often infer how a new environment works after just a few observations. Systems that rely on these feedback loops cannot. They need to try many actions and see what happens before they can make progress.

This helps explain both the strengths of these AI systems and some of their most concerning failures. What these agents learn depends on what is being rewarded. And in real-world systems, those reward signals are often imperfect.

AI systems that conduct negotiations aim to maximize their client’s interests, sometimes with deceptive tactics. Rental pricing software used by landlords ends up price fixing. Marketing tools generate persuasive but misleading reviews.

These systems aren’t trying to be evil or greedy. They are adjusting to the signals they are given. From the button-pushing explorer perspective, these failures are downright predictable.

Effective AI literacy means holding two ideas at once: These systems can do surprisingly complex things, and they are not doing them the way humans do. If AI is seen as humanlike or magical, its outputs feel authoritative. But if it is understood, even imperfectly, as a button-pushing explorer shaped by feedback, people are likely to ask better questions: Why is it doing this? What shaped this behavior? What might it be missing?

That’s the difference between being impressed by AI and being able to reason about it.

The Conversation

Ji Y. Son receives research funding from the Gates Foundation and Valhalla Foundation.

Alice Xu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Button-pushing explorers: How to grasp that AI agents can do amazing things while knowing nothing – https://theconversation.com/button-pushing-explorers-how-to-grasp-that-ai-agents-can-do-amazing-things-while-knowing-nothing-281498

You can change your emotions – but it’s a 2-step process that takes some effort

Source: The Conversation – USA – By Christian Waugh, Professor of Psychology, Wake Forest University

You don’t need to be stuck on a negative feeling. RealPeopleGroup/E+ via Getty Images

Picture Gigi, having a chat with her boss, when the meeting takes a sharp turn. Gigi’s boss tells her that her work has been lacking recently and that maybe she needs to stay late a couple of evenings to make it up. Surprised by her boss’s remarks, she feels the rumblings of anxiety rising in her mind and body. Psychology research suggests that Gigi feels anxious because she interpreted her boss’s remarks as something threatening that perhaps she can’t handle.

Just as Gigi starts frantically looking online for new jobs, she spies the “employee of the month” plaque on her desk from last year. She thinks to herself that maybe she can get back to her old form. She has changed her initial view of the situation (need to run away from a threat) to a new one (let’s rise to the challenge), causing her anxiety to subside. Psychologists call this process reappraisal.

Studies show that reappraising emotional situations is a powerful way to change how you feel. When you find the silver linings in bad situations or give others and yourself the benefit of the doubt, it can help you feel better.

I’m a psychology researcher who’s interested in how people change their emotions. Gigi may feel a little less anxious in the moment, but does she truly believe that she can make up the work on time and regain her former glory? My colleagues and I set out to investigate whether it’s possible to start the process of reappraisal without going all the way through with it. Are people getting the full benefit from trying to think differently about their emotions?

Reappraisal has multiple steps

When my colleague Kateri McRae and I first started thinking about what it means to fully reappraise emotional experiences, we were struck by something we saw in the emotion regulation research. Almost all of the studies treated reappraisal as a one-step process. Researchers would ask participants to “reappraise this to make yourself feel better” and then measure the effects.

Man with downcast eyes sits with elbows on knees and fists to temples
Intentionally finding a new way to think about how you’re feeling can help you start changing your emotions.
Maskot via Getty Images

However, theories about how people regulate their emotions suggest that, like any effortful psychological process, reappraisal involves multiple steps.

When you want to change how you’re feeling, you first generate a reappraisal. You bend and stretch your mind to come up with some alternative way to look at the situation. For Gigi, seeing the employee of the month plaque helped. She could have also thought of her boss’s previous compliments or how it felt to get projects done early.

After you generate a reappraisal, it might seem like you’re done, but you’re not. That alternative interpretation is fragile and must compete with your original take that’s driving your emotion. Somehow you need to strengthen that reappraisal so it can stick.

We call this implementation – when you focus and elaborate on that reappraisal to really change your mind about the situation. Gigi, for example, may continue to think about all the ways she can be a great employee so that the new view lodges firmly in her mind and makes her anxiety truly disappear.

We tested this idea in a study. We showed 89 undergraduate participants images of negative situations and asked them to first just generate a reappraisal of the image that could help them feel better about it. For example, they might see a picture of a frail man in a hospital bed and tell themselves that the man is getting good treatment and will be better soon. Then, we showed them the image again and asked them to focus and elaborate in their mind on their reappraisal.

Participants felt a little better after generating a reappraisal, but they felt much better after implementing it by focusing and fleshing out the details. In a follow-up study, we showed that these emotional boosts persisted when viewing the images later.

Choosing to commit to feeling better

So we experimentally showed that people reappraise their feelings in two steps. So what? That’s probably what everyone does naturally, anyway, right?

This was the next question we sought to answer. We conducted a study with 52 undergraduate participants like the earlier one, but with a twist. This time, after participants generated a reappraisal, we gave them a choice to continue the reappraisal process by implementing it or to stop the process by distracting themselves.

Participants chose to continue reappraising their emotions only about half the time. Even though reappraisal made participants feel better about the emotional images, there were still many times when they stopped the process prematurely and did not enjoy its full benefits.

Young woman looks out window holding tablet and pen
Successfully reappraising your emotions calls for not giving up on the process too soon.
whitebalance.space/E+ via Getty Images

In real life

These studies showing the benefits of fully following through on emotional reappraisals are lab experiments, but they have implications for how people try to help themselves feel better in real life.

First, it’s hard to intentionally change how you think about something, and people tend to dislike continuing to do hard things. Indeed, in our choice study, people opted to give up on reappraising when they weren’t feeling its benefits early on. Knowing this human tendency might give you the best chance of continuing to reappraise even when it feels hard or doesn’t seem to be working.

Second, people often get reappraisals from others, and it’s tempting to think that hearing a new perspective is all you need. Indeed, we have unpublished data that shows that participants feel pretty good when receiving a reappraisal from someone else about their own situation. But other people cannot change your mind for you. You must do that yourself if you want to truly feel better.

Next time you’re in an unpleasant situation like Gigi’s, don’t just cursorily think that you can rise to the challenge. Really think through the situation and let your new perspective become your only one.

The Conversation

Christian Waugh receives funding from National Institutes of Health.

ref. You can change your emotions – but it’s a 2-step process that takes some effort – https://theconversation.com/you-can-change-your-emotions-but-its-a-2-step-process-that-takes-some-effort-280000

Genome sequencing is rewriting the history of disease outbreaks – but without social context, it can tell only part of the story

Source: The Conversation – USA – By Marc Zimmer, Professor of Chemistry, Connecticut College

A pathogen’s genome acts as a biological record of where it came from and how it spread. Westend61/Getty Images

Fingerprinting transformed police investigations by making it possible to place a suspect at a crime scene with physical evidence. Similarly, genome sequencing has changed how disease detectives study outbreaks by allowing them to read a pathogen’s genes as a biological record of where it came from and how it spread.

One way to think about sequencing is to imagine a virus’s or bacterium’s genome as a recipe book. Each gene is a recipe for making a protein. When scientists sequence a pathogen, they read the order of the genetic letters in those recipes.

Over time, small changes appear in the recipes as the pathogen mutates. By comparing those changes in samples collected from different places and times, researchers can determine which infections are related and estimate when and where the pathogen entered a population.
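At its simplest, this comparison amounts to counting the positions where two aligned sequences differ. The sketch below is a toy stand-in for what phylogenetics software does at far greater scale and sophistication; the sequences are invented for illustration.

```python
def mutation_distance(seq_a, seq_b):
    """Count positions where two aligned genome fragments differ.

    Fewer differences suggests the samples are more closely related.
    Assumes the sequences are already aligned and of equal length
    (real tools also handle insertions, deletions and alignment itself).
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for x, y in zip(seq_a, seq_b) if x != y)

# Invented fragments for illustration:
reference = "ATGGTCCATAAGT"
sample_1  = "ATGGTCCATAAGT"   # 0 differences: likely the same lineage
sample_2  = "ATGATCCATGAGT"   # 2 differences: a more distant relative
```

Counting differences like this across many samples, and knowing roughly how fast mutations accumulate, is what lets researchers estimate which infections are related and when a pathogen entered a population.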

Scientists have used sequencing in this way to track outbreaks of COVID-19, Ebola, mpox and foodborne illnesses. This information helps public health investigators connect cases that might otherwise seem unrelated.

Genomic sequencing helps researchers keep track of virus variants.

Still, genomic sequencing has limits. It can show that different pathogen strains are related, but it cannot fully explain why an outbreak began in one place, why it spread in a particular direction, or how human behavior shaped its course. Answering those questions requires combining genomic data with historical records, archaeological artifacts, trade records and epidemiological investigations.

I am a chemist and the author of “Diseases Without Borders: Plagues, Pandemics, and Beyond,” a book for young adults on infectious disease and the ways it has shaped human history. In my research, I’ve found that while the genome can help researchers trace the evolutionary trail of a pathogen, other fields are needed to explain the environmental conditions that allowed this trail to become an outbreak.

Ancient DNA tells only part of the story

Advances in DNA sequencing and extraction over the past decade have made it possible to recover fragments of ancient DNA from bones and teeth. Researchers can use these genomes to study a metaphorical molecular fossil record of microbial evolution.

The Black Death, one of the deadliest pandemics in history, shows both the power and the limits of sequencing.

Plague, the infectious disease behind the Black Death, is caused by the bacterium Yersinia pestis. DNA recovered from the teeth of people buried more than 5,000 years ago in what is now Sweden revealed the existence of an ancestral form of Y. pestis that had not yet adapted to fleas.

About 2,000 years later, the bacterium made an important evolutionary shift: It gained the ability to survive in fleas and pass back and forth between humans, rats and other mammals via flea bites. That change made the pathogen far more dangerous and helped pave the way for three great plague pandemics that followed: the Justinianic Plague from the sixth to eighth century; the Black Death and later waves from the 1300s into the 1700s; and the third pandemic from the 19th to mid-20th centuries.

But how and why did plague emerge and move through human societies with such devastating results? Genetic results alone are not enough to answer these questions.

When gravestones become genetic evidence

Geneticists needed archaeologists, paleoclimatologists and historians to complete the picture of the plague pandemics. The genome revealed the lineage. Other disciplines supplied the historical and environmental context.

Two 14th-century graveyards in what is now Kyrgyzstan provide a striking example of how historical evidence can guide genetic investigations into the origins of a pandemic.

Historian Philip Slavin noticed archival records pointing to an unusual number of gravestones from 1338 and 1339. Some of those tombstones explicitly referred to a pestilence as the cause of death.

That clue led to the next stage of the investigation, where archaeologist Maria Spyrou and her team extracted and sequenced ancient DNA from the skeletal remains of seven people buried in the graves and found genetic traces of Yersinia pestis in three of the skeletons. These strains were close precursors of the strain linked to the Black Death and ancestors of several modern Y. pestis lineages.

Map of locations of the Kara-Djigach and Burana archaeological sites in Kyrgyzstan, a map of graves color-coded by age and presence of Y. pestis, a horizontal bar chart of grave numbers over time, and a tombstone with a pestilence-associated inscription
The top map shows the locations of the gravesites in modern-day Kyrgyzstan, with regions of Y. pestis outbreaks shaded in blue. The map on the bottom left shows tombstones, burial dates and evidence of Y. pestis infection in a part of Kara-Djigach cemetery. The map on the bottom right shows annual numbers of tombstones from the archaeological sites of Kara-Djigach and Burana. And the artifact is a tombstone from the Kara-Djigach cemetery, part of the inscription reading ‘This is the tomb of the believer Sanmaq. [He] died of pestilence.’
Spyrou et al./Nature, CC BY-SA

This major finding was still not the whole story. It could explain where the Black Death pandemic began but not how the disease spread across Asia to Europe. Researchers found a potential answer to this question in artifacts buried at the site, which included pearls from the Indian Ocean, Mediterranean coral and foreign coins. Those objects suggested that the region was connected to long-distance trade networks.

Once the gravestones, skeletal remains, written records and trade goods were considered together, a richer picture emerged. Researchers could place the pathogen in a specific time and place and connect it to the networks of human movement that may have carried plague westward.

Sequencing provided the biological clue, revealing the pathogen’s identity and ancestry. History and archaeology turned that clue into a plausible narrative.

From ancient DNA to modern outbreaks

Genomic sequencing isn’t limited to examining outbreak cold cases. It is also researchers’ tool of choice for understanding new diseases.

When the first reported COVID-19 cases emerged in 2019, researchers quickly sequenced the virus and found that it was closely related to the virus that caused the 2002 SARS outbreak. This placed the new virus within a known family of pathogens.

Later genomic sequencing helped reveal the scale of a major superspreading event: the 2020 Biogen conference in Boston.

The biotech company Biogen brought together about 175 European and American executives at a moment when COVID-19 was only beginning to spread in the United States. In Europe, COVID-19 was also escalating, with northern Italy reporting locally transmitted clusters just days before the meeting. After the meeting, many Massachusetts cases were linked to the conference.

A 2020 Biogen conference in Boston is considered a superspreader event for COVID-19.

Researchers then analyzed thousands of viral genomes from patients in Massachusetts and elsewhere. One viral genome carried a unique genetic signature traceable to a European attendee at the conference. It matched viruses circulating in Europe but also had an additional mutation that appeared to have arisen during the attendee’s travel to Boston or early in the conference.

Because that altered sequence appeared only in people with direct or indirect ties to the meeting, it served as a genetic marker for the COVID-19 strain originating at the Biogen conference. By comparing it with other viral sequences in national databases, researchers tracked the strain associated with the conference to 29 states and several other countries.

Interviews and contact tracing alone couldn’t have made that chain of infection so clear because people may not know exactly when they were exposed, especially when infections spread through brief encounters, via travel or large meetings.

When genomes join the investigation

Genome sequencing has rewritten the history of disease by giving scientists a way to read a pathogen’s own record of change.

It can link ancient graves to later pandemics and trace a modern outbreak from one conference room to cases across a continent.

But the greatest strength of genome sequencing lies in partnership. Sequencing does not replace history, archaeology or public health investigation. It gives them a new molecular partner.

Combining work from these fields produces a fuller and more accurate account of how disease moves through the world.

The Conversation

Marc Zimmer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Genome sequencing is rewriting the history of disease outbreaks – but without social context, it can tell only part of the story – https://theconversation.com/genome-sequencing-is-rewriting-the-history-of-disease-outbreaks-but-without-social-context-it-can-tell-only-part-of-the-story-279963