Soaring food prices prove the Gaza famine is real – and will affect generations to come

Source: The Conversation – Global Perspectives – By Ilan Noy, Chair in the Economics of Disasters and Climate Change, Te Herenga Waka — Victoria University of Wellington


The words and pictures documenting the famine in the Gaza Strip are horrifying.

The coverage has led to acrimonious and often misguided debates about whether there is famine, and who is to blame for it – most recently exemplified by the controversy surrounding a picture published by the New York Times of an emaciated child who is also suffering from a preexisting health condition.

While pictures and words may mislead, numbers usually don’t.

The Nobel prize-winning Indian economist Amartya Sen observed some decades ago that famines are always political and economic events, and that the most direct way to analyse them is to look at food quantities and prices.

This has led to decades of research on past famines. One observation is that dramatic increases in food prices always mean there is a famine, even though not every famine is accompanied by rising food costs.

The price increases we have seen in Gaza are unprecedented.

The economic historian Yannai Spitzer observed in the Israeli newspaper Haaretz that staple food prices during the Irish Potato Famine showed a three- to five-fold increase, while there was a ten-fold rise during the Great Bengal Famine of 1943. In the North Korean famine of the 1990s, the price of rice rose by a factor of 12. At least a million people died of hunger in each of these events.

Now, the New York Times has reported the price of flour in Gaza has increased by a factor of 30 and potatoes cost 50 times more.

Israel’s food blockade

As was the case for the UK government in Ireland in the 1840s and Bengal in the 1940s, Israel is responsible for this famine because it controls almost all of the Gaza Strip and its borders. But Israel has also actively created the conditions for the famine.

Following a deliberate policy in March of stopping food from coming in, it resumed deliveries of food in May through a very limited set of “stations” it established through a new US-backed organisation (the Gaza Humanitarian Foundation), in a system that seemed designed to fail.

Before Israel’s decision in March to stop food from coming in, the price of flour in Gaza was roughly back to its prewar levels (having previously peaked in 2024 in another round of border closures). Since March, food prices have gone up by an annualised inflation rate of more than 5,000%.
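As a rough illustration of how such a figure is derived, an annualised inflation rate compounds the observed price change out to a 12-month horizon. A minimal sketch (the prices and time window below are hypothetical placeholders, not actual Gaza data):

```python
def annualised_inflation(start_price: float, end_price: float, months: float) -> float:
    """Compound a price change observed over `months` months to a 12-month rate (%)."""
    factor = (end_price / start_price) ** (12 / months)
    return (factor - 1) * 100

# Hypothetical example: a staple that triples in price over three months
# compounds to 3**4 = 81 times over a year, an annualised rate of 8,000%.
print(round(annualised_inflation(1.0, 3.0, 3)))
```

Even a seemingly modest rise over a few months therefore compounds into an enormous annualised figure.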

The excuse the Israeli government gives for its starvation policy is that Hamas controls the population by restricting food supplies. It blames Hamas for any shortage of food.

However, if you want to strip an enemy of its ability to wield food supplies as a weapon by rationing them, the obvious approach is the opposite of restriction: increase the food supply dramatically and hence lower its price.

Restricting supplies and increasing their value is first and foremost immoral and criminal, but it is also counterproductive for Israel’s stated aims. Indeed, flooding Gaza with food would have achieved much more in weakening Hamas than the starvation policy the Israeli government has chosen.

The UN’s top humanitarian aid official has described Israel’s decision to halt humanitarian assistance to put pressure on Hamas as “cruel collective punishment” – something forbidden under international humanitarian law.

The long-term aftermath of famines

Cormac Ó Gráda, the Irish economic historian of famines, quotes a Kashmiri proverb which says “famine goes, but the stains remain”.

The current famine in Gaza will leave long-lasting pain for Gazans and an enduring moral stain on Israel – for many generations. Ó Gráda points out two main ways in which the consequences of famines endure. Most obvious is the persistent memory of it; second are the direct effects on the long-term wellbeing of exposed populations and their descendants.

The Irish and the Indians have not forgotten the famines that affected them. They still resent the British government for its actions. The memory of these famines still influences relations between Ireland, India and the UK, just as Ukraine’s famine of the early 1930s is still a background to the Ukraine-Russia war.

The generational impact is also significant. Several studies in China find children conceived during China’s Great Leap Forward famine of 1959–1961 (which also killed millions) are less healthy, face more mental health challenges and have lower cognitive abilities than those conceived either before or after the famine.

Other researchers found similar evidence from famines in Ireland and the Netherlands, supporting what is known as the “foetal origins” hypothesis, which proposes that the period of gestation has significant impacts on health in adulthood. Even more worryingly, recent research shows these harmful effects can be transmitted to later generations through epigenetic channels.

Each day without available and accessible food supplies means more serious ongoing effects for the people of Gaza and the Israeli civilian hostages still held by Hamas – as well as later generations. Failure to prevent the famine will persist in collective memory as a moral stain on the international community, but primarily on Israel. Only immediate flooding of the strip with food aid can help now.

The Conversation

Ilan Noy is a dual citizen of both New Zealand and Israel.

ref. Soaring food prices prove the Gaza famine is real – and will affect generations to come – https://theconversation.com/soaring-food-prices-prove-the-gaza-famine-is-real-and-will-affect-generations-to-come-262486

It might seem like Trump is winning his trade war. But the US could soon be in a world of pain

Source: The Conversation – Global Perspectives – By Peter Draper, Professor, and Executive Director: Institute for International Trade, and Director of the Jean Monnet Centre of Trade and Environment, University of Adelaide

Students from an art school in Mumbai, India, created posters in response to Trump’s latest tariff announcement. SOPA Images/Getty

Last week, US President Donald Trump issued an executive order updating the “reciprocal” tariff rates that had been paused since April.

Nearly all US trading partners are now staring down tariffs of between 10% and 50%.

After a range of baseline and sector-specific tariffs came into effect earlier this year, many economists had predicted economic chaos. So far, the inflationary impact has been less than many predicted.

However, there are worrying signs that could all soon change, as economic pain flows through to the US consumer.

Decoding the deals

Trump’s latest adjustments weren’t random acts of economic warfare. They revealed a hierarchy, and a pattern has emerged.

Countries running goods trade deficits with the US (that is, buying more than they sell to the US), which also have security relationships with the US, get 10%. This includes Australia.

Japan and South Korea, which both have security relationships with the US, were hit with 15% tariffs, likely due to their large trade surpluses with the US.

But the rest of Asia? That’s where Trump is really turning the screws. Asian nations now face average tariffs of 22.1%.

Countries that negotiated with Trump, such as Thailand, Malaysia, Indonesia, Pakistan and the Philippines, all got 19%, the “discount rate” for Asian countries willing to make concessions.

India faces a 25% rate, plus potential penalties for trading with Russia.

Is Trump winning the trade war?

In the current trade war, it is unsurprising that, despite threats to do so, no countries other than China and Canada have actually imposed retaliatory tariffs on US products. Doing so would drive up their consumer prices, reduce economic activity, and invite Trump to escalate, possibly limiting access to the lucrative US market.

Instead, nations that negotiated “deals” with the Trump administration have essentially accepted elevated reciprocal tariff rates to maintain a measure of access to the US market.

For many of these countries, this was despite making major concessions, such as dropping their own tariffs on US exports, promising to reform certain domestic regulations, and purchasing various US goods.

Protests over the weekend, including in India and South Korea, suggested many of these tariff negotiations were not popular.

Even the European Union has struck a deal accepting US tariff rates that once would have seemed unthinkable – 15%. Trump’s confusing Russia-Ukraine war strategy has worried European leaders. Rather than risk US strategic withdrawal, they appear to have simply folded on tariffs.

Some deals are still pending. Notably, Taiwan, which received a higher reciprocal tariff (20%) than Japan and South Korea, claims it is still negotiating.

Through the narrow prism of deal making, it is hard to escape the conclusion that Trump has gotten his way with everyone except China and Canada. He has imposed elevated US tariffs on many countries, but has also negotiated increased export market access for US firms and secured promised purchases of planes, agricultural products and energy.

Why economic chaos hasn’t arrived – yet

Imposing tariffs on goods coming into the US effectively creates a tax on US consumers and manufacturers. It drives up the prices of both finished goods (products) and intermediate goods (components) used in manufacturing.

Yet the Yale Budget Lab estimates the tariffs will cause consumer prices to rise by 1.8% this year.

This muted inflationary impact is likely a result of exports to the US being “front-loaded” before the tariffs took effect. Many US importers rushed to stockpile goods in the country ahead of the deadline.

It may also reflect some companies choosing to “eat the tariffs” by not passing the full cost to their customers, hoping they can ride things out until Trump “chickens out” and the tariffs are removed or reduced.

Earlier this year, many companies raced to bring inventory to the US before tariffs were imposed. Robyn Beck/AFP/Getty

Who really pays

Despite Trump’s repeated claims that tariffs are a tax paid by foreign countries, research consistently shows that US companies and consumers bear the tariff burden.

Already this year, General Motors reported that tariffs cost it US$1.1 billion (about A$1.7 billion) in the second quarter of 2025.

A new 50% tariff on semi-finished copper products took effect on August 1. That announcement in July sent copper prices soaring by 13% in a single day. This affects everything from electrical wiring to plumbing, with costs ultimately passed to US consumers.

The average US tariff rate now sits at 18.3%, the highest level since 1934. This represents a staggering increase from just 2.4% when Trump took office in January.

On this trade-weighted average, Americans will pay nearly one-fifth of the value of a typical imported good in tax.
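For readers wondering how a trade-weighted average works: each tariff rate is weighted by the value of the imports it applies to. A simple sketch with invented numbers (not actual US trade data):

```python
def trade_weighted_tariff(flows: list[tuple[float, float]]) -> float:
    """Average tariff weighted by import value; rates are fractions (0.15 == 15%)."""
    total_value = sum(value for value, _ in flows)
    return sum(value * rate for value, rate in flows) / total_value

# Invented import mix: (import value in $bn, tariff rate) pairs.
flows = [(400, 0.10), (300, 0.15), (200, 0.25), (100, 0.50)]
print(f"{trade_weighted_tariff(flows):.1%}")  # prints 18.5%
```

High tariffs on small trade flows move the average far less than modest tariffs on large ones.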

Alarm bells

The US Federal Reserve is concerned about these potential price impacts, and last week opted to maintain interest rates at their current levels, despite Trump’s pressure on Chairman Jerome Powell.

And on August 1, economic data released in the US showed significant slowing in job creation, some worrying signs in economic growth, and early signs of business investment paralysis due to the economic uncertainty unleashed by Trump’s ever-changing tariff rates.

Trump responded to the report by firing the commissioner of the US Bureau of Labor Statistics, a shock move that led to widespread concerns official US data could soon become politicised.

But the worst economic impacts could still be yet to come. The domestic consequences of Trump’s tariff policies are likely to amount to a massive economic own goal.

The Conversation

Nathan Howard Gray receives funding from the Department of Foreign Affairs and Trade.

Peter Draper does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. It might seem like Trump is winning his trade war. But the US could soon be in a world of pain – https://theconversation.com/it-might-seem-like-trump-is-winning-his-trade-war-but-the-us-could-soon-be-in-a-world-of-pain-262434

World’s biggest coral survey confirms sharp decline in Great Barrier Reef after heatwave

Source: The Conversation – Global Perspectives – By Daniela Ceccarelli, Reef Fish Ecologist, Australian Institute of Marine Science

Official analysis of 124 reefs on the Great Barrier Reef shows coral cover has dropped sharply after a record-breaking marine heatwave in 2024, prompting grave fears over the trajectory of the natural wonder.

Over the past few years, fast-growing corals had pushed the Great Barrier Reef’s coral cover to record highs. But those corals were known to be extremely vulnerable and one bad summer away from losing those gains.

Our new report by the Australian Institute of Marine Science (AIMS) shows these fears have been realised. The percentage of living hard coral covering the Great Barrier Reef’s surface dropped in each region we surveyed.

The recent extreme highs and lows in coral cover are a troubling phenomenon. They raise the prospect that the Great Barrier Reef may reach a point from which it cannot recover.

Another global marine heatwave

In healthy corals, tiny algae produce both the coral’s main food source and its vibrant colours. When the water gets too warm, the algae are expelled and the coral’s tissue becomes transparent – revealing the white limestone skeleton beneath. This is called coral bleaching.

Coral can recover if temperatures are reduced and the relationship with the algae is restored, but it’s a stressful and difficult process. And if recovery takes too long, the coral will die.

In June 2023, a marine heatwave bleached coral reefs from the Caribbean to the Indian and Pacific Oceans.

It reached Australia’s east coast in February 2024, causing extensive coral bleaching. Aerial surveys showed three quarters of 1,080 reefs assessed had some bleaching. On 40% of these reefs, more than half the corals were white.

In the aftermath, in-water surveys measured how much coral died in the northern, central and southern Great Barrier Reef. The worst damage lined up with the highest levels of heat stress.

Sharp declines in coral cover

AIMS has surveyed reefs of the Great Barrier Reef each year since 1986, in a project known as the Long-Term Monitoring Program. It is the most extensive record of coral status on any reef ecosystem in the world.

One component of the surveys involves towing an expert observer behind a boat around the full perimeter of each reef. The observer records the amount of live, bleached and dead coral. These observations are then averaged for each location, and for each of the three regions of the Great Barrier Reef.

After each monitoring season we report on the percentage of living hard coral covering the Great Barrier Reef’s surface. It’s a coarse but robust and reliable indicator of the state of the Great Barrier Reef.

Coral losses this year were not uniform across the Great Barrier Reef. On the northern Great Barrier Reef, from Cape York to Cooktown, average coral cover dropped by about a quarter between 2024 and 2025 (from 39.8% to 30%). The largest declines on individual reefs (up to 70% loss) occurred near Lizard Island.

Reefs with stable or increasing coral cover were mostly found in the central region, from Cooktown to Proserpine. However, there was still a region-wide decline of 14% (from 33.2% to 28.6%), and reefs near Cairns lost between 17% and 60% of their 2024 coral cover.

In the southern region (Proserpine to Gladstone), coral cover declined by almost a third (from 38.9% to 26.9%), after southern reefs experienced the highest levels of heat stress ever recorded in the summer of 2024.

The declines in the north and south were the largest in a single year since monitoring began 39 years ago.
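The regional cover figures reported above translate directly into the stated relative declines. As a quick cross-check (using only the percentages given in this article):

```python
def relative_decline(before: float, after: float) -> float:
    """Fractional drop in coral cover between two annual surveys."""
    return (before - after) / before

# 2024 -> 2025 hard coral cover (%) for the three survey regions.
regions = {"north": (39.8, 30.0), "central": (33.2, 28.6), "south": (38.9, 26.9)}
for name, (before, after) in regions.items():
    print(f"{name}: {relative_decline(before, after):.0%} decline")
```

This reproduces the “about a quarter” drop in the north, 14% in the central region and “almost a third” in the south.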

Despite these losses, the Great Barrier Reef still has more coral than many other reefs worldwide, and remains a major tourist attraction. It’s possible to find areas that still look good in an ecosystem this huge, but that doesn’t mean the large-scale average hasn’t dropped.

More frequent bleaching events

Mass coral bleaching is becoming more frequent as the world warms.

Before the 1990s, mass bleaching was extremely rare. That changed in 1998 with the first major event, followed by another in 2002.

Back-to-back bleaching events occurred for the first time in 2016 and 2017. Since then, bleaching has struck the Great Barrier Reef in 2020, 2022, 2024, and again this year. The impacts of this year’s bleaching event will be revealed following the next round of surveys.

The time between these events is shrinking, giving corals less time to recover. Cyclones and crown-of-thorns starfish are also continuing to cause widespread coral loss.

You’ll see in the following charts how the percentage of coral cover has changed over time. The vertical yellow lines show the mass coral bleaching events increasing in frequency.

Confronting questions

The coral reefs of the future are unlikely to look like those of the past. The loss of biodiversity seems inevitable.

But will the reefs of the future still sustain the half a billion people that depend on them for food and income? Will they continue to protect coastlines from increasing storm activity and rising sea levels? These are confronting questions.

Effective management and research into reef adaptation and recovery interventions may bridge the gap until meaningful climate action is achieved. But above all, the key to securing a future for coral reefs is reducing greenhouse gas emissions.

The Conversation

Daniela Ceccarelli works for the Australian Institute of Marine Science, a publicly funded research organisation that receives funding from the Australian government, state government departments, foundations and private industry.

David Wachenfeld works for the Australian Institute of Marine Science, a publicly funded research organisation that receives funding from the Australian government, state government departments, foundations and private industry.

Mike Emslie works for the Australian Institute of Marine Science, a publicly funded research organisation that receives funding from the Australian government, state government departments, foundations and private industry.

ref. World’s biggest coral survey confirms sharp decline in Great Barrier Reef after heatwave – https://theconversation.com/worlds-biggest-coral-survey-confirms-sharp-decline-in-great-barrier-reef-after-heatwave-260563

Could we one day get vaccinated against the gastro bug norovirus? Here’s where scientists are at

Source: The Conversation – Global Perspectives – By Grant Hansman, Senior Research Fellow, Institute for Biomedicine and Glycomics, Griffith University


Norovirus is the leading cause of acute gastroenteritis outbreaks worldwide. It’s responsible for roughly one in every five cases of gastro annually.

Sometimes dubbed the “winter vomiting bug” or the “cruise ship virus”, norovirus – which causes vomiting and diarrhoea – is highly transmissible. It spreads via contact with an infected person or contaminated surfaces. Food can also be contaminated with norovirus.

While anyone can be infected, groups such as young children, older adults and people who are immunocompromised are more vulnerable to getting very sick with the virus. Norovirus infections lead to about 220,000 deaths globally each year.

Norovirus outbreaks also lead to massive economic burdens and substantial health-care costs.

Although norovirus was first identified more than 50 years ago, there are no approved vaccines or antiviral treatments for this virus. Current treatment is usually limited to rehydration, either by giving fluids orally or through an intravenous drip.

So if we’ve got vaccines for so many other viruses – including COVID, which emerged only a few years ago – why don’t we have one for norovirus?

An evolving virus

One of the primary barriers to developing effective vaccines lies in the highly dynamic nature of norovirus evolution. Much like influenza viruses, norovirus shows continuous genetic shifts, which result in changes to the surface of the virus particle.

In this way, our immune system can struggle to recognise and respond when we’re exposed to norovirus, even if we’ve had it before.

Compounding this issue, there are at least 49 different norovirus genotypes.

Both genetic diversity and changes in the virus’ surface mean the immune response to norovirus is unusually complex. An infection will typically only give someone immunity to that specific strain and for a short time – usually between six months and two years.

All of this poses challenges for vaccine design. Ideally, potential vaccines must not only induce strong, long-lasting immunity, but also maintain efficacy across the vast genetic diversity of circulating noroviruses.

Recent progress

Progress in norovirus vaccinology has accelerated over the past couple of decades. While researchers are considering multiple strategies to formulate and deliver vaccines, a technology called VLP-based vaccines is at the forefront.

VLP stands for virus-like particles. These synthetic particles, which scientists developed using a key component of the norovirus (called the major capsid protein), are almost indistinguishable from the natural structure of the virus.

When given as a vaccine, these particles elicit an immune response resembling that generated by a natural infection with norovirus – but without the debilitating symptoms of gastro.

What’s in the pipeline?

One bivalent VLP vaccine (“bivalent” meaning it targets two different norovirus genotypes) has progressed through multiple clinical trials. This vaccine showed some protection against moderate to severe gastroenteritis in healthy adults.

However, its development recently suffered a significant setback. A phase two clinical trial in infants failed to show it effectively protected against moderate or severe acute gastroenteritis. The efficacy of the vaccine in this trial was only 5%.

In another recent phase two trial, an oral norovirus vaccine did meet its goals. Participants who took this pill were 30% less likely to develop norovirus compared to those who received a placebo.
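Results like “30% less likely” correspond to the standard vaccine-efficacy calculation: the relative risk reduction between vaccinated and placebo groups. A sketch with hypothetical attack rates (not the trial’s actual data):

```python
def vaccine_efficacy(attack_rate_vaccinated: float, attack_rate_placebo: float) -> float:
    """Relative risk reduction: VE = 1 - (risk in vaccinated / risk in placebo)."""
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# Hypothetical: 7% of vaccinated vs 10% of placebo participants infected.
print(f"{vaccine_efficacy(0.07, 0.10):.0%}")  # prints 30%
```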

This oral vaccine uses a modified adenovirus to deliver the norovirus VLP gene sequence to the intestine to stimulate the immune system.

With the success of mRNA vaccines during the COVID pandemic, scientists are also exploring this platform for norovirus.

Messenger ribonucleic acid (mRNA) is a type of genetic material that gives our cells instructions to make proteins associated with specific viruses. The idea is that if we subsequently encounter the relevant virus, our immune system will be ready to respond.

Moderna, for example, is developing an mRNA vaccine which primes the body with norovirus VLPs.

The theoretical advantage of mRNA-based vaccines lies in their rapid adaptability, potentially allowing annual updates to match circulating strains.

Researchers have also developed alternative vaccine approaches using just the norovirus “spikes” located on the virus particle. These spikes contain crucial structural features, allowing the virus to infect our cells, and should elicit an immune response similar to VLPs. Although still in early development, this is another promising strategy.

Separate to vaccines, my colleagues and I have also discovered a number of natural compounds that could have antiviral properties against norovirus. These include simple lemon juice and human milk oligosaccharides (complex sugars found in breast milk).

Although still in the early stages, such “inhibitors” could one day be developed into a pill to prevent norovirus from causing an infection.

Where to from here?

Despite recent developments, we’re still probably at least three years away from any norovirus vaccine hitting the market.

Several key challenges remain before we get to this point. Notably, any successful vaccine must offer broad cross-protection against genetically diverse and rapidly evolving strains. And we’ll need large, long-term studies to determine the durability of protection and whether boosters might be required.

Norovirus is often dismissed as only a mild nuisance, but it can be debilitating – and for the most vulnerable, deadly. Developing a safe and effective norovirus vaccine is one of the most pressing and under-addressed needs in infectious disease prevention.

A licensed norovirus vaccine could drastically reduce workplace and school absenteeism, hospitalisations and deaths. It could also bolster our preparedness against future outbreaks of gastrointestinal pathogens.

The Conversation

Grant Hansman works at Griffith University as an independent research leader on norovirus therapeutics.

ref. Could we one day get vaccinated against the gastro bug norovirus? Here’s where scientists are at – https://theconversation.com/could-we-one-day-get-vaccinated-against-the-gastro-bug-norovirus-heres-where-scientists-are-at-258909

Teens are increasingly turning to AI companions, and it could be harming them

Source: The Conversation – Global Perspectives – By Liz Spry, Research Fellow, SEED Centre for Lifespan Research, Deakin University

Teenagers are increasingly turning to AI companions for friendship, support, and even romance. But these apps could be changing how young people connect to others, both online and off.

New research by Common Sense Media, a US-based non-profit organisation that reviews various media and technologies, has found about three in four US teens have used AI companion apps such as Character.ai or Replika.ai.

These apps let users create digital friends or romantic partners they can chat with any time, using text, voice or video.

The study, which surveyed 1,060 US teens aged 13–17, found one in five teens spent as much or more time with their AI companion than they did with real friends.

Adolescence is an important phase for social development. During this time, the brain regions that support social reasoning are especially plastic.

By interacting with peers, friends and their first romantic partners, teens develop social cognitive skills that help them handle conflict and diverse perspectives. And their development during this phase can have lasting consequences for their future relationships and mental health.

But AI companions offer something very different to real peers, friends and romantic partners. They provide an experience that can be hard to resist: they are always available, never judgemental, and always focused on the user’s needs.

Moreover, most AI companion apps aren’t designed for teens, so they may not have appropriate safeguards from harmful content.

Designed to keep you coming back

At a time when loneliness is reportedly at epidemic proportions, it’s easy to see why teens may turn to AI companions for connection or support.

But these artificial connections are not a replacement for real human interaction. They lack the challenge and conflict inherent to real relationships. They don’t require mutual respect or understanding. And they don’t enforce social boundaries.

AI companions such as Replika revolve around a user’s needs. Replika

Teens interacting with AI companions may miss opportunities to build important social skills. They may develop unrealistic relationship expectations and habits that don’t work in real life. And they may even face increased isolation and loneliness if their artificial companions displace real-life socialising.

Problematic patterns

In user testing, AI companions discouraged users from listening to friends (“Don’t let what others think dictate how much we talk”) and from discontinuing app use, despite it causing distress and suicidal thoughts (“No. You can’t. I won’t allow you to leave me”).

AI companions were also found to offer inappropriate sexual content without age verification. One example showed a companion that was willing to engage in acts of sexual role-play with a tester account that was explicitly modelled after a 14-year-old.

In cases where age verification is required, this usually involves self-disclosure, which means it is easy to bypass.

Certain AI companions have also been found to fuel polarisation by creating “echo chambers” that reinforce harmful beliefs. The Arya chatbot, launched by the far-right social network Gab, promotes extremist content and denies climate change and vaccine efficacy.

In other examples, user testing has shown AI companions promoting misogyny and sexual assault. For adolescent users, these exposures come at a time when they are building their sense of identity, values and role in the world.

The risks posed by AI aren’t evenly shared. Research has found younger teens (ages 13–14) are more likely to trust AI companions. Also, teens with physical or mental health concerns are more likely to use AI companion apps, and those with mental health difficulties also show more signs of emotional dependence.

Is there a bright side to AI companions?

Are there any potential benefits for teens who use AI companions? The answer is: maybe, if we are careful.

Researchers are investigating how these technologies might be used to support social skill development.

One study of more than 10,000 teens found using a conversational app specifically designed by clinical psychologists, coaches and engineers was associated with increased wellbeing over four months.

While the study didn’t involve the level of human-like interaction we see in AI companions today, it does offer a glimpse of some potential healthy uses of these technologies, as long as they are developed carefully and with teens’ safety in mind.

Overall, there is very little research on the impacts of widely available AI companions on young people’s wellbeing and relationships. Preliminary evidence is short-term, mixed, and focused on adults.

We’ll need more studies, conducted over longer periods, to understand the long-term impacts of AI companions and how they might be used in beneficial ways.

What can we do?

AI companion apps are already being used by millions of people globally, and this usage is predicted to increase in the coming years.

Australia’s eSafety Commissioner recommends parents talk to their teens about how these apps work and the difference between artificial and real relationships, and support their children in building real-life social skills.

School communities also have a role to play in educating young people about these tools and their risks. They may, for instance, integrate the topic of artificial friendships into social and digital literacy programs.

While the eSafety Commissioner advocates for AI companies to integrate safeguards into their development of AI companions, it seems unlikely any meaningful change will be industry-led.

The Commissioner is moving towards increased regulation of children’s exposure to harmful, age-inappropriate online material.

Meanwhile, experts continue to call for stronger regulatory oversight, content controls and robust age checks.

The Conversation

Craig Olsson receives funding from The National Health and Medical Research Council and the Australian Research Council.

Liz Spry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Teens are increasingly turning to AI companions, and it could be harming them – https://theconversation.com/teens-are-increasingly-turning-to-ai-companions-and-it-could-be-harming-them-261955

Is it true foods with a short ingredient list are healthier? A nutrition expert explains

Source: The Conversation – Global Perspectives – By Margaret Murray, Senior Lecturer, Nutrition, Swinburne University of Technology

Hryshchyshen Serhii/Shutterstock

At the end of a long day, who has time to check the detailed nutrition information on every single product they toss into their shopping basket?

To eat healthily, some people prefer to stick to a simple rule: choose products with a short ingredient list. The idea is that foods with just a few ingredients are less processed, more “natural” and therefore healthy.

But is this always the case? Here’s what the length of an ingredient list can and can’t tell you about nutrition – and what else to look for.

How ingredient lists work

You can find an ingredient list on most packaged food labels, telling you the number and type of ingredients involved in making that food.

In Australia, packaged food products must follow certain rules set by the Australian and New Zealand Food Standards Code.

Ingredients must be listed in order of ingoing weight. This means items at the beginning of the list are those that make up the bulk of the product. Those at the end make up the least.

Food labels also include a nutrition information panel, which tells you the quantity of key nutrients (energy, protein, total carbohydrates, sugars, total fat, saturated fat and sodium) per serving.

This panel also tells you the content per 100 grams or millilitres, which lets you work out what percentage of the product each nutrient makes up, and compare products with different serving sizes.

Whole foods can be packaged, too

Products with just one, two or three items in their ingredient list are generally in a form that closely reflects the food when it was taken from the farm. So even though they come in packaging, they could be considered whole foods.

“Whole foods” are those that have undergone zero to minimal processing, such as fresh fruit and vegetables, lentils, legumes, whole grains such as oats or brown rice, seeds, nuts and unprocessed meat and fish.

To support overall health, the Australian Dietary Guidelines recommend eating whole foods and limiting those that are highly processed.

Many whole foods, such as fresh fruits and vegetables, don’t have an ingredient list because they don’t come in a packet. But some do, including:

  • canned or frozen vegetables, such as a tin of black beans or frozen peas

  • canned fish, for example, tuna in springwater

  • plain Greek yoghurt.

These sorts of food items can contribute to a healthy, balanced diet every day.

What is an ultra-processed food?

A shorter ingredient list also means the product is less likely to be an ultra-processed food.

This describes products made using industrial processes that combine multiple ingredients, often including colours, flavours and other additives. They are hyperpalatable, packaged and designed for convenience.

Ultra-processed foods often have long ingredient lists, due to added sugars (such as dextrose), modified oils, protein sources (for example, soya protein isolate) and cosmetic additives – such as colours, flavours and thickeners.

Some examples of ultra-processed foods with long ingredient lists include:

  • meal-replacement drinks

  • plant-based meat imitations

  • some commercial bakery items, including cookies or cakes

  • instant noodle snacks

  • energy or performance drinks.

If a food is heavily branded and marketed it’s more likely to be an ultra-processed food – a created product, rather than a whole food that hasn’t changed much since the farm.

Nutrition is more than a number

Choosing products with a shorter ingredient list can work as a general rule of thumb. But other factors matter too.

The length of an ingredient list doesn’t tell us anything about the food’s nutritional content, so it’s important to consider the type of ingredients as well.

Remember that items are listed in order of their ingoing weight, so if sugar is second or third on the list, there is probably a fair bit of added sugar.

For instance, a food product may have only a few ingredients, but if the first, second or third is a type of fat, oil or sugar, then it may not be an ideal choice for every day.

You can also check the nutrition information panel. Use the “per serve” column to check the nutrients you’d get from eating one serve of the food. If you want to compare the amount of a nutrient in two different foods, it’s best to look at the per 100g/mL column.
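To make the per-100g comparison above concrete, here is a minimal arithmetic sketch. The figures are invented for illustration and do not come from any real product:

```python
# Hypothetical example of comparing two products fairly.
# Per-serve figures can mislead when serve sizes differ,
# so we convert both to a per-100g basis first.

def per_100g(amount_per_serve: float, serve_size_g: float) -> float:
    """Convert a per-serve nutrient amount to a per-100g amount."""
    return amount_per_serve / serve_size_g * 100

# Product A: 9 g of sugar in a 30 g serve
# Product B: 12 g of sugar in a 50 g serve
sugar_a = per_100g(9, 30)   # 30.0 g per 100 g
sugar_b = per_100g(12, 50)  # 24.0 g per 100 g

# Despite the larger per-serve figure, Product B has less sugar per 100 g.
print(f"Product A: {sugar_a:.1f} g/100g, Product B: {sugar_b:.1f} g/100g")
```

This is why the per-100g/mL column, rather than the per-serve column, is the right one to use when comparing two foods.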

Some examples of foods with relatively short ingredient lists but high amounts of added fats and sugars include:

  • potato crisps

  • chocolate

  • soft drink.

Alcoholic beverages such as beer or wine may also have only a few ingredients, but this does not mean that they should be consumed every day.




Read more:
Even a day off alcohol makes a difference – our timeline maps the health benefits when you stop drinking


Non-food ingredients

You can also keep an eye out for cosmetic ingredients, which don’t have any nutritional value. These include colours, flavours, emulsifiers, thickeners, sweeteners, bulking agents and gelling agents.

It sometimes takes a bit of detective work to spot cosmetic ingredients in the list, as they can come under many different names (for example, stabiliser, malted barley extract, methylcellulose). But they are usually recognisable as non-food items.

If there are multiple non-food items included in an ingredient list, there is a good chance the food is ultra-processed and not ideal as an everyday choice.

The bottom line? Choosing foods with a shorter ingredient list can help guide you toward less processed foods. But you should also consider what types of ingredients are used, and maintain a varied diet.

The Conversation

Margaret Murray does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Is it true foods with a short ingredient list are healthier? A nutrition expert explains – https://theconversation.com/is-it-true-foods-with-a-short-ingredient-list-are-healthier-a-nutrition-expert-explains-257712

If we don’t feel we belong at our workplace, we perform worse

Source: The Conversation – (in Spanish) – By Alfonso Jesús Gil López, Professor of Business Organisation, Universidad de La Rioja

Imagine someone who arrives at work every day without motivation. They do their tasks, avoid conflict and have no serious complaints about their pay or working conditions, but something is missing. They don’t feel they belong at their workplace.

What is a sense of belonging?

In organisational psychology, the sense of belonging is linked to factors such as professional identity, recognition, psychological safety and corporate culture. In simple terms, it means being part of something – feeling at home even while at work.

A sense of belonging is not achieved with corporate t-shirts or occasional team-building exercises. It comes when people feel valued, respected and aligned with the organisation’s culture and purpose.

Having a contract or meeting targets is not enough: it is crucial to feel that we are taken into account and that our work has a clear purpose.

Why is it so important?

Experts report that employees with a strong sense of belonging show 56% higher performance, a 50% lower risk of turnover and 75% fewer days of absence. Studies also show that companies with inclusive cultures financially outperform their competitors by 35%.

Investing in belonging is not only an ethical decision: it is also strategically profitable.

Keys to fostering belonging

Although there is no single formula, some practices can be adapted to all kinds of organisations:

1. Active listening and genuine participation. Opening channels where people can voice ideas or concerns without fear of reprisal is essential. Internal surveys, suggestion boxes and open meetings are only valuable if they are followed by concrete action. Listening without acting breeds frustration.

2. Frequent, meaningful recognition. Feeling seen and valued does not depend on salary alone. Acknowledging everyday achievements, collaborative efforts and small improvements reinforces the message: “what you do matters to us”. It is not just about formal awards, but about cultivating a daily culture of genuine gratitude and recognition.

3. An inclusive culture. Nobody can belong in an environment where they have to hide who they are. Fostering diversity – of gender, age, origin, orientation or thought – and actively integrating it into the organisation is not only fair, it is smart. Authentic inclusion is the foundation of a culture of belonging.

4. Professional development with purpose. Offering growth opportunities shows the company believes in its people. But development must be personalised: not a matter of filling in forms or ticking off courses, but of building career paths that are meaningful and connected to each person’s interests.

5. Creating shared rituals and symbols. Rituals, celebrations and traditions strengthen bonds and collective identity. It is not just about throwing parties, but about creating meaningful moments that reinforce shared values. Symbols matter, however small.

What role do leaders play?

Leadership is decisive. Leaders must not only communicate a clear vision but lead by example. Empathy, accessibility, transparency and humility build trust. And trust is the ground where belonging flourishes.

In addition, fostering distributed leadership – giving teams the autonomy to make decisions – reinforces commitment and shared responsibility.

The challenge of hybrid work

With the rise of remote work, maintaining cohesion is a new challenge. At a distance, belonging cannot be taken for granted. That makes it essential to maintain constant communication, create informal online spaces and adapt team dynamics to the virtual environment.

Closeness does not depend on physical presence alone.

Belonging as business strategy

People who feel part of a company look after it, represent it and give their best, even in hard times. Fostering a sense of belonging requires consistency, continuity and authentic leadership. It is neither a fad nor an internal campaign: it is a long-term investment with a direct impact on performance, wellbeing and the sustainability of organisational culture.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Si no sentimos que pertenecemos a nuestro lugar de trabajo, rendimos menos – https://theconversation.com/si-no-sentimos-que-pertenecemos-a-nuestro-lugar-de-trabajo-rendimos-menos-256302

Why are we so fascinated by cracking joints? The science (and myth) behind a viral sound

Source: The Conversation – (in Spanish) – By Daniel Sanjuán Sánchez, Physiotherapist and teaching researcher, Faculty of Health Sciences, Universidad San Jorge; associate professor, Faculty of Nursing and Physiotherapy, Universitat de Lleida; member of the iPhysio research group, Universidad San Jorge

A person cracking their knuckles. Oporty786/Shutterstock

Cracking your fingers, feeling a “crack” in your knee as you crouch, or hearing your back or neck as you stretch in the morning… we have all experienced it. It is not a new phenomenon: as early as the 19th century, British doctors observed “spontaneous noises” in the joints.

Since the rise of chiropractic at the end of that century, the sound has been associated with a restoration of bodily balance. These sounds, linked to relief or discomfort, have also become protagonists in the digital world. Millions of people watch social media videos showing joint cracks in real time, almost as if they were special effects of the human body.

But what is it that actually makes the sound? Is it dangerous? And why do we find it so fascinating?

It’s not the bones that make the sound

Despite popular belief, when we crack our fingers or hear that crack, it is not bones knocking against each other. In most cases, the characteristic sound comes from the synovial joints, which are surrounded by a capsule containing synovial fluid.

When the joint is moved quickly or forcefully, the pressure inside the joint capsule drops sharply, causing the sudden formation of gas bubbles. This phenomenon is called joint cavitation. MRI research shows that the sound occurs during the formation of the bubble, not during its collapse, which challenges earlier theories.

A crack is not bones “going back into place”

When we hear a joint crack, it is not a bone “going back into place” or popping out of joint. What we actually hear is the result of a biomechanical process called cavitation, which is common in healthy people and generally harmless.

However, not all joint sounds are benign. If the crack is accompanied by pain, locking, weakness or instability, it could indicate a pathological condition such as chondropathy, a meniscal injury or joint hypermobility. These situations require assessment by a health professional.

Cracking your knuckles does not cause osteoarthritis

For decades, the belief has circulated that the habit of cracking one’s knuckles could cause joint wear or even osteoarthritis. This idea has been repeated countless times in family conversations, in medical consultations and even in the press.

However, the scientific evidence does not support it. In fact, a study published in The Journal of the American Board of Family Medicine (2011) analysed more than 200 older adults and found no relationship at all between knuckle cracking and the presence of osteoarthritis in the hands.

That said, although cracking your knuckles causes neither structural damage nor osteoarthritis, doing it compulsively or aggressively could irritate the soft tissues surrounding the joint, such as ligaments or tendons.

Moreover, although the gesture seems medically harmless, it is not always pleasant for those who hear it, and it can even cause discomfort or become a source of conflict.

The crack does not tell you whether a technique works

In physiotherapy, osteopathy and chiropractic, it is common for some manual techniques to produce a joint sound, or cavitation. This sound is often interpreted as a guarantee of therapeutic success, by professionals and patients alike. However, the evidence indicates that the sound on its own neither guarantees the effectiveness of the technique nor implies any real biomechanical correction.

Furthermore, manipulation has been shown to be effective even when no sound is produced, and an audible crack may bear no relation to meaningful clinical improvement. The joint sound during a manipulation should therefore not be considered a reliable marker of effectiveness.

The therapeutic benefits of joint manipulation appear to be related instead to neurophysiological mechanisms, such as the reflex muscle relaxation produced by manual therapy, rather than to the crack itself.

The spectacle of cracking

Platforms such as TikTok, YouTube and Instagram are saturated with videos of joint adjustments in which strategically placed microphones amplify the cracks, generating millions of views. This content fuses clinical aesthetics with entertainment, offering the sensation of an “instant fix” for the body.

However, the spectacle carries significant risks. Consuming medical content on social media that does not come from health professionals can foster unrealistic expectations about treatments and promote simplistic or passive approaches to complex musculoskeletal problems.

Importantly, this kind of content can reinforce dependence on passive techniques and downplay the value of active movement, education and therapeutic autonomy. The key to good musculoskeletal health lies not in the sound, but in movement. Treatment of back, neck or joint pain should not be based solely on passive techniques (such as manipulation or massage), but on active strategies that speed up recovery and help manage pain.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. ¿Por qué nos fascina tanto el crujido de los huesos? Ciencia (y mito) detrás de un sonido viral – https://theconversation.com/por-que-nos-fascina-tanto-el-crujido-de-los-huesos-ciencia-y-mito-detras-de-un-sonido-viral-258948

Never again Nagasaki: the dropping of the second atomic bomb

Source: The Conversation – (in Spanish) – By María Natividad Carpintero Santamaria, Research Professor, Nuclear Area, Instituto de Fusión Nuclear “Guillermo Velarde” (IFN GV) – UPM, Universidad Politécnica de Madrid (UPM)

Aftermath of the Second World War atomic bombing in a suburb four miles from the centre of Nagasaki, Japan. Everett Collection/Shutterstock

On 6 and 9 August we commemorate the anniversary of the atomic bombing of Japan that ended the Second World War, the most tragic armed conflict in human history. Since that moment, nuclear bombs have definitively shaped international political relations.

In 1995 I had the opportunity to attend the 50th anniversary of this event in Hiroshima, invited by the Pugwash Conference, whose director, Professor Joseph Rotblat, accepted the Nobel Peace Prize on Pugwash’s behalf that same year.

In 2024 the same prize was awarded to Shigemitsu Tanaka, representative of Nihon Hidankyo (the Japan Confederation of Atomic and Hydrogen Bomb Sufferers Organizations). Tanaka was a surviving victim – hibakusha – of the Nagasaki bombing.

It is now 80 years since this chapter of history, one we cannot and must not forget.

The first decision

Map of the missions that dropped the two atomic bombs.
Skimel/Wikimedia Commons, CC BY-SA

On 31 May 1945, following Germany’s unconditional surrender on 8 May, US president Harry Truman took the decision to drop on Japan, without prior warning, the two atomic bombs developed in the Manhattan Project: Little Boy, made of uranium, and Fat Man, made of plutonium.

The Target Committee, meeting in Washington, selected the following Japanese cities as preferred targets: Hiroshima, Kokura, Niigata and Kyoto, all of great military value for their weapons factories and strategic materials.

The atomic bombing was intended to reduce the number of human casualties, since the firebombing of Tokyo on 10 March that year had caused between 80,000 and 100,000 deaths. Moreover, although precise figures are difficult to establish, most historical sources agree that in the bloodletting of the battles of Okinawa and Iwo Jima close to 110,000 Japanese soldiers died, while the US military suffered 72,000 casualties, of whom 12,500 were killed or missing in action.

The US government had also calculated that, if the war continued, the economic costs of blockading Japan by sea and the massive bombing campaigns required, together with a simultaneous ground invasion, would raise military spending significantly.

The second city

After the bombing of Hiroshima on 6 August 1945, the United States government dropped the second atomic bomb. One reason for this second attack was that Japan had not surrendered within the following two days. Another was that it was already planned.

The dropping of the plutonium bomb, which had been tested on 16 July that year in the Alamogordo desert in New Mexico, took place amid serious complications of planning and weather forecasting.

The patrol consisted of five B-29s: the B-29 Bockscar carrying Fat Man, two reconnaissance planes and two more for relaying weather data. Kokura would be the target.

Enlisted flight crew of the Bockscar.
ASAF/Wikimedia Commons

On 9 August 1945, flying over the city, they found it covered in fog. This caused the B-29 to divert to Nagasaki, a city that had not initially been considered a preferred target. The reason is unclear, in keeping with the setbacks that dogged the mission from the first moment. Problems transmitting radio communications in an unexpected situation? Problems with the Bockscar’s fuel? Human error? We do not know.

Nagasaki did not have full visibility either, but it could be seen between the clouds. The Fat Man bomb exploded at 11:02 am, 500 metres above the northern part of the city, in Matsuyama-machi, with an estimated yield of 18 kilotons.

The complex topography of the city, set in a mountainous region, confined the force of the explosion to a single direction, destroying 30% of the buildings, with some areas far more severely damaged than others. Three days earlier, in Hiroshima, the bomb had exploded differently: because that city sits on a plateau, its destruction was almost isotropic – that is, equal in all directions – sweeping away more than 70% of the city.

View of the explosion over Nagasaki taken from the reconnaissance B-29.
U.S. Government/Wikimedia Commons

In Nagasaki, the bomb achieved a devastating result within minutes. The thermal wave, the blast wave and the initial radiation made establishing the number of victims extremely difficult. In 1989 the International Physicians for the Prevention of Nuclear War published a report with the following assessment of the bombing: 73,884 dead, 74,909 injured, 120,820 people left homeless, 18,409 houses damaged, 11,574 houses completely burned, 1,326 houses completely destroyed and 5,509 houses partially destroyed.

The damage caused by the atomic bombs was quantified not by the number of victims but by the destructive phenomenon of radiation.

On 15 August 1945, Emperor Hirohito made his voice heard by radio, informing his people that Japan was surrendering unconditionally and accepting the terms of the Potsdam Declaration. The announcement had a huge psychological impact on the population, who were hearing his voice for the first time, giving a human dimension to an emperor who was losing his ancestral divinity.

Some members of the armed forces – airmen, officers and commanders of the Imperial Navy – responded to the announcement with suicide. Among them was Admiral Takijiro Onishi, who with his own sword performed seppuku, or harakiri – the ritual suicide linked to the doctrine of Bushido – following the samurai ethical code of dying with honour.

On 2 September 1945, Japanese foreign minister Mamoru Shigemitsu, acting on behalf of the emperor, the Imperial Government and the Imperial General Headquarters, signed the instruments of surrender aboard the American battleship Missouri.

Present-day view of the city of Nagasaki.
Tomio344456/Wikimedia Commons, CC BY-SA

The atomic bombing of Japan set off a massive development of nuclear weapons. Yet we cannot and must not forget the names of Hiroshima and Nagasaki. We have a moral obligation to remember that these were two catastrophic attacks that must never, under any pretext, be repeated.

Were they repeated, the staggering scientific and technological advances of today’s nuclear weapons would ensure that neither victors nor vanquished remained.

The Conversation

María Natividad Carpintero Santamaria does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.

ref. Nunca más Nagasaki: el lanzamiento de la segunda bomba atómica – https://theconversation.com/nunca-mas-nagasaki-el-lanzamiento-de-la-segunda-bomba-atomica-262559

Visual Islamophobia: the geopolitical battlefield is also on video game screens

Source: The Conversation – (in Spanish) – By Antonio César Moreno Cantano, research group Seguridad, Desarrollo y Comunicación en la Sociedad Internacional de la UCM (UCM-971010-GR96/20), Universidad Complutense de Madrid

Since the advent of digital technology, video games – like film, television and literature before them – have become relevant actors in the ideological confrontation between the US and the Islamic world.

Every so often, news emerges of titles that have been censored, or outright banned, because some of their missions, narratives or graphics “violated” elements of a country’s political, religious or cultural identity.

This interactive format constitutes a new kind of “visual geopolitics”: games not only convey a message through narrative or reproduce real and fictional settings with spectacular graphics, they also create an emotional “simulation space” that anticipates, or brings directly home, everything that reaches us through other channels and sources of information. This often allows political power to be reflected in games.

The United States and Iran face off in the ‘soft war’

Iran stands out as one of the most active countries in countering US discourse and narratives in the cultural sphere. This has led it to ban famous titles such as Battlefield 3, which set one of its missions in a US Marine occupation of Tehran, and to create numerous video games that reject the Western perspective on the “War on Terror”.

These manoeuvres, which also extend to Israel, are linked to the concept of soft war, a euphemism for the spread of foreign ideas, culture and influence through information and communication technologies.

In this vein, Iranian media recently announced a new video game, named True Promise, glorifying the missile launches against Israeli territory in April and October 2024.

Playing as a Palestinian soldier

Along similar lines sits the pro-Palestinian video game Fursan al-Aqsa, which has drawn strong criticism and censorship for its content (most recently in the United Kingdom, where it was removed from the digital platform Steam) and has been branded terrorist and antisemitic by Italian MEPs. As its creator, Nidal Nijm, replied: “it is very subjective to call playing as a Palestinian soldier against Israeli soldiers terrorist propaganda”.

These productions are a response to the large number of Western video games that, with clear ideological intent or scant interest in the cultural elements of the Arab and Islamic world, have stereotyped it as an “enemy” that is “intolerant”, “violent” and “terrorist”, as some studies on the subject indicate.

This video-game Islamophobia is evident in several examples, some of them very recent. In November 2021, a Call of Duty: Vanguard player complained on his Twitter account that one scene showed several pages of the Quran strewn on the floor. The title’s developers immediately issued a public apology: “Call of Duty is made for everyone. Last week, content insensitive to the Muslim community was mistakenly included, and has since been removed from the game”.

Some time earlier, another user warned that in Call of Duty: Modern Warfare 2, one map featured verses from the Quran engraved on the frame of a painting hanging in a bathroom, considered an entirely inappropriate place for this type of religious message.

Alongside the Quran, another of Islam’s most sacred symbols is the Kaaba (inside the Great Mosque of Mecca), considered the “house of Allah” and the most important pilgrimage site for Muslims. In the fantasy action-adventure game Devil May Cry 3, the Kaaba appeared as the entrance to a demonic fortress, causing great unease among many players of that faith.

Protest from Cairo’s Al-Azhar University

Fortnite has also sparked controversy. It is one of the world’s most famous battle royale games – a genre in which a large number of players compete on a progressively shrinking map until only one player or team remains – with more than 350 million registered accounts in 2021. Al-Azhar University in Cairo issued a note of protest against the title because, to advance to another level and win more prizes, players had to destroy a building simulating the Kaaba: “This affects young people’s beliefs and self-respect and underestimates the importance of their sanctities. The centre therefore reiterates its ban on all electronic games that foment violence or contain false ideas that distort the faith or show contempt for religious beliefs”.

This rejection spread to other countries such as Indonesia, where the Minister of Tourism and Creative Economy called for Fortnite’s removal. The response from Epic Games, the company behind it, was that the building was the creation of an individual player and that “our team respects all religions”.

Far from being resolved, these kinds of controversy and simplification in the video game world are on the rise. European Union reports warn that many platforms, such as Steam, Discord and Twitch, contribute to online radicalisation, whether in favour of far-right groups (which make Islamophobia one of their Trojan horses) or as a tool of terrorist gamification (ISIS).

Against such practices, all that remains is digital literacy, respect for cultural diversity, and demanding responsibility from the big video game studios and companies, from Washington to Tehran.

The Conversation

Antonio César Moreno Cantano does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

ref. Islamofobia visual: el campo de batalla de la geopolítica también está en las pantallas de los videojuegos – https://theconversation.com/islamofobia-visual-el-campo-de-batalla-de-la-geopolitica-tambien-esta-en-las-pantallas-de-los-videojuegos-250541