When you buy a new electronic appliance, shoes, medicines or even some food items, you often find a small paper sachet with the warning: “silica gel, do not eat”.
What exactly is it, is it toxic, and can you use it for anything?
The importance of desiccants
That little sachet is a desiccant – a type of material that removes excess moisture from the air.
It’s important during the transport and storage of a wide range of products because we can’t always control the environment. Humid conditions can cause damage through corrosion, decay, and the growth of mould and other microorganisms.
This is why manufacturers include sachets with desiccants to make sure you receive the goods in pristine condition.
The most common desiccant is silica gel. The small, hard and translucent beads are made of silicon dioxide (like most sand or quartz) – a hydrophilic or water-loving material. Importantly, the beads are porous on the nanoscale, with pore sizes only 15 times larger than the radius of their atoms.
These pores have a capillary effect, meaning they condense and draw moisture into the bead similar to how trees transport water through the channelled structures in wood.
In addition, this sponge-like porosity makes their surface area very large. A single gram of silica gel can have a surface area of up to 700 square metres – almost four tennis courts – making the beads exceptionally efficient at capturing and storing water.
Is silica gel toxic?
The “do not eat” warning is easily the most prominent text on silica gel sachets.
According to health professionals, most silica beads found in these sachets are non-toxic and don’t present the same risk as silica dust, for example. They mainly pose a choking hazard, which is reason enough to keep them away from children and pets.
However, if silica gel is accidentally ingested, it’s still recommended to contact health professionals to determine the best course of action.
Some variants of silica gel contain a moisture-sensitive dye. One particular variant, based on cobalt chloride, is blue when the desiccant is dry and turns pink when saturated with moisture. While the dye is toxic, in desiccant pellets it is present only in a small amount – approximately 1% of the total weight.
Indicating silica gel with cobalt chloride – ‘fresh’ on the left, ‘used’ on the right. Reza Rio/Shutterstock
Desiccants come in other forms, too
Apart from silica gel, a number of other materials are used as moisture absorbers and desiccants. These include zeolites, activated alumina and activated carbon – all materials engineered to be highly porous.
Another desiccant you’ll often see in moisture absorbers for larger spaces, such as pantries or wardrobes, is calcium chloride. A type of salt, it typically comes as a box of powder or crystals and is sold in most hardware stores.
Kitchen salt – sodium chloride – attracts water and easily becomes lumpy. Calcium chloride works in the same way, but has an even stronger hygroscopic effect and “traps” the water through a hydration reaction. Once the salt is saturated, you’ll see liquid separating in the container.
Closet and pantry dehumidifiers like this one typically contain calcium chloride which binds water. Healthy Happy/Shutterstock
I found something that doesn’t seem to be silica gel – what is it?
Some food items such as tortilla wraps, noodles, beef jerky, and some medicines and vitamins contain slightly different sachets, labelled “oxygen absorbers”.
These small packets don’t contain desiccants. Instead, they have chemical compounds that “scavenge” or bond oxygen.
Their purpose is similar to desiccants – they extend the shelf life of food products and sensitive chemicals such as medicines. But they do so by directly preventing oxidation. When some foods are exposed to oxygen, their chemical composition changes and can lead to decay (apples turning brown when cut is an example of oxidation).
There is a whole range of compounds used as oxygen absorbers. These chemicals have a stronger affinity for oxygen than the substance they protect. They range from simple compounds such as iron, which “rusts” by using up oxygen, to more complex ones such as plastic films that work when exposed to light.
Some of the sachets in your products are oxygen absorbers, not desiccants – but they may look similar. Sergio Yoneda/Shutterstock
Can I reuse a desiccant?
Although desiccants and dehumidifiers are considered disposable, you can reuse them relatively easily.
To “recharge” or dehydrate silica gel, you can place it in an oven at approximately 115–125°C for 2–3 hours, although you shouldn’t do this if it’s in a plastic sachet that could melt in the heat.
After dehydration, silica gel sachets may be useful for drying small electronic items (like your phone after you accidentally dropped it into water), keeping your camera dry, or preventing your family photos and old films from sticking to each other.
Kamil Zuber does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
We all like to imagine we’re ageing well. Now a simple blood or saliva test promises to tell us by measuring our “biological age”. And then, as many have done, we can share how “young” we really are on social media, along with our secrets to success.
While chronological age is how long you have been alive, measures of biological age aim to indicate how old your body actually is, purporting to measure “wear and tear” at a molecular level.
The appeal of these tests is undeniable. Health-conscious consumers may see their results as reinforcing their anti-ageing efforts, or a way to show their journey to better health is paying off.
But how good are these tests? Do they actually offer useful insights? Or are they just clever marketing dressed up to look like science?
How do these tests work?
Over time, the chemical processes that allow our body to function, known as our “metabolic activity”, lead to damage and a decline in the activity of our cells, tissues and organs.
Biological age tests aim to capture some of these changes, offering a snapshot of how well, or how poorly, we are ageing on a cellular level.
Our DNA is also affected by the ageing process. In particular, chemical tags (methyl groups) attach to our DNA and affect gene expression. These changes occur in predictable ways with age and environmental exposures, in a process called methylation.
Research studies have used “epigenetic clocks”, which measure the methylation of our genes, to estimate biological age. By analysing methylation levels at specific sites in the genome from participant samples, researchers apply predictive models to estimate the cumulative wear and tear on the body.
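To make this concrete, here is a minimal sketch in Python of how such a predictive model can be trained. The data is simulated, and the elastic-net model, noise levels and number of "age-related" sites are illustrative assumptions; the sketch shows the principle behind elastic-net-based clocks rather than reproducing any published one.

```python
# Minimal sketch of an "epigenetic clock": a penalised regression that
# predicts age from DNA methylation levels. All data here is simulated;
# published clocks are trained on thousands of real samples.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_cpg_sites = 500, 2000

ages = rng.uniform(20, 80, n_samples)
# Simulate methylation "beta values" (0..1); a subset of sites drifts
# predictably with age, the rest are noise.
X = rng.uniform(0, 1, (n_samples, n_cpg_sites))
age_related = rng.choice(n_cpg_sites, 300, replace=False)
X[:, age_related] += 0.004 * ages[:, None]  # slow, age-linked drift
X = np.clip(X, 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, ages, random_state=0)

# The elastic net selects a sparse panel of informative CpG sites.
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_train, y_train)
predicted = clock.predict(X_test)

# "Age acceleration": predicted biological age minus chronological age.
acceleration = predicted - y_test
print(f"Mean absolute error: {np.abs(acceleration).mean():.1f} years")
print(f"CpG sites used by the clock: {(clock.coef_ != 0).sum()}")
```

The final quantity – predicted age minus chronological age – is what commercial reports translate into being “older” or “younger” than your years.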
What does the research say about their use?
Although the science is rapidly evolving, the evidence underpinning the use of epigenetic clocks to measure biological ageing in research studies is strong.
Studies have shown epigenetic biological age estimation is a better predictor of the risk of death and ageing-related diseases than chronological age.
Epigenetic clocks also have been found to correlate strongly with lifestyle and environmental exposures, such as smoking status and diet quality.
In addition, they have been found to be able to predict the risk of conditions such as cardiovascular disease, which can lead to heart attacks and strokes.
Taken together, a growing body of research indicates that at a population level, epigenetic clocks are robust measures of biological ageing and are strongly linked to the risk of disease and death.
But how good are these tests for individuals?
While these tests are valuable when studying populations in research settings, using epigenetic clocks to measure the biological age of individuals is a different matter and requires scrutiny.
For testing at an individual level, perhaps the most important consideration is the “signal to noise ratio” (or precision) of these tests: whether repeated measurements of the same sample yield consistent results or widely differing ones.
A study from 2022 found results for identical samples deviated by up to nine years. So the same sample from a 40-year-old may indicate a biological age as low as 35 years (a cause for celebration) or as high as 44 years (a cause for anxiety).
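A toy simulation illustrates the precision problem. The noise level below is an assumption chosen for illustration, not a measured property of any commercial test:

```python
# Toy test-retest experiment: one sample from a 40-year-old, measured
# repeatedly by a hypothetical clock with normally distributed technical
# noise. The noise standard deviation is an assumed figure.
import numpy as np

rng = np.random.default_rng(42)
true_biological_age = 40.0
noise_sd = 2.5  # assumed technical noise, in years

readings = true_biological_age + rng.normal(0, noise_sd, size=20)
print(f"lowest reading:  {readings.min():.1f} years")
print(f"highest reading: {readings.max():.1f} years")
print(f"spread: {readings.max() - readings.min():.1f} years")
```

Even with a modest noise level, repeated readings of the same sample can easily span several years, which is why precision matters so much for individual use.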
While these tests have improved significantly over the years, their precision varies considerably between commercial providers. So depending on who you send your sample to, your estimated biological age may differ markedly.
Another limitation is there is currently no standardisation of methods for this testing. Commercial providers perform these tests in different ways and have different algorithms for estimating biological age from the data.
As you would expect for commercial operators, providers don’t disclose their methods. So it’s difficult to compare companies and determine who provides the most accurate results – and what you’re getting for your money.
A third limitation is that while epigenetic clocks correlate well with ageing, they are simply a “proxy” and are not a diagnostic tool.
In other words, they may provide a general indication of ageing at a cellular level. But they don’t offer any specific insights about what the issue may be if someone is found to be “ageing faster” than they would like, or what they’re doing right if they are “ageing well”.
So regardless of the result of your test, all you’re likely to get from the commercial provider of an epigenetic test is generic advice about what the science says is healthy behaviour.
Are they worth it? Or what should I do instead?
While companies offering these tests may have good intentions, remember their ultimate goal is to sell you these tests and make a profit. And at a cost of around A$500, they’re not cheap.
While the idea of using these tests as a personalised health tool has potential, it is clear that we are not there yet.
For this to become a reality, tests will need to become more reproducible, standardised across providers, and validated through long-term studies that link changes in biological age to specific behaviours.
So while one-off tests of biological age make for impressive social media posts, for most people they represent a significant cost and offer limited real value.
The good news is we already know what we need to do to increase our chances of living longer and healthier lives.
We don’t need to know our biological age in order to implement changes in our lives right now to improve our health.
Hassan Vally does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Bats are often cast as the unseen night-time stewards of nature, flitting through the dark to control pest insects, pollinate plants and disperse seeds. But behind their silent contributions lies a remarkable and underappreciated survival strategy: seasonal fattening.
Much like bears and squirrels, bats around the world bulk up to get through hard times – even in places where you might not expect it.
In a paper published today in Ecology Letters, we analysed data from bat studies around the world to understand how bats use body fat to survive seasonal challenges, whether it’s a freezing winter or a dry spell.
The surprising conclusion? Seasonal fattening is a global phenomenon in bats, not just limited to those in cold climates.
Even bats in the tropics, where it’s warm all year, store fat in anticipation of dry seasons when food becomes scarce. That’s a survival strategy that’s been largely overlooked. But it may be faltering as the climate changes, putting entire food webs at risk.
Climate shapes fattening strategies
We found bats in colder regions predictably gain more weight before winter.
But in warmer regions with highly seasonal rainfall, such as tropical savannas or monsoonal forests, bats also fatten up. In tropical areas, it’s not cold that’s the enemy, but the dry season, when flowers wither, insects vanish and energy is hard to come by.
The extent of fattening is impressive. Some species increased their body weight by more than 50%, which is a huge burden for flying animals that already use a lot of energy to move around. This highlights the delicate balancing act bats perform between storing energy and staying nimble in the air.
In colder climates, female bats used their fat reserves more sparingly than males – a likely adaptation to ensure they have enough energy left to raise young when spring returns. Since females typically emerge from hibernation to raise their young, conserving fat through winter can directly benefit their reproductive success.
Interestingly, this sex-based difference vanished in warmer climates, where fat use by males and females was more similar, likely because more food is available in warmer climates. It’s another clue that climate patterns intricately shape behaviour and physiology.
Climate change is shifting the rules
Beyond the biology, our study points to a more sobering trend. Bats in warm regions appear to be increasing their fat stores over time. This could be an early warning sign of how climate change is affecting their survival.
Climate change isn’t just about rising temperatures. It’s also making seasons more unpredictable.
Bats may be storing more energy in advance of dry seasons that are becoming longer or harder to predict. That’s risky, because it means more foraging, more exposure to predators and potentially greater mortality.
The implications can ripple outward. Bats help regulate insect populations, fertilise crops and maintain healthy ecosystems. If their survival strategies falter, entire food webs could feel the effects.
Fat bats, fragile futures
Our study changes how we think about bats. They are not just passive victims of environmental change but active strategists, finely tuned to seasonal rhythms. Yet their ability to adapt has limits, and those limits are being tested by a rapidly changing world.
By understanding how bats respond to climate, we gain insights into broader ecosystem resilience. We also gain a deeper appreciation for one of nature’s quiet heroes – fattening up, flying through the night and holding ecosystems together, one wingbeat at a time.
Nicholas Wu was the lead author of a funded Australian Research Council Linkage Grant awarded to Christopher Turbill at Western Sydney University.
Political language is sometimes used to describe the orientations of the Vatican. When the late Pope Francis defended migrants, it was suggested that he was a “left-wing” pope. Today, people are wondering whether Pope Leo XIV will adopt a “progressive” path or, on the contrary, a philosophy on immigration different from that of Francis.
To answer this question, it is helpful to look at what successive popes have said about welcoming foreigners. We can see that they have defended not only migrants but also a right of immigration. Their approach has been universalist, rejecting all discrimination. Could it change?
Supporting the right of immigration
During the period between the second world war and the election of Leo XIV, the Vatican had six popes. The first, Pius XII (1939-1958), seems to have been more in favour of immigration than the United Nations. In 1948, when the UN adopted the Universal Declaration of Human Rights, emigration was enshrined as a fundamental right: “Everyone has the right to leave any country, including his own.”
This wording does not mention the right to enter a country that is not one’s own, and Pius XII called this vagueness into question. In his 1952 Christmas message, he argued that it resulted in a situation in which “the natural right of every person not to be prevented from emigrating or immigrating is practically annulled, under the pretext of a falsely understood common good.”
Pius XII believed that immigration was a natural right, but linked it to poverty. He therefore asked governments to facilitate the migration of workers and their families to “regions where they could more easily find the food they needed.” He deplored the “mechanisation of minds” and called for a softening “in politics and economics, of the rigidity of the old framework of geographical boundaries.”
In the Apostolic Constitution on the Exiled Family, also in 1952, he wrote about why migration was essential for the Church.
Pope John XXIII (1958-1963) extended this argument in two encyclicals: Mater et magistra in 1961 and Pacem in terris in 1963. Whereas Pius XII had thought that the natural right to emigrate only applied to people in need, John XXIII included everyone: “Among man’s personal rights we must include his right to enter a country in which he hopes to be able to provide more fittingly for himself and his dependents.” (Pacem in terris 106.)
A refusal of discrimination
For Paul VI (1963-1978), the Christian duty to serve migrant workers must be fulfilled without discrimination. In a 1965 encyclical, he maintained that “a special obligation binds us to make ourselves the neighbour of every person without exception and of actively helping him when he comes across our path, whether he be an old person abandoned by all, a foreign labourer unjustly looked down upon, a refugee…” He also stated the requirement “to assist migrants and their families.” (Gaudium et spes.)
John Paul II (1978-2005) made numerous statements in favour of immigration. For example, his speech for World Migration Day in 1995 was devoted to undocumented migrants. He wrote:
“The Church considers the problem of illegal migrants from the standpoint of Christ, who died to gather together the dispersed children of God (cf Jn 11:52), to rehabilitate the marginalized and to bring close those who are distant, in order to integrate all within a communion that is not based on ethnic, cultural or social membership.”
Benedict XVI (2005-2013) acknowledged the “feminization of migration” and the fact that “female emigration tends to become more and more autonomous. Women cross the border of their homeland alone in search of work in another country.” (Message, 2006.)
“The Church encourages the ratification of the international legal instruments that aim to defend the rights of migrants, refugees and their families.” (Message, 2007.)
Pope Francis (2013-2025) embraced this globally inclusive tradition. His encyclical on “Fraternity and Social Friendship” calls for “recognizing that all people are our brothers and sisters, and seeking forms of social friendship that include everyone.” (Fratelli tutti, 2020.)
He insisted that “for a healthy relationship between love of one’s native land and a sound sense of belonging to our larger human family, it is helpful to keep in mind that global society is not the sum total of different countries, but rather the communion that exists among them.” (Fratelli tutti, 2020.)
On the question of migration, Francis maintained that “our response to the arrival of migrating persons can be summarized by four words: welcome, protect, promote and integrate.” (Fratelli tutti, 2020.)
Not a political preference
It appears that the pontificate of Leo XIV will reflect a similar commitment. However, this cannot be explained by political preference, or by personal and family history (the US-born pope is the grandson of immigrants and became a naturalized citizen of Peru). Popes do not defend immigrants because they are left-wing or progressive, but because they are at the head of an institution whose raison d’être is “to act in continuity with the mission of Christ.”
For Christians, welcoming foreigners is meant to be a fundamental duty, a condition of salvation. In the gospel, Matthew has Jesus say that this is one of the criteria for the Last Judgement. Those who welcome the stranger will receive the kingdom of God “as an inheritance.” Others will receive eternal punishment:
“For I was hungry and you gave me no food, I was thirsty, and you gave me no drink, I was a stranger, and you did not welcome me, naked and you did not clothe me, sick and in prison and you did not visit me.” [Matthew 25:42-43]
The stranger is at the heart of the New Testament revolution. Of course, the imperatives of hospitality are found in both the Old and New Testaments. It is a hospitality that is demanding:
“You shall treat the stranger who sojourns with you as the native among you, and you shall love him as yourself, for you were strangers in the land of Egypt.” [Leviticus 19:34]
and unconditional:
“Show hospitality without complaining.” [1 Peter 4:9]
But the New Testament revolution endows Christianity with a universal aspiration: human beings, by virtue of their origin, all become brothers. Belonging to Christianity itself is reflected by faith in this universality:
“We know that we love the children of God when we love God.” [1 John 5:2]
With this message, Christianity blurs the distinction between strangers and relatives:
“You are no longer strangers and foreigners, but fellow citizens with the saints and members of God’s household.” [Ephesians 2:19]
“They reside each in his own country, but as dwelling strangers. Every foreign land is a homeland to them, and every homeland is a foreign land to them.” [Letter to Diognetus]
In his very first homily, Leo XIV suggested that the Christian faith might seem “absurd, reserved for the weak or the less intelligent.” But the institution of which he declared himself a “faithful administrator” has been preaching “universal mercy” for over 2,000 years.
The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no affiliations other than their research organisation.
Predicting whether or not companies will be successful is crucial for guiding investment decisions and designing effective economic policies. However, past research on high-growth firms – enterprises thought to be key for driving economic development – has typically shown low predictive accuracy, suggesting that growth may be largely random. Does this assumption still hold in the AI era, in which vast amounts of data and advanced analytical methods are now available? Can AI techniques overcome difficulties in predicting high-growth firms? These questions were raised in a chapter I co-authored in the De Gruyter Handbook of SME Entrepreneurship, which reviewed scientific contributions on firm growth prediction with AI methods.
According to the Eurostat-OECD (Organisation for Economic Co-operation and Development) definition, high-growth firms are businesses with at least 10 employees in the initial growth period and “average annualised growth greater than 20% per annum, over a three year period”. Growth can be measured by the firm’s number of employees or by its turnover. A subset of high-growth firms, known as “gazelles”, are young businesses – typically start-ups – that are up to five years old and experience fast growth.
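As a concrete illustration, here is a minimal sketch in Python applying this definition to hypothetical firm records. The firms and numbers are invented, and growth is computed on headcount, though turnover works the same way:

```python
# Sketch of the Eurostat-OECD high-growth test: average annualised growth
# above 20% over a three-year window, starting from at least 10 employees.
from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    age_years: int        # years since founding
    employees_start: int  # headcount at the start of the window
    employees_end: int    # headcount three years later

def is_high_growth(firm: Firm) -> bool:
    if firm.employees_start < 10:
        return False
    # Average annualised (compound) growth over the three-year window.
    cagr = (firm.employees_end / firm.employees_start) ** (1 / 3) - 1
    return cagr > 0.20

def is_gazelle(firm: Firm) -> bool:
    # Gazelles are high-growth firms up to five years old.
    return firm.age_years <= 5 and is_high_growth(firm)

firms = [
    Firm("SteadyCo", 12, 40, 46),    # ~5% a year: not high-growth
    Firm("ScaleUp", 8, 25, 48),      # ~24% a year: high-growth
    Firm("YoungRocket", 4, 12, 30),  # ~36% a year and young: a gazelle
]
for f in firms:
    print(f"{f.name}: high-growth={is_high_growth(f)}, gazelle={is_gazelle(f)}")
```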
High-growth firms drive development, innovation and job creation. Identifying firms with high-growth potential enables investors, start-up incubators, accelerators, large companies and policymakers to spot potential opportunities for investment, strategic partnerships and resource allocation at an early stage. Forecasting outcomes for start-ups is more challenging than doing so for large companies due to limited historical data, high uncertainty, and reliance on qualitative factors like founder experience and market fit.
How random is firm growth?
Accurate growth forecasting is especially crucial given the high failure rate of start-ups. One in five start-ups fail in their first year, and two thirds fail within 10 years. Some start-ups can also contribute significantly to job creation: research analysing data from Spanish and Russian firms between 2010 and 2018 has shown that while “gazelles” represented only about 1-2% of all businesses in both countries, they were responsible for approximately 14% of employment growth in Russia and 9% in Spain.
In an effort to understand why some firms grow faster than others, researchers have looked into various factors including the personality of entrepreneurs, competitive strategy, available resources, market conditions and macroeconomic environment. These factors, however, only explained a small portion of the variation in firm growth and were limited in their practical application. This led to the suggestion that predicting the growth of new businesses is like playing a game of chance. Another viewpoint argued that the problem of growth prediction might stem from the methods employed, suggesting an “illusion of randomness”.
As firm growth is a complex, diverse, dynamic and non-linear process, adopting a new set of methods and approaches, such as those driven by big data and AI, can shed new light on the growth debate and forecasting.
AI offers new opportunities for predicting high-growth firms
AI methods are being increasingly adopted to forecast firm growth. For example, 70% of venture capital firms are adopting AI to increase internal productivity and to speed up the sourcing, screening, classification and monitoring of high-potential start-ups. Crunchbase, a company data platform, claims that internal testing has shown its AI models can predict start-up success with “95% precision” by analysing thousands of signals. These developments promise to fundamentally change how investors and businesses approach decision-making in private markets.
The advantages of AI techniques lie in their ability to process a far greater volume, variety and velocity of data about businesses and their environments compared to traditional statistical methods. For example, machine learning methods such as random forest (RF) and least absolute shrinkage and selection operator (LASSO) help identify key variables affecting business outcomes in datasets with a large number of predictors. A “fused” large language model has been shown to predict start-up success using both structured (organized in tables) fundamental information and unstructured (unorganized and more complex) textual descriptions. AI techniques help enhance the accuracy of firm growth predictions, identify the most important growth factors and minimize human biases. As some scholars have noted, the improved prediction indicates that perhaps firm growth is less random than previously thought. Furthermore, the ability to capture data in real time is especially valuable in fast-paced, dynamic environments, such as high-technology industries.
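To give a flavour of the variable-selection step, here is a short sketch on synthetic data. The feature names, the data and the use of an L1-penalised logistic regression as the classification analogue of LASSO are all illustrative assumptions, not the setup of any particular study:

```python
# Two variable-selection approaches on synthetic firm data: a random
# forest ranks predictors by importance, while an L1 (LASSO-style)
# penalty shrinks uninformative coefficients to exactly zero.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["founder_experience", "rd_intensity", "initial_employees",
            "sector_growth", "web_traffic", "office_floor_area"]
X = rng.normal(size=(1000, len(features)))
# By construction, high growth depends mainly on the first two features.
signal = 1.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=1000)
y = (signal > 0).astype(int)  # 1 = high-growth firm

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

for name, imp, coef in zip(features, rf.feature_importances_, lasso.coef_[0]):
    print(f"{name:20s} RF importance={imp:.3f}  L1 coef={coef:+.2f}")
```

Both methods should flag founder_experience and rd_intensity as the informative predictors and discard the rest, which is exactly the role they play in large, messy firm-level datasets.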
Challenges remain
Despite AI’s rapid progress, there is still considerable potential for advancement. Although the prediction of high-growth firms has been improved with modern AI techniques, studies indicate that it continues to be a challenge. For instance, start-up success often depends on rapidly changing and intangible factors that are not easily captured by data. Further methodological advances, such as incorporating a broader range of predictors, diverse data sources and more sophisticated algorithms, are recommended.
One of the main challenges for AI methods is their ability to offer explanations for the predictions they make. Predictions generated by complex deep learning models resemble a “black box”, with the causal mechanisms that transform input into output remaining unclear. Producing more explainable AI has become one of the key objectives set by the research community. Understanding what is explainable and what is not (yet) explainable with the use of AI methods can better guide practitioners in identifying and supporting high-growth firms.
While start-ups offer the potential for significant investment returns, they carry considerable risks, making careful selection and accurate prediction crucial. As AI models evolve, they will increasingly integrate diverse and unstructured data sources and real-time market signals to detect early indicators of potential success. Advancements are expected to further enhance the scalability, accuracy, speed and transparency of AI-driven predictions, reshaping how high-growth firms are identified and supported.
Tatiana Beliaeva does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research organisation.
Since the beginning of the century, the number of satellites orbiting Earth has increased more than 800%, from fewer than 1,000 to more than 9,000. This profusion has had a number of strange and disturbing repercussions. One of them is that companies are selling data from satellite images of parking lots to financial analysts. Analysts then use this information to help gauge a store’s foot traffic, compare a retailer to competitors and estimate its revenue.
This is just one example of the new information, or “alternative data”, that is now available to analysts to help them make their predictions about future stock performance. In the past, analysts would make predictions based on firms’ public financial statements.
According to our research, the plethora of new sources of data has improved short-term predictions but worsened long-term analysis, which could have profound consequences.
Tweets, twits and credit card data
In a paper on alternative data’s effect on financial forecasting, we counted more than 500 companies that sold alternative data in 2017, a number that ballooned from fewer than 50 in 1996. Today, the alternative data broker Datarade lists more than 3,000 alternative datasets for sale.
In addition to satellite images, sources of new information include Google, credit card statistics and social media such as X or Stocktwits, a popular X-like platform where investors share ideas about the market. For instance, Stocktwits users share charts showing the evolution of the price of a given stock (e.g. Apple stock) and explanations of why the evolution predicts a price increase or decrease. Users also mention the launch of a new product by a firm and whether it makes them bullish or bearish about the firm’s stock.
Using data from the Institutional Brokers’ Estimate System (I/B/E/S) and regression analyses, we measured the quality of 65 million forecasts made by equity analysts from 1983 to 2017, comparing the analysts’ predictions with companies’ actual earnings per share.
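The quality measure itself boils down to simple arithmetic: compare the forecast with the realised earnings per share and average the error by forecast horizon. The sketch below uses invented numbers and column names, not the actual I/B/E/S schema:

```python
# Forecast accuracy by horizon: absolute error between forecast and
# realised EPS, scaled by the realised value so that errors are
# comparable across firms of different sizes.
import pandas as pd

forecasts = pd.DataFrame({
    "ticker":        ["AAA", "AAA", "BBB", "BBB"],
    "horizon_years": [1, 3, 1, 3],
    "forecast_eps":  [2.10, 2.80, 0.95, 1.60],
    "actual_eps":    [2.05, 2.20, 0.99, 1.10],
})

forecasts["abs_error"] = (forecasts["forecast_eps"] - forecasts["actual_eps"]).abs()
forecasts["scaled_error"] = forecasts["abs_error"] / forecasts["actual_eps"].abs()

# A larger mean scaled error at longer horizons is the pattern we found.
print(forecasts.groupby("horizon_years")["scaled_error"].mean())
```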
We found, as others had, that the availability of more data explains why stock analysts have become progressively better at making short-term projections. We went further, however, by asking how this alternative data affected long-term projections. And we found that over the same period that saw a rise in accuracy of short-term projections, there was a drop in validity of long-term forecasts.
More data, but limited attention
Because of its nature, alternative data – information about firms in the moment – is useful mostly for short-term forecasts. Yet longer-term analysis – looking one to five years into the future – is a more consequential judgment.
Previous papers have demonstrated the common-sense proposition that analysts have a limited amount of attention. If analysts have a large portfolio of firms to cover, for example, their divided attention begins to yield diminishing returns.
We wanted to know whether the increased accuracy of short-term forecasts and the declining accuracy of long-term predictions – which we had observed in our analysis of the I/B/E/S data – were due to a concomitant proliferation of alternative sources of financial information.
To investigate this proposition, we analyzed all discussions of stocks on Stocktwits that took place between 2009 and 2017. As might be expected, certain stocks like Apple, Google or Walmart generated much more discussion than those of small companies that aren’t even listed on the Nasdaq.
We conjectured that analysts who followed stocks that were heavily discussed on the platform – and so, who were exposed to a lot of alternative data – would experience a larger decline in the quality of their long-term forecasts than analysts who followed stocks that were little discussed. And after controlling for factors such as firms’ size, years in business and sales growth, that’s exactly what we found.
We inferred that because analysts had easy access to information for short-term analysis, they directed their energy there, which meant they had less attention for long-term forecasting.
The broader consequences of poor long-term forecasting
The consequences of this inundation of alternative data may be profound. When assessing a stock’s value, investors must take into account both short- and long-term forecasts. If the quality of long-term forecasts deteriorates, there is a good chance that stock prices will not accurately reflect a firm’s value.
Moreover, a firm would like to see the value of its decisions reflected in the price of its stock. But if a firm’s long-term decisions are incorrectly taken into account by analysts, it might be less willing to make investments that will only pay off years from now.
In the mining industry, for instance, it takes time to build a new mine – perhaps nine or 10 years before an investment starts producing cash flows. Companies might be less willing to make such investments if their stocks are undervalued because market participants have less accurate forecasts of these investments’ impact on firms’ cash flows – the subject of another paper we are working on.
The example of investment in carbon reduction is even more alarming. That kind of investment also tends to pay off in the long run, when global warming will be an even bigger issue. Firms may have less incentive to make the investment if the worth of that investment is not quickly reflected in their valuation.
Practical applications
The results of our research suggest that it might be wise for financial firms to separate teams that research short-term results and those that make long-term forecasts. This would alleviate the problem of one person or team being flooded with data relevant to short-term forecasting and then also expected to research long-term results. Our findings are also noteworthy for investors looking for bargains: though there are downsides to poor long-term forecasting, it could present an opportunity for those able to identify undervalued firms.
Thierry Foucault has received funding from the European Research Council (ERC).
Noor Bin Ladin, a right-wing influencer, stridently declares “I don’t want to eat the bugs” on a talk show hosted by a former adviser to US President Donald Trump. Laurent Duplomb, a senator from the conservative Les Républicains party in France, informs his colleagues that the French would be eating “insects without their knowledge”. Bartosz Kownacki, an MP from the nationalist Law and Justice party in Poland, suggests that opposition politicians write “instead of chicken, eat a worm” on their election materials, arguing that “this is their real election programme.” Thierry Baudet, a leader of the far-right Forum for Democracy party in the Netherlands, shouts “No way! No way!” while holding up a bag of mealworms in front of protesting farmers. Politicians in Lega, a far-right party in Italy, warn that the European Union is planning to “impose” the eating of insects on citizens in the bloc – and a Lega electoral campaign includes a billboard-sized image of a person popping an enormous cricket into their mouth, next to the caption, “Let’s change Europe before it changes us.”
During the 2020s, commentators and politicians across the right-wing political spectrum have amplified an Internet-based conspiracy theory that elite forces are conspiring to make us all eat insects. Often rallying under the slogan “I will not eat the bugs,” right-wing and far-right figures have come out in force against human consumption of insects. Many of these people assert that the EU is planning to force bug-eating on the general public while devastating traditional agriculture and meat consumption under the guise of the European Green Deal, the bloc’s plan to eliminate greenhouse gases by 2050 and decouple economic growth from resource use. Opposing insect-eating has become a symbolic way to protest EU environmental policies, express scepticism of and hostility toward Brussels, and villainize political opponents. Closer inspection reveals that the conspiracy theory underlying such opposition has much older and more sinister resonances.
“Spreading disinformation”
Insect eating (entomophagy) remains a minor practice in Europe and North America, although alternative protein sources do play a role in the EU’s move toward a sustainable future. So far, the European Commission has approved frozen, dried and powdered forms of Tenebrio molitor (yellow mealworm larva), Locusta migratoria (migratory locust), Acheta domesticus (house cricket) and Alphitobius diaperinus (the lesser mealworm larva) for human consumption. But the market for insect powder in foods like bread, pasta and sports bars remains small. Although insects are common food in many parts of the world, consumers in the West, where insects are more commonly used to provide protein in animal feed, are reluctant to eat bugs for historical reasons based in ideas of uncleanliness and primitiveness. So, based on the facts, there seems to be little to no reason for statements such as those made by Rumen Petkov of Bulgaria’s ABV party, who said that EU approval of insect consumption is a “crime against Europe” and that the European Commission is “prepared to kill our European children”.
What led to the rapid spread of this conspiracy theory? Noor Bin Ladin’s remarks give us a clue. During her talk show appearance, Bin Ladin described her words as a message for Klaus Schwab to take to his “masters”. Schwab is the founder and executive chair of the World Economic Forum. Early in the Covid pandemic, Schwab and the WEF produced a set of proposals titled “the Great Reset”, which called for an overhaul of various world systems to produce a stakeholder-driven capitalism that would lead to a more socially and environmentally responsible future. Conspiracists seized on and branded “the Great Reset” as a new iteration of a conspiracy theory known as the New World Order – an imagined global governance system meant to control the lives of everyone. Both the Great Reset and the New World Order lead back to much older and broader antisemitic conspiracy theories that hold that elite Jewish financiers run the world with their hands on invisible levers of power. All these narratives tap into feelings of futility and hopelessness about the future.
US right-wing media personality Tucker Carlson called a 2023 episode of his show, which included a heavy focus on Schwab and the WEF, “Let Them Eat Bugs”, a title that gestures at the remark allegedly made by Marie Antoinette, the last queen of France, when she heard about people suffering from a lack of bread before the French Revolution: “Let them eat cake”. With this title, Carlson is aiming to emphasize that the elite are hopelessly out of touch and have contempt for farmers and the average man, whom they want to force to eat bugs. Like the French bedbug scare in late 2023, right-wing alarm around insect-eating has connections to the spread of anti-EU Russian propaganda. Russian news outlets have suggested that Europeans are so poor and food deprived as a result of sanctions connected to the war in Ukraine that they have been reduced to eating insects. As the European Digital Media Observatory (EDMO) writes, insects are “delicious treats for actors with interest in spreading disinformation against the EU”.
Symbols for dehumanization
The desire to stir up fear about the minor level of European and US insect consumption is not based on the risk of rapid growth in the insect market, but on the power to arouse disgust and fear itself. Insects have long been used as symbols to stir revulsion and paint opponents as objects of physical and moral disgust. During times of political extremism, insects have featured repeatedly in efforts to distance, devalue and dehumanize minorities. Armenians were called locusts during the Armenian genocide, and Jews were compared to lice in Nazi Germany. In the period prior to the ethnic genocide of Tutsis in Rwanda, some Hutus repeatedly called Tutsis “cockroaches” on public radio. The right wing’s current fetishization of insect-eating serves as a narrative to cast political opponents as morally repulsive, even if not labelling them as bugs themselves.
For some figures on the right, insect consumption symbolizes the worst of Eurocentric liberalism – seen as a movement so void of a positive political vision that the only possible future it offers is one of impoverishment and bug-eating. They point to an elite who they claim will go on feasting on meat while forcing mealworms and fly larvae on the rest of us. It’s a potent image. At a moment in which people on the right and the left seem unable to imagine a better political future together, it becomes easier to demonize climate policy-minded leaders as a group of disgusting hypocrites plotting to create a society of contrived scarcity where the general population is reduced to eating bugs.
Meanwhile, since 2015, scientists have been releasing papers warning that the global food system faces genuine structural risks. In a future of environmental disruption, trade wars and real risks of food shortages and famine, we may need all the calories we can get – insect-based or otherwise.
Out of curiosity, I bought a bag of cricket flour last fall. The crickets resulted in a delicious, nutty-flavoured cecina, well… crickcina. So far, none of my friends will try it. They’re missing out.
D. D. Moore does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than their research organisation.
Source: The Conversation – By Florian Bonnet, demographer and economist specialising in territorial inequalities, Ined (Institut national d’études démographiques)
The political decisions made during 2020 and 2021 to combat the Covid-19 pandemic profoundly altered daily life. Professionally, societies faced partial unemployment and widespread adoption of remote work; personally, individuals endured lockdowns and social distancing measures. These interventions aimed to reduce infection rates and ease pressure on healthcare systems, with the primary public health goal of minimizing deaths.
More than five years after the pandemic began, what do we know about its impact on human longevity? Here’s a closer look.
A decline in global life expectancy
Initial assessments of the pandemic’s toll have been refined over time. According to a World Health Organization (WHO) report published in May 2024, global life expectancy declined by 1.8 years between 2019 and 2021, erasing a decade of progress. These estimates rely on “excess mortality”, a metric that measures the difference between observed mortality during the pandemic and expected mortality in its absence.
Excess mortality can be quantified using different indicators, such as the number of excess deaths. However, comparing this indicator between countries of different sizes and age structures can be challenging. Another informative metric is the loss of life expectancy at birth, calculated globally by organisations such as the WHO.
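The arithmetic behind these indicators is straightforward. A minimal sketch, with invented figures, shows both the excess-death count and why normalising by population matters when comparing countries of different sizes:

```python
# Excess mortality: observed deaths minus the deaths expected from the
# pre-pandemic trend. All figures below are invented for illustration;
# real estimates model the expected baseline carefully.
expected   = {"Country A": 600_000, "Country B": 95_000}   # baseline deaths
observed   = {"Country A": 675_000, "Country B": 112_000}  # pandemic-year deaths
population = {"Country A": 60_000_000, "Country B": 10_000_000}

for country in expected:
    excess = observed[country] - expected[country]
    # Per-100,000 rates make countries of different sizes comparable,
    # which raw excess-death counts do not.
    rate = excess / population[country] * 100_000
    print(f"{country}: {excess:+,} excess deaths ({rate:.0f} per 100,000)")
```

Here Country A has far more excess deaths in absolute terms, but Country B is hit harder per capita – the kind of distinction that life-expectancy-based indicators also capture.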
The regular calculation, publication and dissemination of excess mortality indicators are vital for comparing the pandemic’s impact across countries at the national level. However, it is important to recognise that the pandemic did not affect all areas within countries equally. Variability in the severity of the pandemic’s impact often stemmed from differing confinement strategies implemented to contain the virus.
This uneven distribution highlights the need to quantify these indicators at a more granular geographical level. Such localised analysis can reveal the regions most severely affected, providing valuable insights into the pandemic’s effects and enabling the development of targeted response strategies.
In 2020, significant declines in life expectancy were observed in northern Italy and Spain
Figure 1 illustrates the spatial distribution of estimated losses of life expectancy in 2020. These losses were highest in northern Italy and central Spain. In the Italian regions of Bergamo and Cremona, life expectancy dropped by nearly four years, while Piacenza experienced a decline of three and a half years. In Spain, the regions of Segovia, Ciudad Real, Cuenca and Madrid saw losses of approximately three years.
The losses were even more pronounced among men (data not presented here), who were disproportionately affected by the pandemic. In Cremona, the decline in life expectancy among men reached nearly five years, while in Bergamo, it was close to four and a half years.
Figure 1: Estimated loss of observed life expectancy at birth (e0) in 2020 across 569 regions in 25 European countries. Estimates are for both sexes combined. Provided by the author
Eastern Europe, particularly Poland, along with eastern Sweden and northern and eastern France, also experienced significant, though less severe, declines. In France, the Paris region and areas near the German border recorded the highest losses, ranging from 1.5 to 2 years.
In contrast, other regions saw much smaller impacts. This is particularly true for southern Italy, much of Scandinavia and Germany, southern parts of the United Kingdom, and western France. In these regions, observed life expectancy is close to what would have been expected in the absence of the pandemic. In France, the implementation of lockdown measures in March and November likely prevented the pandemic from spreading across the entire country from the initial clusters in the north and east.
In 2021, a shift in the pandemic toward Eastern Europe
Figure 2 shows the estimated losses of life expectancy in 2021. At a glance, the regions most affected by excess mortality during the Covid-19 pandemic differed significantly from those in 2020. The most substantial losses were concentrated in Eastern Europe.
Figure 2: Estimated loss of observed life expectancy at birth (e0) in 2021 across 569 regions in 25 European countries. Estimates are for both sexes combined. Provided by the author
Among regions where life expectancy declined by more than two years, 61 of Poland’s 73 regions, 12 of the Czech Republic’s 14 regions, all eight Hungarian regions, and seven of Slovakia’s eight regions were affected. In contrast, only one Italian region and one Spanish region experienced losses exceeding two years, despite these countries being heavily impacted in 2020.
Germany saw much greater losses in 2021 than in 2020, particularly in its eastern regions, where declines often exceeded 1.5 years. In southern Saxony, Halle and Lusatia, losses approached two years. Conversely, Spain and Scandinavia recorded the lowest declines in life expectancy.
In France, the losses were more uniform than in 2020, generally ranging from 0 to 1.5 years. The highest loss occurred in the Parisian suburbs, particularly Seine-Saint-Denis, where life expectancy fell by 1.5 years – or two years for men.
What is the overall assessment for these two years?
To determine the overall impact of 2020 and 2021 in terms of life expectancy loss, we used an indicator that sums up the years of life lost due to the pandemic over this two-year period. This method allows us to rank the 569 European regions.
The regions most affected were Pulawy, Bytom and Przemyski in southeastern Poland, along with Kosice and Presov in eastern Slovakia. Among the top 50 regions, Eastern Europe dominated, with 36 Polish regions, six Slovakian regions, two Czech regions, one Hungarian region, and both Lithuanian regions included. Italian regions such as Cremona, Bergamo and Piacenza also ranked high, falling between the 15th and 30th positions. In France, Seine-Saint-Denis ranked 81st, while all other French regions were outside the top 100.
It is crucial to analyse the impact of a crisis like the Covid-19 pandemic at a fine geographical scale, as within-country disparities can be significant. This was particularly evident in Italy in 2020, where the north was far more affected than the south, and in Germany in 2021, with stark differences between the west and the east.
Our study highlighted the severe impact of the pandemic in specific European regions, where life expectancy losses exceeded three years. The most affected regions shifted over time, moving from areas with traditionally high life expectancy (such as northern Italy, central Spain and the greater Paris region) in 2020 to regions with traditionally lower life expectancy (Eastern Europe) in 2021. France was relatively spared compared to the rest of Europe, with the notable exception of Seine-Saint-Denis.
The coming years will be critical in determining whether life expectancy levels can return to their long-term trajectories or if the pandemic has caused lasting structural changes in certain regions.
The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no affiliations other than their research organisation.
Source: The Conversation – By Małgorzata Zachara-Szymańska, Jean Monnet Fellow, Professor of International Relations, Jagiellonian University, European University Institute
The European Union will have to strike a deal with US President Donald Trump on tariffs, NATO, and the stationing of US troops in EU countries. A trade war with the US will further weaken the already modest growth prospects in EU countries. Europe also still lacks a clear plan for how to defend itself if the US were to withdraw from its security system. Turning NATO into a more “Europeanized” alliance will require the development of a homegrown European military-industrial complex, and these things take time.
More broadly, Trump openly challenges the postwar international order – an order shaped jointly by the US and Europe. He disregards international trade rules, sees no purpose in most international organizations, and his calls to take over or annex the Panama Canal, Greenland and Canada violate the principles of self-determination and respect for international agreements. The EU can’t stop him, but it must choose: focus energy on resisting the erosion of international law and diplomacy or implement a pragmatic strategy of damage control. The latter demands leverage – bargaining chips – and sustained dialogue with Washington, however strained the politics.
This kind of strategic exercise is something Poland has been quietly mastering for years. It recently signed an agreement with a US firm to build its first nuclear power plant, and the Pentagon has approved the sale of state-of-the-art AIM-120D3 air-to-air missiles to Warsaw.
Just 35 years ago, Poland was a struggling, post-communist state plagued by corruption, lacking democratic traditions and having no experience in a market economy. Today, it is projected to have the fastest-growing European economy in the Organisation for Economic Co-operation and Development (OECD) in 2025. Its political institutions are far from flawless, but they have proven resilient. The key to Poland’s progress has been its ability to skilfully navigate the transatlantic space – strengthening military resilience through its ties with the United States, while also bolstering its economy with support from European Union cohesion funds.
Vito Corleone is wounded and furious
Poland has long been seen in Europe as the eager Atlanticist – sometimes as naive, sometimes as reckless. In 2003, when the continent was deeply divided over the Iraq War, Poland defied European opinion and sent troops to contribute to the US-led invasion. European leaders accused Warsaw of acting as Washington’s Trojan horse in European public debate. French president Jacques Chirac even described Poland’s stance as “infantile” and “dangerous”, famously declaring that Central and Eastern European countries had “missed a good opportunity to shut up”.
However, at the start of the 21st century, Warsaw was focused on strengthening its security and international standing. It got what it wanted, even at the cost of lost lives, a tarnished image, and bitter disappointments, as the expected lucrative contracts for Polish companies to participate in the reconstruction of Iraq never materialized. For the first time since World War II, a Polish contingent gained real combat experience. It became obvious that the army was in urgent need of modernization, and that modernization later occurred. It gives me no pleasure that Poland participated in an illegal war. But as an analyst, I can’t ignore the political and military benefits that followed.
In The Godfather Doctrine: A Foreign Policy Parable, published in 2009, political analysts John C. Hulsman and A. Wess Mitchell likened Poland to Enzo the baker, a character who is loyal and steady, standing guard for the Corleone family in the seminal 1972 film. In their allegory, the US is the wounded Don Vito Corleone, struggling to retain influence, while his sons scramble to save the family’s power.
While pop culture analogies have their limits, they often offer sharp insights. Western European countries now face a defining question: what kind of game do they want to play as their long-standing ally appears to spiral inward? Should they seize this moment to engage in confrontation – like rival mafia families in The Godfather trilogy – or secure what resources they can from a fading superpower to shore up their own vulnerabilities?
Two loyalties, one strategy
The reality is that Polish society is as pro-European as it is pro-American. It is also the case that the dual allegiance lost credibility when the populist Law and Justice party, in power from 2015 to 2023, adopted a combative stance toward Brussels and Berlin, isolating Poland diplomatically and weakening its position as a trustworthy European partner.
The European Commission accused the Law and Justice government of breaching EU treaty law on multiple fronts. Poland faced infringement procedures over its violations of environmental standards, its refusal to accept refugees under the bloc’s relocation mechanism, and its reforms of the common courts. What sparked outrage across Europe, and within Poland itself, was the dismantling of an already conservative abortion law, coupled with a brutal hate campaign targeting the LGBTQ+ community.
Yet even after years in power, Law and Justice failed to shift public opinion about being part of the EU: in 2022, a survey by the Public Opinion Research Centre (CBOS) showed that 92% of Poles expressed support for membership – the highest level recorded since 1994. Since joining the bloc in 2004, EU-funded investments have become permanent features of Poland’s landscape. They include new highways, restored historical landmarks, the Warsaw metro, the port of Szczecin, and widespread access to high-speed Internet. In late 2023, a democratic coalition won the national election, and former European Council president Donald Tusk returned to power for a third term as prime minister after previously serving in the role from 2007 to 2014.
Today, there are few illusions in Warsaw about Donald Trump’s negative impact on transatlantic relations: after his announcement of new tariffs on April 2, Tusk called them “a severe and unpleasant blow” coming “from our closest ally”. Nonetheless, Tusk has put forward a vision that appears to align with the US president’s expectations of Europe taking more responsibility for its own security. The potential missile deal with Washington is part of his strategy.
‘Secure Europe’
“Secure Europe” is the official theme of Poland’s current presidency of the Council of the European Union – unsurprising for a country whose historical memory teaches that without security, nothing else is possible. Situated between Germany and Russia, Poland has a long history of struggling against more powerful conquerors, often finding itself too weak to survive. As a result, it was absent from the map of Europe for over 100 years, divided between Prussia, Russia and the Austro-Hungarian empire. When it regained its statehood after the first world war, it began the difficult task of building a multiethnic, democratic society, but the second world war soon followed. Lacking powerful allies after the war, Poland saw German occupation replaced by Soviet domination, lasting almost half a century.
That’s why Polish troops fought in the NATO-led ISAF mission in Afghanistan and in the US-led invasion of Iraq, earning operational credibility and proving their reliability within the transatlantic alliance. Even before Trump’s first term, Poland was one of the few NATO countries meeting its 2% defence spending target. Today, it spends more than 4% – a higher share of GDP than even the United States. In 2014, when Russia illegally annexed Crimea, Poland had the ninth-largest armed forces in NATO. Today, it ranks third, behind only the US and Turkey, with over 200,000 personnel.
What Poles have long understood – and what much of Europe was slow to acknowledge – is that when Russia operates in imperial mode, it responds only to force. For years, Poland sought to act as Europe’s interpreter of the Russian psyche, but few were willing to listen. Preoccupied with lucrative energy deals and diplomatic overtures, German and French leaders dismissed Polish warnings as paranoia or Russophobia, brushing aside clear red flags.
Could Poland’s long-honed strategy of balancing loyalties across the Atlantic offer a new model for European foreign policy? In a world where old alliances are being tested and new rules are being written, its rationale might point to the pragmatic path forward. For Poles, the EU is more than just a political project – it was the fulfilment of a long-held dream of breaking free from the historical burden of constant threat and dependence. If Poland has been right about Russia all along, then perhaps it’s time to consider whether it might have something to tell us about the US, too.
Małgorzata Zachara-Szymańska does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.
Source: The Conversation – France – By Jeffrey Fields, Professor of the Practice of International Relations, USC Dornsife College of Letters, Arts and Sciences
People watch fire and smoke from an Israeli airstrike on an oil depot in Tehran, June 15, 2025. Stringer/Getty Images
In 1951, Iran’s parliament chose a new prime minister, Mohammad Mossadegh, who led legislators to vote to take control of the Anglo-Iranian Oil Company, expel the company’s British owners and declare their intent to turn oil profits into investments for the Iranian people. The United States feared a disruption to the global oil supply and worried that Iran would fall prey to Soviet influence. The British feared losing cheap Iranian oil.
President Dwight Eisenhower decided it was best to get rid of Mossadegh. Operation Ajax, a joint CIA-UK operation, convinced the shah – the country’s monarch – to dismiss Mossadegh and force him from power. Mossadegh was replaced by a far more Western-friendly prime minister handpicked by the CIA.
Protesters in Tehran demand the establishment of an Islamic republic. AP Photo/Saris
1979: Revolutionaries overthrow the shah and take hostages
After more than 25 years of relative stability in US-Iran relations, the Iranian people had grown discontented with the social and economic conditions that developed under the dictatorial rule of Shah Mohammad Reza Pahlavi. Mass protests forced the shah into exile in early 1979, and an Islamic republic was declared.
Iranian students at the US embassy in Tehran display a blindfolded American hostage to the crowd in November 1979. AP Photo
In October 1979, President Jimmy Carter agreed to let the shah travel to the United States for advanced medical treatment. Outraged Iranian students stormed the US embassy in Tehran on November 4, taking 52 Americans hostage. That persuaded Carter to sever diplomatic relations with Iran on April 7, 1980.
Two weeks later, the US military launched a mission to rescue the hostages, but it failed and several aircraft crashed, killing eight American service members.
The shah died in Egypt in July 1980, but the hostages were not released until January 20, 1981, after 444 days in captivity.
An Iranian cleric, left, and an Iranian soldier wear gas masks to protect against Iraqi chemical weapons attacks in May 1988. Kaveh Kazemi/Getty Images
1980-1988: The US tacitly sides with Iraq
In September 1980, Iraq invaded Iran, escalating the regional rivalry and religious differences between the two countries: Iraq was governed by Sunni Muslims but its population was majority Shiite; Iran was led, and mostly populated, by Shiites.
The US feared the conflict would limit the flow of oil from the Middle East and wanted to make sure it did not affect its close ally Saudi Arabia.
The US backed Iraqi leader Saddam Hussein in his fight against the anti-American Iranian regime. As a result, the US largely turned a blind eye to Iraq’s use of chemical weapons against Iran.
US officials tempered their usual opposition to those illegal and inhumane weapons because the State Department did not “want to play into Iran’s hands” by fueling its propaganda against Iraq. The war ended in a stalemate in 1988, having killed more than 500,000 soldiers and 100,000 civilians.
1981-1986: The US secretly sells weapons to Iran
The US imposed an arms embargo after designating Iran a state sponsor of terrorism in 1984. That left the Iranian military, in the midst of its war with Iraq, desperate for weapons, aircraft and vehicle parts to keep fighting. Despite the embargo, Reagan administration officials secretly arranged arms sales to Iran, in part hoping to secure the release of American hostages held in Lebanon.
The last shipment, of anti-tank missiles, was delivered in October 1986. The following month, a Lebanese magazine revealed the arrangement. The disclosure set off the Iran-Contra scandal in the United States, as it emerged that Reagan administration officials had collected money from Iran for the weapons and illegally funneled those funds to anti-socialist rebels in Nicaragua – the Contras.
At a mass funeral for 76 of the 290 people killed in the downing of Iran Air Flight 655, mourners hold a poster depicting the incident. AP Photo/CP/Mohammad Sayyad
1988: The US Navy shoots down Iran Air Flight 655
In July 1988, the US Navy cruiser USS Vincennes exchanged fire with Iranian gunboats in the Persian Gulf. During or just after that exchange of fire, the Vincennes’ crew mistook a passing civilian Airbus passenger jet for an Iranian F-14 fighter. They shot it down, killing all 290 people aboard.
The US called it a “tragic and regrettable accident”, but Iran believed the plane was downed intentionally. In 1996, the US agreed to pay Iran US$131 million in compensation.
1997-1998: The US seeks contact
In August 1997, a moderate reformist, Mohammad Khatami, won Iran’s presidential election.
US President Bill Clinton sensed an opportunity and sent a message to Tehran through the Swiss ambassador there, proposing direct talks between the two governments.
Shortly afterward, in early January 1998, Khatami gave an interview to CNN in which he expressed “respect for the great American people”, condemned terrorism and recommended an “exchange of professors, writers, scholars, artists, journalists and tourists” between the US and Iran.
But the supreme leader, Ayatollah Ali Khamenei, disagreed, and the mutual overtures had borne little fruit by the time Clinton’s term ended.
In his 2002 State of the Union address, President George W. Bush branded Iran, Iraq and North Korea an “Axis of Evil” that sponsored terrorism and sought weapons of mass destruction, further straining relations.
Inside these buildings at Iran’s Natanz nuclear facility, technicians enrich uranium. AP Photo/Vahid Salemi
2002: Iran’s nuclear program raises alarm
That same year, an exiled Iranian opposition group revealed that Iran had been secretly building nuclear facilities. This violated the terms of the Nuclear Non-Proliferation Treaty, which Iran had signed and which required countries to disclose their nuclear energy-related facilities to international inspectors.
One of those formerly secret facilities, Natanz, housed centrifuges for enriching uranium, which could be used in civilian nuclear reactors or enriched further to make weapons.
Starting in 2005, cyberattacks by the US and Israeli governments targeted the centrifuges at Natanz with custom-built malicious software that became known as Stuxnet.
An excerpt from the document sent from Iran, via the Swiss government, to the US State Department in 2003, apparently seeking US-Iran talks. Washington Post via Scribd
In May 2003, senior Iranian officials quietly approached the State Department through the Swiss embassy in Iran, seeking “a dialogue in mutual respect” that would address four broad issues: nuclear weapons, terrorism, Palestinian resistance and stability in Iraq.
Hardliners in the Bush administration were not interested in any major reconciliation, although Secretary of State Colin Powell favored dialogue and other officials had met with Iran to discuss al-Qaida.
When Iranian hardliner Mahmoud Ahmadinejad was elected president of Iran in 2005, the opportunity evaporated. The following year, Ahmadinejad made his own overture to Washington in an 18-page letter to President Bush. The letter was broadly dismissed.
After a decade of fruitless attempts to curb Iran’s nuclear ambitions, the Obama administration pursued direct diplomacy starting in 2013.
Iran, the US, China, France, Germany, Russia and the United Kingdom signed the resulting deal – the Joint Comprehensive Plan of Action – in 2015. It sharply limited Iran’s ability to enrich uranium and subjected Iran’s compliance to monitoring and verification by international inspectors.
In exchange, Iran was granted relief from international and US economic sanctions. Although inspectors regularly certified that Iran was complying with the deal’s terms, President Donald Trump withdrew from the agreement in May 2018.
2020: A US drone strike kills Iranian Gen. Qassem Soleimani
On January 3, 2020, a US drone fired a missile that killed Gen. Qassem Soleimani, leader of Iran’s elite Quds Force. Analysts considered Soleimani the second-most-powerful man in Iran, after Supreme Leader Ayatollah Khamenei.
At the time, the Trump administration claimed Soleimani was directing an imminent attack on US assets in the region, but officials have not provided clear evidence to support that claim.
Hamas’ brazen October 7, 2023 attack on Israel provoked a fearsome Israeli military response that continues today and severely weakened Iran’s allies in the region, particularly Hamas, which carried out the attack, and Hezbollah in Lebanon.
2025: Trump 2.0 and Iran
Trump saw an opportunity to forge a new nuclear deal with Iran and to pursue other commercial agreements with Tehran. Once sworn in for his second term, the US president named Steve Witkoff, a real estate investor and friend of his, as special envoy for the Middle East, tasked with leading the negotiations.
Negotiations between Washington and Tehran toward a nuclear agreement began in April, but the two countries did not reach a deal. They were planning a further round of talks when Israel struck Iran with a series of airstrikes on June 13, forcing the White House to reconsider its position.
In the early hours of June 22, the US moved forcefully in an attempt to cripple Iran’s nuclear capability, bombing three nuclear facilities and causing what Pentagon officials described as “severe damage”. Iran vowed to retaliate.
This article has been updated to reflect the US bombing of Iranian nuclear facilities on June 22, 2025.
Jeffrey Fields receives funding from the Carnegie Corporation of New York and Schmidt Futures.