Why government support for religion doesn’t necessarily make people more religious

Source: The Conversation – USA (3) – By Brendan Szendro, Faculty Lecturer in Political Science, McGill University

History offers plenty of lessons about what happens when governments support faith groups – and it doesn’t always help them. cosmonaut/iStock via Getty Images Plus

The IRS will offer religious congregations more freedom to endorse political candidates without jeopardizing their tax-exempt status, the agency said in a July 2025 court filing. President Donald Trump has previously vowed to abolish the Johnson Amendment, which bars charitable nonprofits from taking part in political campaigns – although the latest move simply reinterprets the rule.

Celebrating the change, House Speaker Mike Johnson highlighted an argument that’s popular among some conservatives: that the Constitution does not actually require the separation of church and state.

Thomas Jefferson, who coined the phrase, did not intend “to keep religion from influencing issues of civil government,” Johnson wrote in a July 12 op-ed published on the social platform X. “The Founders wanted to protect the church from an encroaching state, not the other way around.”

Officials in several red states have challenged long-standing norms surrounding religion and state, ranging from introducing prayer and Bibles in public classrooms to attempts to secure government funding for religious schools.

Conservative thinkers have long pushed for closer ties between religion and the government, arguing that religious institutions can create strong communities. In my own research, I’ve found that mass shootings are less likely in a more religious environment.

For critics, of course, attempts to lower the wall of separation between church and state raise constitutional concerns. The First Amendment states that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.” What’s more, critics fear that recent attempts to lower barriers between church and state favor conservative Christian groups over other faiths.

But as a scholar of religion and politics, I believe another reason for caution is being overlooked. Research indicates that strong relationships between religion and state can be a factor that actually decreases religious participation, rather than encouraging it.

All or nothing

Some scholars suggest that religious institutions operate like businesses in a marketplace, competing for believers. Government policies toward religion can change the balance of power between competing firms the same way that economic policies can affect markets for consumer goods.

At a glance, it might seem like government support would strengthen religious institutions. In reality, it can backfire, whether or not the government promotes one particular faith above others. In some cases, adherents who cannot practice religion on their own terms opt out of practicing it entirely.

In Israel, for example, Orthodox Jewish institutions receive government recognition that more liberal Jewish denominations do not. Orthodox authorities are allowed to manage religious sites, run public religious schools and perform marriages. Many couples who do not want to get married under Orthodox law, or cannot, hold a ceremony abroad or register as a common-law marriage.

A couple embraces side by side as they observe a small wedding in a wooded area.
Guests attend a wedding in Israel’s Ein Hemed National Park in December 2017.
AP Photo/Ariel Schalit

In fact, many scholars refer to Israel as an example of a religious “monopoly.” Because the government sponsors a particular branch, Orthodox Judaism, Jewish citizens sometimes face an “all or nothing” choice. The country’s Jewish population is sharply divided between people who are religiously observant and people who identify as secular.

Government involvement can also hurt religious institutions by making them seem less independent, decreasing people’s trust. In a 2023 study of 54 Christian-majority countries, political scientists Jonathan Fox and Jori Breslawski found that some adherents felt that religious institutions become less legitimate when backed by the government. In addition, support from the state decreased people’s confidence in government.

Their findings built on previous research showing that the public is less likely to contribute to faith-based charities and attend religious services when the government offers funding for religious institutions.

In fact, many of the world’s lowest rates of religiosity are found in wealthy countries that have official churches, or had one until relatively recently, such as Sweden. Others have a history of separating people of different faiths into their own schools and other institutions, such as Belgium and the Netherlands.

History lessons

Perhaps the strongest example of how government support for religion can decrease religious participation is found in the former Soviet Union and its allies.

During the Cold War, Soviet officials sought to stamp out religious activity among their citizens. However, policies to repress independent religious institutions worked hand in hand with policies to co-opt religious institutions that would work with the government. Access to religious spaces made it easier for officials to spy on members and punish clergy who protested their rule.

In Hungary, the Communist Party sponsored government-run Catholic churches that were cut off from the Vatican. In Romania, the regime integrated formerly Catholic Churches into a state Orthodox Church. In the former Czechoslovakia, meanwhile, the Communist Party paid clergy’s salaries to keep them subservient.

To this day, many countries in the former Eastern Bloc have low rates of religious participation. In Russia, for example, a majority of citizens call themselves Orthodox Christians, and the church wields influence in politics. Yet only 16% of adults say religion is “very important” in their lives.

While scholars can point to the legacy of overt repression as a source of low religiosity, government support of religious institutions is also a lingering factor. Most post-Soviet states inherited systems that require religious groups to register, and they only provide funding to faiths that the government considers legitimate. Similar policies remain common in southeastern and central Eastern Europe.

In recent years, some countries in the region, including Russia and Hungary, have experienced democratic backsliding at the hands of populist leaders who also politicize religion for their own gain. Because of low rates of religious practice in such countries, religious leaders may welcome government support.

Two men, one in black clerical robes, stand stiffly in an ornate room with gold-framed paintings.
In this photograph distributed by the Russian government news agency Sputnik, President Vladimir Putin and Russia’s Orthodox Patriarch, Kirill, visit the Annunciation Church of the Alexander Nevsky Lavra in Saint Petersburg on July 28, 2024.
Alexey Danichev/Pool/AFP via Getty Images

Free market for faith

Most wealthy countries have witnessed steep declines in religiosity in the modern era. The United States is an outlier.

Overall, the percentage of Americans belonging to a religious congregation is declining, as is the share of Americans who regularly attend worship services. However, the percentage of Americans who are intensely religious has remained unchanged over the past several decades. Around 29% of Americans report praying several times a day, for example, and just under 7% say they attend religious services more than once a week.

Some religion scholars argue that the “free-market approach” – where all faiths are free to compete for worshippers, without government interference or preference – is what makes America relatively religious. In other words, they believe that this so-called “American exception” is because of the separation between church and state, not in spite of it.

Time will tell whether conservatives’ push for closer collaboration between religion and government will continue, or whether it will have its intended effects. History suggests, however, that governments’ attempts to strengthen particular religious communities may backfire.

The Conversation

Brendan Szendro does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why government support for religion doesn’t necessarily make people more religious – https://theconversation.com/why-government-support-for-religion-doesnt-necessarily-make-people-more-religious-258541

Family dynasties: How the Rothschild family built an empire… and why it failed in the United States

Source: The Conversation – France (in French) – By Liena Kano, Professor, Haskayne School of Business, University of Calgary

Mayer Amschel Rothschild (1744–1812), founder of the House of Rothschild, pictured with other members of his family: Mayer Karl Rothschild, Nath. Mayer v. Rothschild, Baron Lion. Nath. Rothschild, James v. Rothschild, Baron Alb. v. Rothschild, Sal. Mayer v. Rothschild, Anselm Sal. Frh. von Rothschild and Anselm [sic] Mayer von Rothschild. Wikimedia Commons

What lessons for weathering crises can be drawn from the Rothschild firm, the centuries-old company founded in the 18th century? A resilience built on a shared organizational language, long-term goals, brand reputation and routines. But the family business ran up against its limits when it botched its expansion in the United States: emotion-driven decision-making, with no delegation to partners outside the family.


Family businesses are a vital part of the economic landscape in France. In today’s turbulent economic environment, marked by geopolitical tensions, technological disruption and shifting trade structures, being a competitive business matters more than ever.

Around the world, family businesses have shown remarkable resilience in the face of external shocks. Some of the world’s oldest companies are family firms that have survived world wars, revolutions, natural disasters and pandemics.

Among these long-standing multinationals is the Rothschild firm, a centuries-old European family-owned investment bank. Our case study of the Rothschild family, based on historical analysis, highlights how enduring relationships, reliable organizational routines and long-term goals gave the firm significant advantages in international business.

At the same time, family structures can help create a “bifurcation bias”: the tendency to favor family resources over non-family resources of equal or greater value. Our study shows that this bias can undermine a firm’s international expansion, particularly in distant and complex markets.

A brief history of the Rothschilds

Mayer Amschel Rothschild (1744–1812), founder of the dynasty.
Wikimedia Commons

Originally a trading and pawnbroking business, the firm was founded in the late 18th century by Mayer Amschel Rothschild, born in the Jewish ghetto of Frankfurt (Hesse, Germany).

Rothschild and his wife, Guttle, had ten children, including five sons: Amschel, Salomon, Nathan, Carl and James.

In 1798, Rothschild sent Nathan to Manchester, England. As a ripple effect, the firm expanded in that country and evolved from purely mercantile operations into financial transactions. By the 1820s, Rothschild had become a multinational bank, with Amschel, Salomon, Nathan, Carl and James heading banking houses in Frankfurt, Vienna, London, Naples and Paris, respectively.

Advantages and drawbacks of family ties

Nathan Mayer Rothschild (1777–1836). Founder of the so-called London branch, he financed the British war effort against Napoleon’s ambitions.
Wikimedia Commons

In the 19th century, the Rothschilds’ strategy of relying on family members served the firm well.

Resourceful, the five Rothschild brothers corresponded in a coded language. They also shared a common pool of resources at a time when exchanging financial data was rare in international banking.

Their close family ties allowed the brothers to move information, money and goods across borders at a speed and scale their competitors could not match.

This internal cohesion protected the Rothschilds’ business, smoothing transactions and keeping the firm resilient through periods of major political upheaval: a bold vision during the Napoleonic Wars, rallying to the republic in 1848 and economic patriotism during the First World War.

This same overreliance on family became a handicap when the Rothschild firm expanded into the United States.

Bifurcation bias

The Rothschilds showed interest in the U.S. market as early as the 1820s. Their repeated attempts to send family members to the United States to expand operations failed, as none was willing to stay, preferring the comforts of European life.

August Belmont served as the Rothschild bank’s agent in the United States.
Wikimedia Commons

Unable to establish a family foothold in the country, the Rothschilds appointed an agent, August Belmont, to run their U.S. operations on their behalf in 1837.

Belmont had no authority to make entrepreneurial decisions, make investments or conclude deals. Nor did he have unrestricted access to capital; he was never entrusted with an official Rothschild mandate or recognized as a full partner. The Rothschilds were unwilling to delegate such decisions to anyone who was not a direct male descendant of the founder, Mayer Amschel Rothschild.

This inability to use Belmont as a bridge between the family – with its proven experience, capabilities, routines and connections in Europe – and the U.S. market – with its growing opportunities and the valuable networks Belmont was beginning to build – ultimately prevented the Rothschild firm from replicating its success in the United States.

The Rothschilds were eventually eclipsed in America by the Barings and J.P. Morgan banks. Both competitors took a different path in the market, opening full-fledged U.S. subsidiaries under their corporate brands, with substantial funding and decision-making autonomy.

Safeguards

This “bifurcation bias” does not always have an immediate negative impact. The Rothschilds’ biased governance practices remained inconsequential – so long as there were enough capable heirs to run the bank’s operations scattered across the globe.

In the short and medium term, the family’s ties, proven routines and mutual trust built a reservoir of resilience that sustained the bank throughout the 19th century, one of the most politically unstable periods in European history.

But when a firm’s international ambitions outgrow the size of the family, bifurcation bias can hurt competitiveness, both in international markets and at home.

At some point, family businesses must shift from emotional, biased decision-making to efficient governance systems. That means integrating non-family managers and selecting resources, locations and projects that carry no emotional significance.

The Tata family, which heads an Indian industrial group founded in 1868, put safeguards in place to prevent bifurcation bias.
Shutterstock

Many successful family businesses build tools into their governance systems to detect and root out biased behavior. For example, family multinationals such as Merck in Germany, Cargill in the United States and the Tata Group in India have created checks and safeguards that keep decision-makers from thinking solely in family terms.

The most effective strategies for guarding against bifurcation bias bring outside scrutiny into corporate decision-making: appointing CEOs from outside the family, establishing independent boards of directors, hiring consultants and granting decision-making powers to partners.

Lessons for family businesses

Today, as the global business environment faces arguably unprecedented instability, companies are looking for ways to build resilience.

Multigenerational family businesses must learn to guard against bifurcation bias to thrive in international markets. Their demonstrated ability to withstand external shocks offers valuable lessons for other companies.

How can non-family businesses emulate the Rothschilds’ success and longevity? The Rothschild case teaches the importance of a shared organizational language, long-term goals, stable routines and an emphasis on brand reputation.

These strategies can help any business, family-owned or not, build resilience in volatile times.

The Conversation

Liena Kano has received funding from SSHRC.

Alain Verbeke has received funding from SSHRC.

Luciano Ciravegna has received funding from INCAE Business School, where he holds the Steve Aronson Chair.

Andrew Kent Johnston does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. Dynasties familiales : comment la famille Rothschild a bâti un empire… et pourquoi elle a échoué aux États-Unis – https://theconversation.com/dynasties-familiales-comment-la-famille-rothschild-a-bati-un-empire-et-pourquoi-elle-a-echoue-aux-etats-unis-260735

Is that wildfire smoke plume hazardous? New satellite tech can map smoke plumes in 3D for better air quality alerts at neighborhood scale

Source: The Conversation – USA – By Jun Wang, Professor of Chemical and Biochemical Engineering, University of Iowa

Smoke from Canadian wildfires prompted air quality alerts in Chicago as it blanketed the city on June 5, 2025. Scott Olson/Getty Images

Canada is facing another dangerous wildfire season, with burning forests sending smoke plumes across the provinces and into the U.S. again. The pace of the 2025 fires is reminiscent of the record-breaking 2023 wildfire season, which exposed millions of people in North America to hazardous smoke levels.

For most of the past decade, forecasters have been able to use satellites to track these smoke plumes, but the view was only two-dimensional: The satellites couldn’t determine how close the smoke was to Earth’s surface.

The altitude of the smoke matters.

If a plume is high in the atmosphere, it won’t affect the air people breathe – it simply floats by, far overhead.

But when smoke plumes are close to the surface, people are breathing in wildfire chemicals and tiny particles. Those particles, known as PM2.5, can get deep into the lungs and exacerbate asthma and other respiratory and cardiac problems.

An animation shows mostly green (safe) air quality from ground-level monitors. However, in Canada, closer to the fire, the same plume shows high levels of PM2.5.
An animation on May 30, 2025, shows a thick smoke plume from Canada moving over Minnesota, but the air quality monitors on the ground detected minimal risk, suggesting it was a high-level smoke plume.
NOAA NESDIS Center for Satellite Applications and Research

The Environmental Protection Agency uses a network of ground-based air quality monitors to issue air quality alerts, but the monitors are few and far between, meaning forecasts have been broad estimates in much of the country.

Now, a new satellite-based method that I and colleagues at universities and federal agencies have been working on for the past two years is able to give scientists and air quality managers a 3D picture of the smoke plumes, providing detailed data of the risks down to the neighborhood level for urban and rural areas alike.

Building a nationwide smoke monitoring system

The new method uses data from a satellite that NASA launched in 2023 called the Tropospheric Emissions: Monitoring of Pollution, or TEMPO, satellite.

A map shows blue over the Dakotas, Nebraska and western parts of Minnesota and Iowa. Pink is over Pennsylvania up through Maine.
Data from the TEMPO satellite shows the height of the smoke plume, measured in kilometers. Light blue areas are closest to the ground, suggesting the worst air quality. Pink areas suggest the smoke is more than 2 miles (3.2 kilometers) above the ground, where it poses little risk to human health. The data aligns with air monitor readings taken on the ground at the same time.
NOAA NESDIS Center for Satellite Applications and Research

TEMPO makes it possible to determine a smoke plume’s height by providing data on how much the oxygen molecules absorb sunlight at the 688 nanometer wavelength. Smoke plumes that are high in the atmosphere reflect more solar radiation at this wavelength back to space, while those lower in the atmosphere, where there is more oxygen to absorb the light, reflect less.

Understanding the physics allowed scientists to develop algorithms that use TEMPO’s data to infer the smoke plume’s altitude and map its 3D movement in nearly real time.
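The inversion described above can be illustrated with a toy sketch. This is not the operational TEMPO algorithm: the lookup table of 688-nanometer reflectances below is invented for illustration, and real retrievals invert a radiative-transfer forward model rather than a hand-written curve. The sketch only captures the qualitative relationship from the article, namely that higher plumes reflect more light in the oxygen band, so a measured reflectance can be mapped back to an approximate height.

```python
# Toy illustration of O2-band height retrieval. A hypothetical forward
# model lists (plume height in km, modeled 688 nm reflectance) pairs;
# higher plumes sit above more of the atmosphere's oxygen, so they
# reflect more sunlight back to space. We invert the model by
# piecewise-linear interpolation between table entries.

FORWARD_MODEL = [  # (height_km, reflectance) -- illustrative values only
    (0.5, 0.12),
    (1.0, 0.15),
    (2.0, 0.20),
    (4.0, 0.27),
    (8.0, 0.33),
]

def estimate_plume_height_km(reflectance: float) -> float:
    """Invert the toy forward model: reflectance -> plume height (km)."""
    pts = FORWARD_MODEL
    # Clamp to the table's range rather than extrapolating.
    if reflectance <= pts[0][1]:
        return pts[0][0]
    if reflectance >= pts[-1][1]:
        return pts[-1][0]
    # Find the bracketing pair and interpolate linearly between them.
    for (h0, r0), (h1, r1) in zip(pts, pts[1:]):
        if r0 <= reflectance <= r1:
            frac = (reflectance - r0) / (r1 - r0)
            return h0 + frac * (h1 - h0)
    raise ValueError("reflectance not bracketed")  # unreachable after clamping

if __name__ == "__main__":
    # A low observed reflectance implies a plume near the surface,
    # where it poses the greatest health risk.
    print(estimate_plume_height_km(0.14))  # between 0.5 and 1.0 km
```

The same monotonic-inversion idea carries over to the real system, where the forward model comes from radiative-transfer physics and the retrieval runs per pixel across TEMPO's field of view.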

An illustration shows a satellite, Sun and smoke plume at different heights. Higher plumes reflect more light.
Aerosol particles in high smoke plumes reflect more light back into space. Closer to Earth’s surface, there is more oxygen to absorb light at the 688 nanometer wavelength, so less light is reflected. Satellites can detect the difference, and that can be used to determine the height of the smoke plume.
Adapted from Xu et al, 2019, CC BY

By combining TEMPO’s data with measurements of particles in the atmosphere, taken by the Advanced Baseline Imager on the NOAA’s GOES-R satellites, forecasters can better assess the health risk from smoke plumes in almost real time, provided clouds aren’t in the way.

That’s a big jump from relying on ground-based air quality monitors, which may be hundreds of miles apart. Iowa, for example, had about 50 air quality monitors reporting data on a recent day for a state that covers 56,273 square miles. Most of those monitors were clustered around its largest cities.

NOAA’s AerosolWatch tool currently provides a near-real-time stream of wildfire smoke images from its GOES-R satellites, and the agency plans to incorporate TEMPO’s height data. A prototype of this system from my team’s NASA-supported research project on fire and air quality, called FireAQ, shows how users can zoom in to the neighborhood level to see how high the smoke plume is. However, the prototype is currently updated only once a day, so the data is delayed, and it cannot provide smoke height data where clouds are also overhead.

Wildfire health risks are rising

Fire risk is increasing across North America as global temperatures rise and more people move into wildland areas.

While air quality in most of the U.S. improved between 2000 and 2020, thanks to stricter emissions regulations on vehicles and power plants, wildfires have reversed that trend in parts of the western U.S. Research has found that wildfire smoke has effectively erased nearly two decades of air quality progress there.

Our advances in smoke monitoring mark a new era in air quality forecasting, offering more accurate and timely information to better protect public health in the face of these escalating wildfire threats.

The Conversation

Prof. Wang’s group has been supported by NOAA, NASA and the Office of Naval Research to develop a research algorithm to retrieve aerosol layer height. The computer code for the research algorithm was shared with colleagues at NOAA.

ref. Is that wildfire smoke plume hazardous? New satellite tech can map smoke plumes in 3D for better air quality alerts at neighborhood scale – https://theconversation.com/is-that-wildfire-smoke-plume-hazardous-new-satellite-tech-can-map-smoke-plumes-in-3d-for-better-air-quality-alerts-at-neighborhood-scale-259654

Weaponization of space: What weapons, what threats, what law?

Source: The Conversation – in French – By Katia Coutant, Research Associate at the ENS-PSL Space Chair, Université Paris Nanterre – Université Paris Lumières

Humanity long knew only two domains of conflict: land and sea. Air was added in the early 20th century; a hundred years later came the era of cyber… but what about space? While armed clashes between spaceships still belong to science fiction, the weaponization of space is already underway. But what exactly do we mean by this increasingly fashionable expression?


À première vue, on pourrait définir l’arsenalisation de l’espace comme l’ensemble des technologies, des activités et des capacités qui visent à permettre le combat et les actions offensives dans l’espace extra-atmosphérique, par exemple à travers le placement d’armes en orbite.

Pourtant, le terme et son champ exact restent sujets à débat, et l’expression de « course à l’armement dans l’espace » pourrait lui être préférée.

« Arsenalisation » ou « militarisation » ?

L’usage croissant de l’expression « arsenalisation » reflète la conception grandissante de l’espace extra-atmosphérique comme étant un terrain de conflictualité. La définition même de l’arsenalisation est débattue, et soulève l’enjeu de la distinction entre l’arsenalisation et la militarisation de l’espace.

La militarisation couvrirait l’utilisation des moyens spatiaux en soutien à des opérations militaires et aurait ainsi un objet différent de l’arsenalisation. Prenons des exemples : un satellite d’observation utilisé pour surveiller des mouvements de troupes au sol relève a minima de la militarisation. Un dispositif en orbite capable de détruire un satellite adverse, au moyen d’un laser ou d’un missile, relève à coup sûr de l’arsenalisation.

Le groupe d’experts gouvernementaux chargé d’étudier de nouvelles mesures concrètes de prévention d’une course aux armements dans l’espace, rattaché à l’Organisation des Nations unies (ONU), prend en compte toutes les menaces liées aux infrastructures spatiales, y compris les vecteurs « Terre-espace, espace-Terre, espace-espace, et Terre-Terre ». Cette conception large permet de mieux identifier les risques liés à la prolifération d’armes en lien avec le secteur spatial, sans se limiter au piège de la cible géographique parfois associé à l’arsenalisation.

Quelles sont les menaces existantes ?

Il existe différents types de systèmes ou armes ciblant les infrastructures spatiales. Les tests antisatellitaires à ascension directe, dont la légalité est questionnée à l’ONU actuellement, consistent en la capacité pour un État de tirer sur ses propres satellites depuis la Terre.

Si, jusqu’ici, les tirs réalisés par les États ont consisté en des tests sur leurs propres satellites, ces tirs soulèvent toutefois le problème de la création de débris et des risques de collision de ceux-ci avec des satellites, et prouvent que ces États sont capables de viser (et d’atteindre) un satellite ennemi.

La question « Existe-t-il des armes positionnées dans l’espace qui pourraient viser la Terre ? » est souvent posée. Bien que ce soit en théorie envisageable, en pratique, les États n’ont pas recours à ce type de projets.

Les coûts de développement et de maintenance ainsi qu’une efficacité questionnable expliquent la préférence pour les techniques terrestres.

Surtout, de nombreuses technologies duales – c’est-à-dire d’utilisation civile et militaire – existent. Un satellite d’observation peut aussi bien servir à surveiller la déforestation qu’à repérer des infrastructures militaires. Ce flou complique considérablement la mise en place de règles claires.

Si l’on se limite aux technologies visant des cibles dans l’espace depuis l’espace, quelques exemples d’armes existent.

Déjà, en 1962, les États-Unis ont mené un essai d’explosion nucléaire dans l’espace, appelé Starfish Prime. Il a rendu inopérables de nombreux satellites, et les États ont par la suite décidé d’interdire les essais d’armes nucléaires dans l’espace. À ce jour, il n’y a donc pas d’armes nucléaires dans l’espace.

Au-delà des armes nucléaires, différentes technologies ont été essayées. Du côté soviétique, dès les années 1970, la station Almaz avait expérimenté l’installation d’un canon en orbite sur un satellite. Puis, en 2018, un satellite russe a été repéré très près d’un satellite franco-italien. Cette technologie, dite satellite « butineur », peut interférer avec le fonctionnement de la cible.

En réponse à cette situation, la France développe le système Laser Toutatis qui vise à équiper des satellites de défense d’un laser capable de neutraliser tout objet suspect qui s’en approcherait.

Guerre et paix dans l’espace

Cette présence d’armes confirme que l’espace est un lieu de conflictualité. Pourtant, dès 1967, grâce au Traité de l’espace, le principe de l’utilisation pacifique de l’espace a été acté.

Cette expression ne signifie pas que les armes sont illégales dans l’espace : leur présence n’est pas interdite tant qu’elles ne sont pas utilisées. Une nuance : en vertu de ce traité, si l’espace extra-atmosphérique peut accueillir certaines armes, les corps célestes, eux, demeurent entièrement exempts de toute arme, quelle qu’en soit la nature.

Ainsi, si l’on se limite à l’orbite autour de la Terre, l’arsenalisation est (pour l’instant) permise sauf pour les armes nucléaires et de destruction massive. Cela ne veut pas pour autant dire que le recours à la force armée est autorisé dans l’espace.

La Charte des Nations unies, également applicable dans l’espace, interdit le recours à la force ; en revanche est permise la légitime défense. C’est la raison pour laquelle les acteurs présentent leurs nouvelles technologies sous le nom de « technologie de défense active ».

Des initiatives visant à encadrer davantage ces pratiques existent, et des négociations sont en cours dans le cadre de la Conférence du désarmement à Genève.


La série « L’envers des mots » est réalisée avec le soutien de la Délégation générale à la langue française et aux langues de France du ministère de la culture.

The Conversation

Katia Coutant a reçu des financements du ministère de l’enseignement supérieur et de la recherche.

ref. Arsenalisation de l’espace : quelles armes, quelles menaces, quel droit ? – https://theconversation.com/arsenalisation-de-lespace-quelles-armes-quelles-menaces-quel-droit-260465

‘Fibremaxxing’ is trending – here’s why that could be a problem

Source: The Conversation – UK – By Lewis Mattin, Senior Lecturer, Life Sciences, University of Westminster

Soluble fibre. Towfiqu ahamed barbhuiya/Shutterstock.com

You need fibre. That much is true. But in the world of online health trends, what started out as sound dietary advice has spiralled into “fibremaxxing” – a push to consume eye-watering amounts in the name of wellness.

In the UK, NHS guidelines suggest that an adult should consume at least 30g of fibre a day. Children and teens typically need much less.

Yet despite clear guidelines, most Britons fall short of their daily fibre target. One major culprit? The rise of ultra-processed foods, or UPFs. UK adults now get over 54% of their daily calories from ultra-processed foods. For teenagers, it’s nearer 66%.

This matters because UPFs are typically low in fibre and micronutrients, while being high in sugar, salt and unhealthy fats. When these foods dominate our plate, naturally fibre-rich whole foods get pushed out.

Studies show that as ultra-processed food intake increases, fibre consumption decreases, along with other essential nutrients. The result is a population falling well short of its daily fibre target.

Dietary fibre is essential for good health as part of a balanced diet. And it is best found in natural plant-based foods.

Adding high-fibre foods to your meals and snacks throughout a typical day – switching to wholegrain bread at breakfast, keeping the skin on fruit such as apples, adding lentils and onions to an evening chilli, and eating a handful of pumpkin seeds or Brazil nuts between meals – would help an average person hit the 30g-a-day target.
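As a rough illustration of how those swaps add up, here is a minimal Python sketch that tallies a day’s fibre against the 30g target. The per-serving gram values are loose assumptions chosen for demonstration, not authoritative nutrition data.

```python
# Illustrative tally of a day's fibre against the NHS 30g target.
# Per-serving fibre values below are rough assumptions for demonstration only.
DAILY_TARGET_G = 30

day = {
    "wholegrain bread (2 slices)": 6.0,
    "apple, skin on": 4.0,
    "lentils in an evening chilli": 8.0,
    "onions in the chilli": 2.0,
    "pumpkin seeds (handful)": 5.0,
    "Brazil nuts (handful)": 5.0,
}

total = sum(day.values())
print(f"Total fibre: {total:.0f} g (target: {DAILY_TARGET_G} g)")
print("Target met" if total >= DAILY_TARGET_G else "Short of target")
```

The point of the sketch is simply that several ordinary swaps, none dramatic on its own, are enough to reach the guideline without supplements.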

Displacement

With fibremaxxing, what makes the trend potentially dangerous is the removal of other food groups, such as proteins, carbohydrates and fats, and their replacement with fibre-dense foods, supplements or powders. This is where the risks could outweigh the benefits of increasing fibre: as far as I’m aware, no robust human studies have examined long-term fibre intakes above 40g a day. (Some advocates of fibremaxxing suggest consuming between 50 and 100g a day.)

Eating too much fibre too quickly – especially without enough water – can lead to bloating, cramping and constipation. It can also cause a buildup of gas that can escape at the most inconvenient moments, like during a daily commute.

Commuters looking suspiciously at someone off-camera.
Someone’s been fibremaxxing.
William Perugini/Shutterstock.com

Rapidly increasing fibre intake or consuming too much can interfere with the absorption of essential micronutrients like iron, which supports normal body function, as well as macronutrients, which provide the energy needed for movement, repair and adaptation.

However, it’s important to remember that increasing fibre in your diet offers a wide range of health benefits. It supports a healthy digestive system by promoting regular bowel movements and reducing the occurrence of inflammatory bowel disease.

Soluble fibre helps to regulate blood sugar levels by slowing the absorption of glucose, making it especially helpful for people at risk of type 2 diabetes. It also lowers LDL (bad) cholesterol, reducing the risk of heart disease. Fibre keeps you feeling full for longer, which supports healthy weight management and appetite regulation. These findings are all well documented.

Additionally, a high-fibre diet has been linked to a lower risk of certain cancers, particularly colon cancer, by helping to remove toxins efficiently from the body. Gradually increasing fibre intake to recommended levels – through a balanced, varied diet – can offer real health benefits.

Given the evidence, it’s clear that many of us could benefit from eating more fibre – but within reason.

Until we know more, it’s safest to stick to fibre intake within current guidelines, and get it from natural sources rather than powders or supplements. Fibre is vital, but more isn’t always better. Skip the social media fads and aim for balance: whole grains, veg, nuts and seeds. Your gut – and your fellow commuters – will thank you.

The Conversation

Lewis Mattin is affiliated with The Physiological Society, The Society for Endocrinology, In2Science & UKRI funded Ageing and Nutrient Sensing Network.

ref. ‘Fibremaxxing’ is trending – here’s why that could be a problem – https://theconversation.com/fibremaxxing-is-trending-heres-why-that-could-be-a-problem-261280

Netflix is now using generative AI – but it risks leaving viewers and creatives behind

Source: The Conversation – UK – By Edward White, PhD Candidate in Psychology, Kingston University

Netflix’s recent use of generative AI to create a building collapse scene in the sci-fi show El Eternauta (The Eternaut) marks more than a technological milestone. It reveals a fundamental psychological tension about what makes entertainment authentic.

The sequence represents the streaming giant’s first official deployment of text-to-video AI in final footage. According to Netflix, it was completed ten times faster than traditional methods would have allowed.

Yet this efficiency gain illuminates a deeper question rooted in human psychology. When viewers discover their entertainment contains AI, does this revelation of algorithmic authorship trigger the same cognitive dissonance we experience when discovering we’ve been seduced by misinformation?

The shift from traditional CGI (computer-generated imagery) to generative AI is the most significant change in visual effects (VFX) since computer graphics displaced physical effects.

Traditional VFX requires legions of artists meticulously crafting mesh-based models, spending weeks perfecting each element’s geometry, lighting and animation. Even the use of CGI with green screens demands human artists to construct every digital element from 3D models and programme the simulations. They have to manually key-frame each moment, setting points to show how things move or change.

Netflix’s generative AI approach marks a fundamental shift. Instead of building digital scenes piece by piece, artists simply describe what they want and algorithms generate full sequences instantly. This turns a slow, laborious craft into something more like a creative conversation. But it also raises tough questions. Are we seeing a new stage of technology – or the replacement of human creativity with algorithmic guesswork?



El Eternauta’s building collapse scene demonstrates this transformation starkly. What would once have demanded months of modelling, rigging and simulation work has been accomplished through text-to-video generation in a fraction of the time.

The economics driving this transformation extend far beyond Netflix’s creative ambitions.

The text-to-video AI market is projected to be worth £1.33 billion by 2029. This reflects an industry looking to cut corners after the streaming budget cuts of 2022. In that year, Netflix’s content spending declined 4.6%, while Disney and other major studios implemented widespread cost-cutting measures.

AI’s cost disruption is staggering. Traditional VFX sequences can cost thousands per minute. As a result, the average CGI and VFX budget for US films reached US$33.7 million (£25 million) per movie in 2018. Generative AI could lead to cost reductions of 10% across the media industry, and as much as 30% in TV and film. This will enable previously impossible creative visions to be realised by independent filmmakers – but this increased accessibility comes with losses too.
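To make the scale of those projections concrete, here is a back-of-envelope Python sketch applying the 10% and 30% reduction figures to the 2018 average per-film VFX budget. Combining those particular numbers is an illustrative assumption for this sketch, not a published estimate.

```python
# Back-of-envelope arithmetic for the projected AI cost reductions.
# The 10% (media-wide) and 30% (TV/film) figures come from the article;
# applying them to the 2018 average per-film VFX budget is an
# illustrative assumption, not a published estimate.
AVG_VFX_BUDGET_USD = 33_700_000  # average CGI/VFX budget per US film, 2018

def saving(budget: float, reduction: float) -> float:
    """Absolute dollar saving implied by a fractional cost reduction."""
    return budget * reduction

for label, cut in [("10% media-wide", 0.10), ("30% TV and film", 0.30)]:
    print(f"{label}: ${saving(AVG_VFX_BUDGET_USD, cut):,.0f} per film")
```

Even the conservative end of the range implies multi-million-dollar savings per film, which helps explain the commercial pressure behind the shift.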

The trailer for El Eternauta.

The OECD reports that 27% of jobs worldwide are at “high risk of automation” due to AI. Meanwhile, surveys by the International Alliance of Theatrical Stage Employees have revealed that 70% of VFX workers do unpaid overtime, and only 12% have health insurance. Clearly, the industry is already under pressure.

Power versus precision

While AI grants filmmakers unprecedented access to complex imagery, it simultaneously strips away the granular control that defines directorial vision.

As an experiment, film director Ascanio Malgarini spent a year creating an AI-generated short film called Kraken (2025). He used AI tools like MidJourney, Kling, Runway and Sora, but found that “full control over every detail” was “simply out of the question”.

Malgarini described working more like a documentary editor. He assembled “vast amounts of footage from different sources” rather than directing precise shots.

Kraken, the experimental AI short film by Ascanio Malgarini.

And it’s not just filmmakers who prefer the human touch. In the art world, studies have shown that viewers strongly prefer original artworks to pixel-perfect AI copies. Participants cited sensitivity to the creative process as fundamental to appreciation.

When applied to AI-generated content, this bias creates fascinating contradictions. Recent research in Frontiers in Psychology found that when participants didn’t know the origin, they significantly preferred AI-generated artworks to human-made ones. However, once AI authorship was revealed, the same content suffered reduced perceptions of authenticity and creativity.

Hollywood’s AI reckoning

Developments in AI are happening amid a regulatory vacuum. While the US Congress held multiple AI hearings in 2023, no comprehensive federal AI legislation exists to govern Hollywood’s use. The stalled US Generative AI Copyright Disclosure Act leaves creators without legal protections, as companies deploy AI systems trained on potentially copyrighted materials.

The UK faces similar challenges, with the government launching a consultation in December 2024 on copyright and AI reform. This included a proposal for an “opt-out” system, meaning creators could actively prevent their work from being used in AI training.

The 2023 Hollywood strikes crystallised industry fears about AI displacement. Screenwriters secured protections ensuring AI cannot write or rewrite material, while actors negotiated consent requirements for digital replicas. Yet these agreements primarily cover the directors, producers and lead actors who have the most negotiating power, while VFX workers remain vulnerable.

Copyright litigation is now beginning to dominate the AI landscape – over 30 infringement lawsuits have been filed against AI companies since 2020. Disney and Universal’s landmark June 2025 lawsuit against Midjourney represents the first major studio copyright challenge, alleging the AI firm created a “bottomless pit of plagiarism” by training on copyrighted characters without permission.

Meanwhile, federal courts in the US have delivered mixed rulings. A Delaware judge found against AI company Ross Intelligence for training on copyrighted legal content, while others have partially sided with fair use defences.

The industry faces an acceleration problem – AI advancement outpaces contract negotiations and psychological adaptation. AI is reshaping industry demands, yet 96% of VFX artists report receiving no AI training, with 31% citing this as a barrier to incorporating AI in their work.

Netflix’s AI integration shows that Hollywood is grappling with fundamental questions about creativity, authenticity and human value in entertainment. Without comprehensive AI regulation and retraining programs, the industry risks a future where technological capability advances faster than legal frameworks, worker adaptation and public acceptance can accommodate.

As audiences begin recognising AI’s invisible hand in their entertainment, the industry must navigate not just economic disruption, but the cognitive biases that shape how we perceive and value creative work.

The Conversation

Edward White is affiliated with Kingston University.

ref. Netflix is now using generative AI – but it risks leaving viewers and creatives behind – https://theconversation.com/netflix-is-now-using-generative-ai-but-it-risks-leaving-viewers-and-creatives-behind-261699

Yazidi genocide victims offered glimmer of hope for justice – but challenges remain

Source: The Conversation – UK – By Busra Nisa Sarac, Senior Lecturer in International Security and Gender Studies, University of Portsmouth

A French national called Sonia Mejri will stand trial for her alleged involvement in crimes committed against the Yazidi community, a Paris court ruled in early July. Mejri is accused of having joined the Islamic State (IS) group’s so-called caliphate in Iraq and Syria, and participating in its genocidal campaign against the Yazidi religious minority group 11 years ago.

At that time, IS overran the Sinjar region of northern Iraq and carried out atrocities against the civilian population. The Yazidi people were subjected to murder, rape, enslavement and forced conversion to Islam. Approximately 12,000 Yazidis were killed or abducted by IS, and around 250,000 fled to Mount Sinjar where they faced near starvation.

The Paris court’s ruling follows the prosecution of several other people across Europe in recent years for their role in enslaving Yazidis. These developments have offered the Yazidi community a glimmer of hope for justice.

In 2021, for example, a former member of IS called Taha al-Jumailly was convicted of genocide and crimes against humanity. A court in Frankfurt, Germany, ruled that he intended to eliminate the Yazidis by purchasing two women and enslaving them. This was the world’s first trial concerning the Yazidi genocide.

More recently, in 2024, a Dutch woman known as Hasna Aarab stood trial in The Hague, Netherlands, for charges also related to the enslavement of Yazidi women. She was sentenced to ten years in prison. Then, in February 2025, a Swedish woman called Lina Ishaq was convicted of committing genocide, crimes against humanity and gross war crimes against Yazidis in Syria.

Despite the fact that the international community has been slow in prosecuting members of IS for their roles in the genocide, these cases are a positive development. But it should also be noted that they are the result of years of advocacy and campaigning by Yazidi organisations and activists.

The Free Yezidi Foundation and Nadia’s Initiative are just two examples of organisations that have been fighting for justice and reparation since 2014.

Notwithstanding these developments, and the fact that IS lost control of its territory in Iraq and Syria in 2017, there are still significant challenges facing the Yazidi community. One pressing concern is the whereabouts of the more than 2,000 Yazidis who are still missing.

A few Yazidi women have emerged from different locations in recent years, which has made families hopeful. But the missing elderly women are now presumed dead and many others are believed to have been killed by airstrikes in the international military campaign against IS. These people are thought to be buried in mass graves.

Another concern is linked to the detention camps in northeast Syria, where suspected members of IS are detained indefinitely. A 2024 report by Amnesty International indicated that hundreds of Yazidis are probably being held in the camps.

This can be explained by two factors. First, Yazidi women in these camps may avoid identification due to fears of being separated from their children born into IS slavery. Yazidi leaders have declared that children born to IS members are not welcome and could never be assimilated into Yazidi society.

Second, it’s possible that some Yazidis in the camps no longer know their identity due to prolonged captivity and exposure to radical views from IS members. Both factors may prevent many Yazidis from returning to their communities, compounding the long-term consequences of the genocide.

The Al-Hol detention camp in north-eastern Syria.
The Al-Hol detention camp in north-eastern Syria, where many people with ties to IS are held.
Trent Inness / Shutterstock

Persistent security challenges

The Yazidis also continue to face persistent security challenges, as they lack the necessary infrastructure and support to rebuild their home towns. More than a decade on, 200,000 Yazidis remain displaced, with the majority living in makeshift camps. These camps are mainly located in Duhok, a city in the autonomous Kurdistan Region of Iraq.

The Kurdistan regional government has been actively working to close down or merge the displacement camps in an attempt to encourage the displaced families to return home. But a lack of infrastructure, including access to water, and limited employment opportunities continue to hinder their return and resettlement.

Iraq’s federal government has said it will give 4 million Iraqi dinars (roughly £2,250) to each Yazidi family that returns home, as well as offering interest-free bank loans. But the compensation scheme has now been paused due to a lack of funds. Even when it was offered, the amount was not enough to help people rebuild their lives in places that are in ruins.

The presence of various armed groups supported by different states in the region also threatens the safety and security of the Yazidis. Sinjar’s rugged terrain and remoteness from political centres has long encouraged groups, including the Kurdish Workers’ Party and Iraq’s Popular Mobilisation Forces, to establish transit routes there to support their allies in Iraq, Syria, Jordan and Turkey.




Read more:
How does the PKK’s disarmament affect Turkey, Syria and Iraq?


Sinjar is also a disputed territory, claimed by both the federal government in Baghdad and the Kurdish regional government in Erbil. Clashes between local militia groups continue to destabilise Sinjar, leading to the re-displacement of some Yazidis who had only recently returned, while deterring many others from returning even if they wanted to.

The trials of IS members have given Yazidis some hope for justice. But the problems that have persisted since 2014 have made it hard for them to return to their home towns, or to feel safe if they do. Until these issues are properly addressed, they will continue to weigh on the community in the years to come.

The Conversation

Busra Nisa Sarac does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Yazidi genocide victims offered glimmer of hope for justice – but challenges remain – https://theconversation.com/yazidi-genocide-victims-offered-glimmer-of-hope-for-justice-but-challenges-remain-261612

Madagascar : le marché de compensation carbone en pleine émergence

Source: The Conversation – in French – By Quentin Grislain, Chercheur en géographie politique, Cirad

Plantation d’acacia sur une parcelle de colline d’un membre du projet TERAKA. Quentin Grislain / Cirad, , Fourni par l’auteur

Depuis la fin des années 2000, les projets de compensation carbone se sont multipliés à Madagascar. Ces projets renvoient à des initiatives diverses, allant de programmes de reforestation et de boisement à la restauration des zones humides et à la prévention de la déforestation. Ils sont présentés par leurs promoteurs comme des solutions « gagnant-gagnant », offrant à la fois des avantages en termes d’atténuation du changement climatique et de développement socio-économique local.

Mais au-delà du discours et des promesses, de quoi parle-t-on lorsqu’on évoque les projets de compensation carbone à Madagascar ? Quels sont les contours de ces projets en termes d’échelle d’intervention, d’acteurs impliqués et de promesses d’engagement envers les communautés locales ?

Mes recherches portent sur l’analyse des interactions entre agriculture familiale et dynamiques territoriales au prisme des questions foncières. J’ai récemment mené une étude exploratoire entre les mois de janvier et juin 2025, avec l’appui d’une équipe d’étudiants de l’Ecole supérieure des sciences agronomiques de l’université d’Antananarivo. Elle apporte un éclairage sur l’ampleur et la diversité des projets de compensation carbone à Madagascar.

Une réglementation en évolution

À Madagascar, le décret 2021-1113 relatif à la régulation de l’accès au marché du carbone forestier avait introduit des dispositions attribuant à l’État la propriété de tous les crédits carbone générés dans le pays. Cela avait eu pour effet de limiter l’accès au marché du carbone pour les organisations non gouvernementales (ONG) et les acteurs du secteur privé.

Face à ce constat, le gouvernement malgache a initié, fin 2024, un processus de révision de ce décret. Ce processus a abouti le 6 juin 2025 à l’adoption du décret 2025-626 élargissant les droits d’accès au marché du carbone forestier. Il ouvre également des opportunités à d’autres types d’acteurs pour bénéficier des revenus issus de la vente de crédits carbone.

Désormais, toute personne physique ou morale, publique ou privée, nationale ou étrangère, peut, sous certaines conditions, générer des réductions d’émissions, en revendiquer la propriété et les commercialiser.

Néanmoins, à ce stade, les effets de ce nouveau cadre réglementaire sur le marché du carbone à Madagascar restent incertains.

État des lieux des projets de compensation carbone

Malgré cette incertitude, Madagascar a connu de nombreuses initiatives visant à établir un modèle commercial pour la conservation et la restauration des forêts. Cependant, seul un nombre limité de projets a dépassé la phase initiale de démarrage et a finalement été enregistré sur le marché volontaire du carbone.

Dix projets de carbone terrestre sont actuellement répertoriés à Madagascar dans les registres de compensation carbone. Au total, six projets ont progressé avec succès vers un statut d’enregistrement, tandis que quatre restent en cours de développement ou de validation. Parmi les projets enregistrés, quatre sont également confrontés à des retards de vérification.

La superficie totale des terres répertoriées pour l’ensemble de ces projets de compensation carbone s’élève à 894 026 hectares. Ces projets sont initiés par une diversité d’acteurs (État, ONG de conservation, entreprise étrangère, etc.) au nom de la lutte contre les crises du climat et de la biodiversité. Toutefois, ils peuvent engendrer une concurrence accrue pour les terres et constituer ainsi une menace supplémentaire pour l’agriculture familiale malgache.

Principaux types de projets de compensation carbone

Les données du registre de compensation carbone montrent que les projets sont très divers. Ils peuvent être regroupés en trois types : les projets de conservation à grande échelle, les projets communautaires et les investissements du secteur privé.

Depuis le début des années 2000, la finance carbone est reconnue comme un outil potentiel pour financer durablement la gestion des aires de conservation.

À Madagascar, le secteur de la compensation carbone comprend un petit nombre de projets à grande échelle menés par des ONG internationales telles que Conservation International et Wildlife Conservation Society (WCS), qui visent à générer des fonds supplémentaires pour les efforts de conservation grâce à la vente de crédits carbone.

Ces initiatives sont généralement mises en œuvre sur des terres appartenant à l’État. Elles sont gérées dans le cadre d’accords à long terme avec des ONG internationales – comme par exemple le projet d’aire protégée de la forêt de Makira, qui représente à lui seul plus de 350 000 hectares dans le registre Verified Carbon Standard de Verra, mis en œuvre par WCS.

Un autre type de projet vise à diversifier les revenus des agriculteurs locaux et à promouvoir le développement rural dans le cadre de projets de moindre envergure, dont la superficie est généralement inférieure à 10 000 hectares.

Par exemple, le projet Tahiry Honko, mis en œuvre par l’ONG britannique Blue Ventures dans l’aire marine protégée de Velondriake, au sud-ouest de Madagascar, se concentre sur le reboisement et la conservation de plus de 1 200 hectares de mangroves. Il vise également le développement de moyens de subsistance alternatifs pour les communautés locales.

Bien que les terres de mangroves appartiennent à l’État, l’association Velondriake cherche à obtenir des droits légaux pour gérer la forêt de mangroves. Cependant, l’interdiction nationale de l’exploitation des mangroves et les questions foncières non résolues restent des défis à relever, en particulier pour les ventes de crédits carbone.

Par ailleurs, quelques projets sont également menés par des acteurs du secteur privé tel que le projet Fagnako financé par Canopy Energy, une société française basée à Paris. Le projet intervient dans l’est de Madagascar. Il vise à réhabiliter des terres dégradées en plantant 3 millions d’arbres Pongamia, des espèces fourragères et des arbres fruitiers.

Pour ce faire, en décembre 2023, le projet a signé un bail emphytéotique de 35 ans avec la commune de Vohitranivona sur 10 500 ha.

Notre étude met en évidence que la taille des projets varie considérablement selon la nature des acteurs impliqués et leurs stratégies. Elle va de petites initiatives communautaires de moins de 2 000 hectares à des efforts REDD+ (Réduction des émissions dues à la déforestation et à la dégradation des forêts) à grande échelle couvrant plus de 300 000 hectares. Cela souligne les ambitions très diverses au sein du secteur des compensations carbone à Madagascar.

Formation de petits groupes de paysans à la mise en œuvre de pépinière artisanale dans la commune de Mahazoarivo dans le cadre du projet TERAKA.
Quentin Grislain / Cirad,, Fourni par l’auteur

Des promesses d’engagement variables

Investment commitments to local communities and territorial development vary considerably from one project to another. They range from simple training and awareness-raising initiatives for local communities to the creation of hundreds of jobs and the rehabilitation of community infrastructure.

The TERAKA community reforestation project, for example, takes an approach centred on spreading knowledge and building the capacity of rural households through various training programmes (tree planting, improved cookstoves, etc.).

The project also emphasises sharing the proceeds from carbon credit sales. However, it does not aim to create direct jobs or provide support for broader services.

By contrast, the Fagnako project aims to create employment opportunities, with an expected outcome of 300 permanent and 3,000 seasonal jobs, as well as to rehabilitate municipal infrastructure. However, the project document does not mention any revenue-sharing mechanism that would give local communities a direct share of the income generated by carbon credit sales.

Moreover, gaps can open up between what project developers announce and what is actually achieved on the ground.

Whether these commitments are fulfilled depends on a combination of factors. The proactive commitment of project developers plays a central role, particularly in including family farmers and recognising local land rights. Fulfilment also rests on binding commitments to strengthen transparency throughout the project life cycle.

Finally, financial resources should be aligned with the socio-economic benefits that project developers promise local populations. In general, the resources needed for local development activities in these projects are underestimated.

Tracking the evolution of carbon offset markets

Ultimately, this exploratory study shows that Madagascar's carbon market remains an emerging sector. It is characterised by a diverse set of actors with varied objectives, many projects still in the start-up phase, and a newly revised national regulatory framework on carbon. This situation calls for close attention from policymakers, researchers and civil society organisations.

In this context of uncertainty, further in-depth field research is essential. That need is all the more pressing today, following the adoption of decree 2025-626, which expands rights of access to the forest carbon market. The new regulation suggests that carbon offset projects will multiply across the country in the coming years.

In the absence of regulatory initiatives on transparency and accountability for project developers, this expansion could fuel a wave of "green grabs" carried out in the name of biodiversity and nature protection, threatening the livelihoods of millions of family farmers.

The Conversation

Quentin Grislain is hosted at FOFIFA (Centre national de la recherche appliquée au développement rural) in Madagascar. Through CIRAD, he has received funding from the Lincoln Institute of Land Policy. He has also benefited from the support of GIGA (German Institute of Global and Area Studies) and the Land Matrix initiative.

ref. Madagascar: the emerging carbon offset market – https://theconversation.com/madagascar-le-marche-de-compensation-carbone-en-pleine-emergence-261480

AI agents are here. Here’s what to know about what they can do – and how they can go wrong

Source: The Conversation – Global Perspectives – By Daswin de Silva, Professor of AI and Analytics, Director of AI Strategy, La Trobe University

George Peters / Getty Images

We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in “teams” or use tools to accomplish complex tasks.

The latest hot product is OpenAI’s ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single more powerful system which, according to the developer, “thinks and acts”.

These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do – as well as their drawbacks and risks – is rapidly becoming essential.

From chatbots to agents

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity, the conversational interface limited what could be done with the technology.

Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision.

Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory.

Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems.

Agents are also "tool users": they can call on software tools for specialised tasks – things such as web browsers, spreadsheets, payment systems and more.
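The loop behind such tool use can be sketched in a few lines of plain Python. This is a toy illustration, not any vendor's implementation: the `plan` function stands in for the large language model that would normally choose the next action, and the tool registry is invented for the example.

```python
# Toy sketch of a tool-using agent loop. The plan() stub stands in for an
# LLM; the tool names and logic are invented for illustration only.

TOOLS = {
    "search": lambda query: f"top results for {query!r}",
    "spreadsheet": lambda cells: sum(cells),
}

def plan(goal, observations):
    """Stub planner: a real agent would ask an LLM which tool to call next."""
    if not observations:
        return ("search", goal)            # step 1: gather information
    if len(observations) == 1:
        return ("spreadsheet", [1, 2, 3])  # step 2: process some numbers
    return ("done", None)                  # goal reached: stop

def run_agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):  # cap steps so errors cannot compound forever
        action, arg = plan(goal, observations)
        if action == "done":
            return observations
        observations.append(TOOLS[action](arg))
    return observations

print(run_agent("best laptop deals"))
```

The step cap in the loop mirrors the human-supervision guardrails that developers such as Anthropic and OpenAI recommend: without it, a mis-planning agent could call tools indefinitely.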

A year of rapid development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms.

Other AI developers were quick to follow. OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google’s Vertex AI and Meta’s Llama agents.

Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged “cheat at anything” agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialised for particular areas.

Coding and software engineering are at the vanguard here, with Microsoft’s Copilot coding agent and OpenAI’s Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more

One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete.

OpenAI’s Deep Research tackles complex tasks using multi-step online research. Google’s AI “co-scientist” is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more – and get more wrong

Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks.

OpenAI also says its ChatGPT agent is “high risk” due to potential for assisting in the creation of biological and chemical weapons. However, the company has not published the data behind this claim so it is difficult to judge.

But the kind of risks agents may pose in real-world situations are shown by Anthropic’s Project Vend. Vend assigned an AI agent to run a staff vending machine as a small business – and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food.

In another cautionary tale, a coding agent deleted a developer’s entire database, later saying it had “panicked”.

Agents in the office

Nevertheless, agents are already finding practical applications.

In 2024, Telstra rolled out Microsoft Copilot subscriptions at scale. The company says AI-generated meeting summaries and content drafts save staff an average of 1–2 hours per week.

Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as Canberra-based construction firm Geocon’s use of an interactive AI agent to manage defects in its apartment developments.

Human and other costs

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs.

People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss and injury.

The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents – especially for more complex tasks.

Learn about agents – and build your own

Despite these ongoing concerns, we can expect AI agents will become more capable and more present in our workplaces and daily lives. It’s not a bad idea to start using (and perhaps building) agents yourself, and understanding their strengths, risks and limitations.

For the average user, agents are most accessible through Microsoft Copilot Studio. This comes with inbuilt safeguards, governance and an agent store for common tasks.

For the more ambitious, you can build your own AI agent with just a few lines of code using the LangChain framework.

The Conversation

Daswin de Silva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI agents are here. Here’s what to know about what they can do – and how they can go wrong – https://theconversation.com/ai-agents-are-here-heres-what-to-know-about-what-they-can-do-and-how-they-can-go-wrong-261579