Why two tiny mountain peaks became one of the internet’s most famous images

Source: The Conversation – USA (2) – By Christopher Schaberg, Director of Public Scholarship, Washington University in St. Louis

The icon has various iterations, but all convey the same meaning: an image should be here. Christopher Schaberg, CC BY-SA

It’s happened to you countless times: You’re waiting for a website to load, only to see a box with a little mountain range where an image should be. It’s the placeholder icon for a “missing image.”

But have you ever wondered why this scene came to be universally adopted?

As a scholar of environmental humanities, I pay attention to how symbols of wilderness appear in everyday life.

The little mountain icon – sometimes with a sun or cloud in the background, other times crossed out or broken – has become the standard symbol, across digital platforms, to signal something missing or something to come. It appears in all sorts of contexts, and the more you look for this icon, the more you’ll see it.

You click on it in Microsoft Word or PowerPoint when you want to add a picture. You can purchase an ironic poster of the icon to put on your wall. The other morning, I even noticed a version of it in my Subaru’s infotainment display as a stand-in for a radio station logo.

So why this particular image of the mountain peaks? And where did it come from?

Arriving at the same solution

The placeholder icon can be thought of as a form of semiotic convergence, or when a symbol ends up meaning the same thing in a variety of contexts. For example, the magnifying glass is widely understood as “search,” while the image of a leaf means “eco-friendly.”

It’s also related to something called “convergent design evolution,” or when organisms or cultures – even if they have little or no contact – settle on a similar shape or solution for something.

In evolutionary biology, you can see convergent design evolution in bats, birds and insects, which all have wings but evolved them independently. Stilt houses emerged in various cultures across the globe as a way to build durable homes along shorelines and riverbanks. More recently, engineers in different parts of the world designed similar airplane fuselages independent of one another.

For whatever reason, the little mountain just worked across platforms to evoke open-ended meanings: Early web developers needed a simple shorthand way to present that something else should or could be there.

Depending on context, a little mountain might invite a user to insert a picture in a document; it might mean that an image is trying to load, or is being uploaded; or it could mean an image is missing or broken.

Down the rabbit hole on a mountain

But of the millions of possibilities, why a mountain?

In 1994, visual designer Marsh Chamberlain created a graphic featuring three colorful shapes as a stand-in for a missing image or broken link in the web browser Netscape Navigator. The shapes appeared on a piece of paper with a ripped corner. The ripped-paper motif still sometimes accompanies the mountain today, but it isn’t clear when the square, circle and triangle became a mountain.

A generic camera dial featuring various modes, with the 'landscape mode' – represented by two little mountain peaks – highlighted.
Two little mountain peaks are used to signal ‘landscape mode’ on many SLR cameras.
Althepal/Wikimedia Commons, CC BY

Users on Stack Exchange, a forum for developers, suggest that the mountain peak icon may trace back to the “landscape mode” icon on the dials of Japanese SLR cameras. It’s the feature that sets the aperture to maximize the depth of field so that both the foreground and background are in focus.

The landscape scene mode – visible on many digital cameras in the 1990s – was generically represented by two mountain peaks, with the idea that the camera user would intuitively know to use this setting outdoors.

Another insight emerged from the Stack Exchange discussion: The icon bears a resemblance to the Windows XP wallpaper called “Bliss.” If you had a PC in the years after 2001, you probably recall the rolling green hills with blue sky and wispy clouds.

The stock photo was taken by National Geographic photographer Charles O’Rear. It was then purchased by Bill Gates’ digital licensing company Corbis in 1998. The empty hillside in this picture became iconic through its adoption by Windows XP as its default desktop wallpaper image.

A colorful stock photo of green rolling hills, a blue sky and clouds.
If you used a PC at the turn of the 21st century, you probably encountered ‘Bliss.’
Wikimedia Commons

Mountain riddles

“Bliss” became widely understood as the most generic of generic stock photos, in the same way the placeholder icon became universally understood to mean “missing image.” And I don’t think it’s a coincidence that they both feature mountains or hills and a sky.

Mountains and skies are mysterious and full of possibilities, even if they remain beyond grasp.

Consider Japanese artist Hokusai’s “36 Views of Mount Fuji,” a series of woodblock prints from the 1830s – the most famous of which is probably “The Great Wave off Kanagawa,” where a tiny Mount Fuji can be seen in the background. Each print features the iconic mountain from a different perspective and is full of little details; all possess an ambiance of mystery.

A painting of a large rowboat manned by people on rolling waves with a large mountain in the background.
‘Tago Bay near Ejiri on the Tokaido,’ from Hokusai’s series ‘36 Views of Mount Fuji.’
Heritage Art/Heritage Images via Getty Images

I wouldn’t be surprised if the landscape icon on those Japanese camera dials emerged as a minimalist reference to Mount Fuji, Japan’s highest mountain. From some perspectives, Mount Fuji rises behind a smaller incline. And the Japanese photography company Fujifilm even took its name from the mountain.

The enticing aesthetics of mountains also reminded me of the environmental writer Gary Snyder’s 1965 translation of Han Shan’s “Cold Mountain Poems.” Han Shan – his name literally means “Cold Mountain” – was a Chinese Buddhist poet who lived in the late eighth century. “Shan” translates as “mountain” and is represented by the Chinese character 山, which also resembles a mountain.

Han Shan’s poems, which are little riddles themselves, revel in the bewildering aspects of mountains:

Cold Mountain is a house
Without beams or walls.
The six doors left and right are open
The hall is a blue sky.
The rooms are all vacant and vague.
The east wall beats on the west wall
At the center nothing.

The mystery is the point

I think mountains serve as a universal representation of something unseen and longed for – whether it’s in a poem or on a sluggish internet browser – because people can see a mountain and wonder what might be there.

The placeholder icon does what mountains have done for millennia, serving as what the environmental philosopher Margret Grebowicz describes as an object of desire. To Grebowicz, mountains exist as places to behold, explore and sometimes conquer.

The placeholder icon’s inherent ambiguity is baked into its form: Mountains are often regarded as distant, foreboding places. At the same time, the little peaks appear in all sorts of mundane computing circumstances. The icon could even be a curious sign of how humans can’t help but be “nature-positive,” even when on computers or phones.

This small icon holds so much, and yet it can also paradoxically mean that there is nothing to see at all.

Viewing it this way, an example of semiotic convergence becomes a tiny allegory for digital life writ large: a wilderness of possibilities, with so much just out of reach.

The Conversation

Christopher Schaberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why two tiny mountain peaks became one the internet’s most famous images – https://theconversation.com/why-two-tiny-mountain-peaks-became-one-the-internets-most-famous-images-268169

Supply-chain delays, rising equipment prices threaten electricity grid

Source: The Conversation – USA (2) – By Morgan Bazilian, Professor of Public Policy and Director, Payne Institute, Colorado School of Mines

High-voltage power lines run through an electrical substation in Florida. Joe Raedle/Getty Images

Two new data centers in Silicon Valley have been built but can’t begin processing information: The equipment that would supply them with electricity isn’t available.

It’s just one example of a crisis facing the U.S. power grid that can’t be solved simply by building more power lines, approving new power generation, or changing out grid software. The equipment needed to keep the grid running – transformers that regulate voltage, circuit breakers that protect against faults, high-voltage cables that carry power across regions, and steel poles that hold the network together – is hard to make, and materials are limited. Supply-chain bottlenecks are taking years to clear, delaying projects, inflating costs and threatening reliability.

Meanwhile, U.S. electricity demand is surging from several sources – electrification of home and business appliances and equipment, increased domestic manufacturing and growth in AI data centers. Without the right equipment, these efforts may take years longer and cost vast sums more than planners expect.

Not enough transformers to replace aging units

Transformers are key to the electricity grid: They regulate voltage as power travels across the wires, increasing voltage for more efficient long-distance transmission, and decreasing it for medium-distance travel and again for delivery to buildings.

The National Renewable Energy Laboratory estimates that the U.S. has about 60 million to 80 million distribution transformers in service. More than half of them are over 33 years old – approaching or exceeding their expected lifespans.

Replacing them has become costly and time-consuming, with utilities reporting that transformers cost four to six times what they cost before 2022, in addition to the multiyear wait times.

To meet rising electricity demand, the country will need many more of them – perhaps twice as many as already exist.

A person drives a forklift near a group of large metal canisters.
Even smaller transformers like these are in high demand and short supply.
AP Photo/Mel Evans

The North American Electric Reliability Corporation says the lead time, the wait between placing an order and the product being delivered, hit roughly 120 weeks – more than two years – in 2024, with large power transformers taking as long as 210 weeks – up to four years. Even smaller transformers used to reduce voltage for distribution to homes and businesses are back-ordered as much as two years. Those delays have slowed both maintenance and new construction across much of the grid.

Transformer production depends heavily on a handful of materials and suppliers. The cores of most U.S. transformers use grain-oriented electrical steel, a special type of steel with particular magnetic properties, which is made domestically only by Cleveland-Cliffs at plants in Pennsylvania and Ohio. Imports have long filled the gap: Roughly 80% of large transformers have historically been imported from Mexico, China and Thailand. But global demand has also surged, tightening access to steel, as well as copper, a soft metal that conducts electricity well and is crucial in wiring.

In partial recognition of these shortages, in April 2024, the U.S. Department of Energy delayed the enforcement of new energy-efficiency rules for transformers, to avoid making the situation worse.

Further slowing progress, these items cannot be mass-produced. They must be designed, tested and certified individually.

Even when units are built, getting them to where they are needed can be a feat. Large power transformers often weigh between 100 tons and 400 tons and require specialized transport – sometimes needing one of only about 10 suitable super-heavy-load railcars in the country. Those logistics alone can add months to a replacement project, according to the Department of Energy.

A massive railcar carries a large metal box.
Enormous railcars like this one in Germany are often needed to transport high-voltage transformers from where they are manufactured to where they are used.
Raimond Spekking via Wikimedia Commons, CC BY-SA

Other key equipment

Transformers are not the only grid machinery facing delays. A Duke University Nicholas Institute study, citing data from research and consulting firm Wood Mackenzie, shows that high-voltage circuit-breaker lead times reached about 151 weeks – nearly three years – by late 2023, roughly double pre-pandemic norms.

Facing similar delays are a range of equipment types, such as transmission cables that can handle high voltages, switchgear – a technical category that includes switches, circuit breakers and fuses – and insulators to keep electricity from going where it would be dangerous.

For transmission projects, equipment delays can derail timelines. High-voltage direct-current cables now take more than 24 months to procure, and offshore wind projects are particularly strained: Orders for undersea cables can take more than a decade to fill. And fewer than 50 cable-laying vessels operate worldwide, limiting how quickly manufacturers can install them, even once they are manufactured.

Supply-chain strains are hitting even the workhorse of the power grid: natural gas turbines. Manufacturers including Siemens Energy and GE Vernova have multiyear backlogs as new data centers, industrial electrification and peaking-capacity projects flood the order books. Siemens recently reported a record US$158 billion backlog, with some turbine frames sold out for as long as seven years.

A large industrial building.
The Cleveland-Cliffs steelworks in Ohio makes a specialized type of steel that is crucial for making transformers.
AP Photo/Sue Ogrocki

Alternate approaches

As a result of these delays, utility companies are finding other ways to meet demand, such as battery storage, actively managing electricity demand, upgrading existing equipment to produce more power, or even reviving decommissioned generation sites.

Some utilities are stockpiling materials for their own use or to sell to other companies, which can shrink delays from years to weeks.

There have been various other efforts, too. In addition to delaying transformer efficiency requirements, the Biden administration awarded Cleveland-Cliffs $500 million to upgrade its electrical-steel plants – but key elements of that grant were canceled by the Trump administration.

Utilities and industry groups are exploring standardized designs and modular substations to cut lead times – but acknowledge that those are medium-term fixes, not quick solutions.

Large government incentives, including grants, loans and guaranteed-purchase agreements, could help expand domestic production of these materials and supplies. But for now, the numbers remain stark: roughly 120 weeks for transformers, up to four years for large units, nearly three years for breakers and more than two years for high-voltage cable manufacturing. Until the underlying supply-chain choke points – steel, copper, insulation materials and heavy transport – expand meaningfully, utilities are managing reliability not through construction, but through choreography.

The Conversation

Kyri Baker receives funding from the U.S. Department of Energy, the National Science Foundation, and The Climate Innovation Collaboratory. She is a visiting researcher at Google DeepMind. The views and opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the author’s employer or any affiliated organizations.

Morgan Bazilian does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Supply-chain delays, rising equipment prices threaten electricity grid – https://theconversation.com/supply-chain-delays-rising-equipment-prices-threaten-electricity-grid-269448

Winters could disappear from the Great Lakes region

Source: The Conversation – in French – By Marguerite Xenopoulos, Professor and Canada Research Chair in Global Change of Freshwater Ecosystems, Trent University

Fifty years ago, winter didn’t just visit the Great Lakes; it settled in. Blink too slowly and your eyelashes froze. After a January snowstorm on the shore of Lake Superior, everything was white and still except the lake. The wind had swept it clean, revealing cracks in the creaking ice.

At Christmas, Saginaw Bay on Lake Huron is usually frozen, with ice thick enough for trucks to drive on. Ice-fishing shanties dot the horizon like little wooden towns. People haul out their augers and bait before dawn, their thermoses of black coffee steaming in the cold.

In the winter of 2019-2020, the ice never formed.

The damp, gray air sat just above freezing. The ground was muddy. Children tried to sled on dry grass. Shanty-rental businesses stayed closed, and residents wondered whether this was the new face of winter.

The environmental and social consequences of winter warming are affecting lakes around the world. Despite these clear signs, most observation of the Great Lakes takes place during warm, calm periods.

As professors who specialize in winter research and as members of the International Joint Commission’s Great Lakes Science Advisory Board, we have developed evidence-based recommendations for policymakers in Canada and the United States on water-quality priorities and coordination. To strengthen international cooperation, we recommend establishing winter monitoring to better understand the factors affecting the lakes.






Winter warming syndrome

The Great Lakes region is affected by “winter warming syndrome,” characterized by rising surface-water temperatures, particularly during the cold season.

Winters there are becoming warmer and wetter, and the annual maximum ice cover is shrinking considerably. Winter conditions are also growing shorter, declining by about two weeks per decade since 1995.

In the Great Lakes region, businesses, tourists and some 35 million residents feel the effects of winter warming year-round. Seasonal shifts increase nutrient runoff, fueling algal blooms that spoil summer days at the beach.

Changing food webs affect commercially and culturally important species such as lake whitefish. Shrinking ice cover makes recreation and transportation less safe, transforming the region’s identity and culture.

Winter, the least studied season

We risk losing winter in the Great Lakes region before we fully understand its influence on the ecosystem and local communities. Our analysis of recent publications shows that winter remains understudied.

Researchers have limited knowledge of the physical, biological and biogeochemical processes at play. Changes to these processes can affect water quality, the ecosystem, human health, and the social, cultural and economic well-being of the region. Yet these phenomena are hard to understand without the necessary data.

Under the Great Lakes Water Quality Agreement, Canadian and U.S. agencies monitor health indicators and water quality. The agreement sets objectives for Great Lakes water quality, including that the water be drinkable and safe for recreation and for eating fish and wildlife. Current efforts, however, focus on the warm months.






Extending research into winter would fill important data gaps. One-off studies have already shown that winter warrants systematic monitoring. In 2022, a dozen Canadian and U.S. universities and agencies collected samples under the ice across the basin as part of the Great Lakes Winter Grab project.

Teams traveled on foot or by snowmobile and drilled through the ice to gather information on lake life and water quality in all five Great Lakes.

The result was a Great Lakes winter network of academic and government researchers, created to better understand how quickly winter conditions are changing and to improve data sharing, resource coordination and knowledge exchange.

A series of images showing the extent of winter ice cover on the Great Lakes.
Maximum ice cover on the Great Lakes from 1973 to 2025. Although there is significant year-to-year variation, cover has declined by about 0.5% per year since 1973.
(NOAA Great Lakes Environmental Research Laboratory)

Impacts on communities

Warmer winters lead to more drownings because of unstable ice. Increased nutrient runoff fuels harmful algal blooms and complicates drinking-water treatment.

Reduced ice cover can extend the shipping season, but it harms the US$5.1 billion fishing industry by altering habitats, increasing invasive species and degrading water quality.






Winter also shapes cultural identity and recreation. From snowshoeing to skating on frozen lakes, winter sports leave residents and tourists with fond memories. The disappearance of these activities could erode community bonds, traditions and livelihoods.

Changing winter conditions also threaten the traditions and cultural practices of Indigenous peoples. For many, the connection to their ancestral lands is expressed through hunting, fishing, gathering and agriculture.

Less total snowfall and more frequent freeze-thaw cycles lead to nutrient losses from the soil and can shift the seasonal timing and availability of culturally important plant species. Unstable ice limits opportunities for fishing and for passing on skills, language and cultural practices to future generations.

A man in winter clothing standing on a frozen lake with instruments for collecting samples.
Samples are collected on Lake Erie to study winter conditions. This research was conducted as part of the Great Lakes Winter Grab project in 2022.
(Paul Glyshaw/NOAA)

Winter science in the Great Lakes region

Collecting data in cold weather poses logistical challenges. Scientists need specialized equipment, trained personnel and coordinated approaches to make observations safely and effectively. Expanding winter research on the Great Lakes will require more resources.

Our recent report highlights gaps in knowledge about winter processes, about the socioeconomic and cultural impacts of changing conditions, and about ways to strengthen winter science in the region.

The report also notes infrastructure limitations and recommends more training, along the lines of the 2024 Winter Limnology Network workshop, so that scientists can work safely in harsh conditions. Better data management and sharing are needed to maximize the value of the information collected.

Great Lakes winter science is growing quickly, but greater capacity and coordination are essential to keep pace with changes that affect not only ecosystems but also communities. Expanding winter science will help protect the health and well-being of the people who live, work and play in the Great Lakes basin.

The Conversation Canada

Marguerite Xenopoulos receives funding from the Canada Research Chairs program and the Natural Sciences and Engineering Research Council of Canada.

Michael R. Twiss is affiliated with the International Association for Great Lakes Research.

ref. Winters could disappear from the Great Lakes region – https://theconversation.com/les-hivers-pourraient-disparaitre-de-la-region-des-grands-lacs-267790

Recent studies prove the ancient practice of nasal irrigation is effective at fighting the common cold

Source: The Conversation – USA (3) – By Mary J. Scourboutakos, Adjunct Assistant Professor in Family and Community Medicine, Macon & Joan Brock Virginia Health Sciences at Old Dominion University

Nasal irrigation can help shorten the duration of the common cold. SimpleImages/Moment via Getty Images

It starts with a slight scratchiness at the back of your throat.

Then, a sneeze.

Then coughing, sniffling and full-on congestion, with or without fever, for a few insufferable days.

Viral upper respiratory tract infections – also known as the common cold – afflict everyone, typically three times per year, lasting, on average, nine days.

Colds don’t respond to antibiotics, and most over-the-counter medications deliver modest results at best.

In recent years, research has emerged demonstrating the effectiveness of the ancient practice of nasal saline irrigation in fighting the common cold in both adults and children.

Not only does nasal saline irrigation decrease the duration of illness, it also reduces viral transmission to other people, minimizes the need for antibiotics and could even lower a patient’s risk of hospitalization. Better yet, it costs pennies and doesn’t require a prescription.

I’m both an adjunct assistant professor of medicine and a practicing physician. As a family doctor, I see the common cold every day. My patients are usually skeptical when I first recommend nasal saline irrigation. However, they frequently return to tell me that this practice has changed their life. Not only does it help with upper respiratory viruses, but it also helps manage allergies, chronic congestion, postnasal drip and recurrent sinus infections.

What is nasal saline irrigation?

Nasal saline irrigation is a process by which the nasal cavity is bathed in a saltwater solution. In some studies, this is accomplished using a pump-action spray bottle.

In others, participants used a traditional neti pot, which is a vessel resembling a teapot.

This practice of nasal irrigation originated in the Ayurvedic tradition, which is a system of alternative medicine from India dating back more than 5,000 years.

The neti pot can be traced back to the 15th century. It garnered mainstream interest in the U.S. in 2012 after Dr. Oz demonstrated it on the “Oprah Winfrey Show.” But it’s not the only device that has historically been employed for such purposes. Ancient Greek and Roman physicians had their own nasal lavage devices. Such practices were even discussed in medical journals such as The Lancet over a century ago, in 1902.

woman using a neti pot over a sink with water draining out her nostril
A neti pot is one tool for irrigating your nasal passages.
swissmediavision/E+ via Getty Images

How does nasal saline irrigation work?

Nasal saline has a few key benefits. First, it physically flushes debris out of the nasal passage. This not only includes mucus and crust, but also the virus itself, along with allergens and other environmental contaminants.

Second, salt water is slightly lower on the pH scale compared with fresh water. Its acidity creates an environment that is inhospitable for viruses and makes it harder for them to replicate.

Third, nasal saline helps restore the actions of part of our natural defense system, which is composed of microscopic, hairlike projections called cilia that line the surface of the nasal passage. These cilia beat in a coordinated fashion to act like an escalator, propelling viruses and other foreign particles out of the body. Nasal saline irrigation helps keep this system running effectively.

What the research shows

A study of more than 11,000 people published in The Lancet in 2024 demonstrated that nasal saline irrigation, initiated at the first sign of symptoms and performed up to six times per day, reduced the duration of symptomatic illness by approximately two days. Meanwhile, smaller studies have reported that the reduced duration of illness could be as high as four days.

Research has also demonstrated that nasal saline irrigation can help prevent the spread of illness. A study in hospitalized patients showed that after detection of COVID-19 via nasal swab, nasal saline irrigation performed every four hours over a 16-hour period decreased COVID-19 viral load by 8.9%. Meanwhile, the viral load in the control group continued to increase during that time.

The benefits of nasal saline also extend beyond acute infectious illnesses. When performed regularly by patients with allergic rhinitis, also known as hay fever, a meta-analysis of 10 randomized controlled trials showed that nasal saline irrigation can enable a 62% reduction in the use of allergy medications. It’s also effective for chronic congestion, postnasal drip and recurrent sinus infections.

Why it matters

Besides helping patients feel better faster, one of the most valuable benefits of nasal saline irrigation is that its use can help decrease unnecessary antibiotic prescriptions, which are a major contributor to antibiotic resistance.

It is well established that antibiotics do not shorten the duration or reduce the severity of respiratory tract infections. Despite this, studies have shown that patients are happier when they leave their doctor’s office with an antibiotic prescription in hand.

This may be why 10 million inappropriate antibiotic prescriptions are given each year for viral respiratory tract infections. In one study of more than 49,000 patient encounters for respiratory infections, antibiotics were unnecessarily prescribed to 42.4% of patients.

One reason patients with upper respiratory viral infections tend to initially feel better with antibiotics is their off-target, anti-inflammatory properties. However, this benefit can be better achieved with anti-inflammatory medications such as ibuprofen or naproxen, which can be taken in conjunction with nasal saline irrigation.

Overall, nasal saline irrigation is a cheap, effective, evidence-based alternative that will not only shorten the duration of illness but also prevent its spread, minimize the need for unnecessary antibiotics and keep people out of the hospital.

How to do it

Irrigating your nasal passages as soon as you feel the first signs of illness has been shown to reduce the duration and severity of the common cold.

For those who want to try it, you don’t need anything fancy. Even a neti pot is not necessary. Many pharmacies sell salt water in a container with a nozzle and even spray bottles that can be refilled with a homemade saltwater solution.

You’ll mix approximately half a teaspoon of non-iodized salt with 1 cup of water. It’s important for your safety that the water be either distilled water or boiled for at least five minutes and then cooled to destroy any harmful bacteria. You can also add a pinch of baking soda to reduce any potential sting.
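If you want to mix a larger batch for a refillable bottle, the ratio above scales by simple proportion. Here is a minimal sketch, assuming 1 US cup is roughly 240 milliliters; the helper function is hypothetical, not from the article:

```python
# Hypothetical helper: scale the saline recipe above
# (~1/2 teaspoon of non-iodized salt per 1 cup of water).
CUP_ML = 240.0           # 1 US cup in milliliters (approximate)
SALT_TSP_PER_CUP = 0.5   # the ratio given above

def salt_teaspoons(water_ml: float) -> float:
    """Teaspoons of salt for a given volume of distilled or boiled-and-cooled water."""
    return water_ml / CUP_ML * SALT_TSP_PER_CUP

print(salt_teaspoons(240))  # one cup -> 0.5 tsp
print(salt_teaspoons(720))  # three cups -> 1.5 tsp
```

Whatever the batch size, the safety note above still applies: use distilled or boiled-and-cooled water.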

Note that saltier solutions are not more effective. However, some studies have suggested natural seawater, due to its additional minerals such as magnesium, potassium and calcium, could offer even greater benefits. Such seawater-based solutions can also be purchased commercially, which might be worth a try for those with an insufficient response to plain saline.

You can use nasal saline irrigation after any potential exposure to an infectious illness. For best results, you’ll want to start irrigating the nasal passage at the first sign of an infection. You can repeat rinses throughout the day as often as needed for the duration of the illness. At minimum, you’ll want to irrigate the nasal passages every morning and evening. You can also consider gargling salt water as an adjunctive therapy.

The Conversation

Mary J. Scourboutakos does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Recent studies prove the ancient practice of nasal irrigation is effective at fighting the common cold – https://theconversation.com/recent-studies-prove-the-ancient-practice-of-nasal-irrigation-is-effective-at-fighting-the-common-cold-266659

SNAP benefits have been cut and disrupted – causing more kids to go without enough healthy food and harming child development

Source: The Conversation – USA (2) – By Jenalee Doom, Associate Professor of Psychology, University of Denver

Being able to buy nutritious groceries is essential for your family’s health. Spencer Platt/Getty Images

About 4 in 10 of the more than 42 million Americans who get Supplemental Nutrition Assistance Program benefits are children under 18. This food aid helps their families buy groceries and boosts their health in many ways – both during childhood and once they’re adults.

I am a developmental psychologist who studies how stress and nutrition affect kids’ mental and physical health during childhood, and how those effects continue once they become adults.

Researchers like me are worried that the SNAP benefits disruption caused by the 2025 government shutdown and the SNAP cuts included in the big tax-and-spending package President Donald Trump signed into law on July 4 will make even more children experience high levels of stress and will prevent millions of kids from accessing a steady diet of nutritious food.

Food insecurity can harm kids – even before they’re born

Food insecurity is the technical term for when people lack consistent access to enough nutritious food.

In childhood, it’s associated with worse physical health, including an elevated risk of getting asthma and other chronic illnesses.

It is also tied to a higher risk of child obesity. It seems counterintuitive that lower food access is associated with greater obesity risk. One explanation is that not having access to enough nutritious food may lead people to eat a higher-fat, higher-sugar diet that includes food that’s cheap and filling but may cause them to gain weight.

Even temporary disruptions to the disbursement of SNAP benefits can harm American kids. While the effects of brief food shortages can be hard to measure, a study on a temporary food shortage in Kenya suggests that even short-term food shortages can influence both parents and their kids for a long time.

And SNAP spending cuts, including those in what Trump called his “big beautiful bill,” are bound to hurt many children whose families were relying on SNAP to get enough to eat and are now losing their benefits.

A study by researchers from Northwestern and Princeton universities, published in 2025, followed more than 1,000 U.S. children into adulthood. It showed that food insecurity in early childhood predicted higher cardiovascular risks in adulthood. But those researchers also found that SNAP benefits could reduce cardiovascular risks later in life for kids facing food insecurity.

Food insecurity in pregnancy is dangerous too, and not just for mothers. It also poses risks to their babies.

Another study published in 2025 reviewed the medical records of over 19,000 pregnant U.S. women. It found that pregnant women who experience food insecurity are more likely to have pregnancy-related complications, such as giving birth weeks or months before their due date, developing gestational diabetes or spending extra time in the hospital, with their baby requiring a stay in a neonatal intensive care unit.

This same study found that when pregnant women received SNAP benefits and other forms of government food assistance, they were largely protected from these risks tied to food insecurity.

Food insecurity harms children’s mental health

A 2021 analysis of more than 100,000 U.S. children led by researchers at the University of California, Berkeley, and Kaiser Permanente showed that when kids experienced food insecurity sometimes or often in a 12-month period, they ran a 50% greater risk of anxiety or depression compared to kids who didn’t.

Food insecurity in childhood is also associated with more behavior problems and worse academic performance. These mental health, academic and behavioral problems in childhood can put people on a path toward poorer health and fewer job opportunities later on.

Children and babies experiencing food insecurity are more likely to have nutrient deficiencies, including insufficient iron. A review of decades of research that I participated in found that iron deficiency during infancy and early childhood, when the brain is developing quickly, can cause lasting harm.

Other research projects I’ve taken part in have found that iron deficiency in infancy is associated with cognitive deficits, not getting a high school diploma or going to college, and mental health problems later on.

Food insecurity is often one of many sources of stress kids face

If a child is experiencing food insecurity, they are often dealing with other types of stress at the same time. Food insecurity is more common for children experiencing poverty and homelessness. It’s also common for kids with little access to health care.

Research from the group I lead, as well as work by other researchers, has found that experiencing multiple sources of stress in childhood can harm mental and physical health, including how bodies manage stress. These different sources of stress often pile up, contributing to health problems.

Parents experiencing food insecurity often get stressed out because they’re scrambling to get enough food for their children. And when parents are stressed, they become more susceptible to mental health problems and may become more likely to lose their tempers or be physically aggressive with their kids.

In turn, when parents are stressed out, have mental health problems or develop harsh parenting styles, it’s bad for their kids.

SNAP falls short, even in normal times

To be sure, even before the 2025 government shutdown disrupted SNAP funding, its benefits didn’t cover the full cost of feeding most families.

Because they fell short of what was necessary to prevent food insecurity, many families with SNAP benefits needed to regularly visit food pantries and food banks – especially toward the end of the month once their benefits had been spent.

A grocer in my rural hometown in South Dakota posted on Facebook in November 2025 about the effects of food insecurity on families that he regularly sees. He explained that he keeps his stores open after midnight on SNAP disbursement days. Many of his customers, he said, are in a rush to get their “first real food in days.”

The Conversation

Jenalee Doom receives research funding from the National Institutes of Health.

ref. SNAP benefits have been cut and disrupted – causing more kids to go without enough healthy food and harming child development – https://theconversation.com/snap-benefits-have-been-cut-and-disrupted-causing-more-kids-to-go-without-enough-healthy-food-and-harming-child-development-269362

Hybrid workers are putting in 90 fewer minutes of work on Fridays – and an overall shift toward custom schedules could be undercutting collaboration

Source: The Conversation – USA (2) – By Christos Makridis, Associate Research Professor of Information Systems, Arizona State University; Institute for Humane Studies

It gets lonely if you stick around an office until late afternoon on Fridays. Dimitri Otis/Stone via Getty Images

Do your office, inbox and calendar feel like a ghost town on Friday afternoons? You’re not alone.

I’m a labor economist who studies how technology and organizational change affect productivity and well-being. In a study published in an August 2025 working paper, I found that the way people allocate their time to work has changed profoundly since the COVID-19 pandemic began.

For example, among professionals in occupations that can be done remotely, 35% to 40% worked remotely on Thursdays and Fridays in 2024, compared with only 15% in 2019. On Mondays, Tuesdays and Wednesdays, nearly 30% worked remotely, versus 10% to 15% five years earlier.

And white-collar employees have also become more likely to log off from work early on Fridays. They’re starting the weekend sooner than before the pandemic, whether working at an office or remotely as the workweek comes to a close. Why is that happening? I suspect that remote work has blurred the boundary between the workweek and the weekend – especially when employees aren’t working at the office.

The changing rhythm of work

The American Time Use Survey, which the U.S. Labor Department’s Bureau of Labor Statistics conducts annually, asks thousands of Americans to recount how they spent the previous day, minute by minute. It tracks how long they spend working, commuting, doing housework and caregiving.

Because these diaries cover both weekdays and weekends, and include information about whether respondents could work remotely, this survey offers the most detailed picture available of how the rhythms of work and life are changing. This data also allows me to see where people conduct each activity, making it possible to estimate the share of time American professionals spend working from home.

When I examined how the typical workday changed between 2019 and 2024, I saw dramatic shifts in where, when and how people worked throughout that period.

Millions of professionals who had never worked remotely suddenly did so full time at the height of the pandemic. Hybrid arrangements have since become common; many employees spend two or three days a week at home and the rest in the office.

I found another change: From 2019 to 2024, the average number of minutes worked on Fridays fell by about 90 minutes in jobs that can be done from home. That change accounts for other factors, such as a professional’s age, education and occupation.

The decline for employees with jobs that are harder to do remotely was much smaller.

Even if you just look at the raw data, U.S. employees with the potential to work remotely were working about 7½ hours per weekday on average in 2024, down about 13 minutes from 2019. These averages mask substantial variation between those with jobs that can more easily be done remotely and those who must report to the office most of the time.

For example, workers in the more remote-intensive jobs spent 7 hours, 6 minutes working on Fridays in 2024, down from 8 hours, 24 minutes in 2019.

That means I found, looking at the raw data, that Americans were working 78 fewer minutes on Fridays in 2024 than five years earlier. And controlling for other factors (e.g., demographics), this is actually an even larger 90-minute difference for employees who can do their jobs remotely.

In contrast, those employees were working longer hours on Wednesdays. They worked 8 hours, 24 minutes on Wednesdays in 2024, half an hour more than the 7 hours, 54 minutes logged on that day of the week in 2019. Clearly, there’s a shift from some Friday hours, with employees making up the bulk of the difference on other weekdays.
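The raw-data differences quoted above are easy to verify by hand; this small sketch simply redoes the arithmetic with the article's own hours-and-minutes figures:

```python
# Re-derive the raw-data differences from the hours-and-minutes figures above.
def to_minutes(hours: int, minutes: int) -> int:
    return hours * 60 + minutes

friday_2019 = to_minutes(8, 24)      # Friday workday, 2019
friday_2024 = to_minutes(7, 6)       # Friday workday, 2024
wednesday_2019 = to_minutes(7, 54)   # Wednesday workday, 2019
wednesday_2024 = to_minutes(8, 24)   # Wednesday workday, 2024

print(friday_2019 - friday_2024)        # 78 fewer Friday minutes
print(wednesday_2024 - wednesday_2019)  # 30 more Wednesday minutes
```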

Fridays have long been a little different

Although employees are shifting some of this skipped work time to other days of the week, most of the reduction – whether at the office or at home – has gone to leisure.

To be sure, Fridays have always been a little different than other weekdays. Many bosses allowed their staff to dress more casually on Fridays and permitted people to depart early, long before the pandemic began. But the ability to work remotely has evidently amplified that tendency.

This informal easing into the weekend, once confined to office norms, can be a morale booster. But as it has expanded, it’s become more individualized through remote and hybrid arrangements.

Those workers in remote-intensive occupations who are single, young or male reduced their working hours across the board the most, relative to 2019, although their time on the job increased a bit in 2024.

Pencils on a desk spell out TGIF, an abbreviation of thank God it's Friday.
Office workers have always been eager to get started with their weekends.
Epoxydude/fStop via Getty Images

The benefits and limits of flexibility

There are a few causal studies on the effects of remote work on productivity and well-being in the workplace, including some in which I participated. A general takeaway is that people tend to spend less time collaborating and more time on independent tasks when they work remotely.

That’s fine for some professions, but in roles that depend on frequent coordination, that pattern can complicate communication or weaken team cohesion. Colocation – being physically present with your colleagues – does matter for some types of tasks.

But even if productivity doesn’t necessarily suffer, every hour of unscheduled, independent work can be an hour not spent in coordinated effort with colleagues. That means what happens when people clock out or log off early on a Friday – whether at home or at their office – depends on the nature of their work.

In occupations that require continuous handoffs – such as journalism, health care or customer service – staggered schedules can actually improve efficiency by spreading coverage across more hours in the day.

But for employees in project-based or collaborative roles that depend on overlapping hours for brainstorming, review or decision-making, uneven schedules can create friction. When colleagues are rarely online at the same time, small delays can compound and slow collective progress.

The problem arises when flexible work becomes so individualized that it erodes shared rhythms altogether. The time-use data I analyzed suggests that remote-capable employees now spread their work more unevenly across the week, with less overlap in real time.

Eventually, that can make it harder to sustain the informal interactions and team cohesion that once happened organically when everyone left the office together at the end of the week. As some of my other research has shown, that also can reduce job satisfaction and increase turnover in jobs requiring greater coordination.

Businesswoman interacts with teammates in a meeting at their office.
For many professions, team interaction is easier to have when people work at an office.
Morsa Images/DigitalVision via Getty Images

The future of work

To be sure, allowing employees to do remote work and have some scheduling flexibility on any day of the week isn’t necessarily bad for business.

The benefits – in terms of work-life balance, autonomy, recruitment and reducing turnover – can be very real.

Flexible and remote arrangements expand the pool of potential applicants by freeing employers from strict geographic limits. A company based in Chicago can now hire a software engineer in Boise or a designer in Atlanta without requiring relocation.

This wider reach increases the supply of qualified candidates. It can – particularly in jobs requiring more coordination – also improve retention by allowing employees to adjust their work schedules around family or personal needs rather than having to choose between relocating or quitting.

What’s more, many women who might have had to exit the labor force altogether when they became parents have been able to remain employed, at least on a part-time basis.

But in my view, the erosion of Fridays may go beyond what began as an informal tradition – leaving the office early before the weekend begins. It is part of a broader shift toward individualized schedules that expand autonomy but reduce shared time for coordination.

The Conversation

Christos Makridis is also a senior researcher for Gallup, and provides economics research counsel for think tanks.

ref. Hybrid workers are putting in 90 fewer minutes of work on Fridays – and an overall shift toward custom schedules could be undercutting collaboration – https://theconversation.com/hybrid-workers-are-putting-in-90-fewer-minutes-of-work-on-fridays-and-an-overall-shift-toward-custom-schedules-could-be-undercutting-collaboration-267921

Can the world quit coal?

Source: The Conversation – USA (2) – By Stacy D. VanDeveer, Professor of Global Governance & Human Security, UMass Boston

A fisherman looks at the Suralaya coal-fired power plant in Cilegon, Indonesia, in 2023. Ronald Siagian/AFP via Getty Images

As world leaders and thousands of researchers, activists and lobbyists meet in Brazil at the 30th annual United Nations climate conference, there is plenty of frustration that the world isn’t making progress on climate change fast enough.

Globally, greenhouse gas emissions and global temperatures continue to rise. In the U.S., the Trump administration, which didn’t send an official delegation to the climate talks, is rolling back environmental and energy regulations and pressuring other countries to boost their use of fossil fuels – the leading driver of climate change.

Coal use is also rising, particularly in India and China. And debates rage about justice and the future for coal-dependent communities as coal burning and coal mining end.

But underneath the bad news is a set of complex, contradictory and sometimes hopeful developments.

The problem with coal

Coal is the dirtiest source of fossil fuel energy and a major contributor of greenhouse gas emissions, making it bad not just for the climate but also for human health. That makes it a good target for cutting global emissions.

A swift drop in coal use is the main reason U.S. greenhouse gas emissions fell in recent years as natural gas and renewable energy became cheaper.

Today, nearly a third of all countries worldwide have pledged to phase out their unabated coal-burning power plants in the coming years, including several countries you might not expect. Germany, Spain, Malaysia, the Czech Republic – all have substantial coal reserves and coal use today, yet they are among the more than 60 countries that have joined the Powering Past Coal Alliance and set phase-out deadlines between 2025 and 2040.

Several governments in the European Union and Latin America are now coal phase-out leaders, and EU greenhouse gas emissions continue to fall.

Progress, and challenges ahead

So, where do things stand for phasing out coal burning globally? The picture is mixed. For example:

  • The accelerating deployment of renewable energy, energy storage, electric vehicles and energy efficiency globally offers hope that global emissions are on their way to peaking. More than 90% of the new electricity capacity installed worldwide in 2024 came from clean energy sources. However, energy demand is also growing quickly, so new renewable power does not always replace older fossil fuel plants or prevent new ones, including coal.

  • China now burns more coal than the rest of the world combined, and it continues to build new coal plants. But China is also a driving force in the dramatic growth in solar and wind energy investments and electricity generation inside China and around the world. As the industry leader in renewable energy technology, it has a strong economic interest in solar and wind power’s success around the world.

  • While climate policies that can reduce coal use face backlash politics and policy rollbacks in the U.S. and several European democracies, many other governments around the world continue to enact and implement cleaner energy and emissions reduction policies.

Phasing out coal isn’t easy, or happening as quickly as studies show is needed to slow climate change.

To meet the 2015 Paris Agreement’s goals of limiting global warming to well under 2 degrees Celsius (3.6 Fahrenheit) compared to pre-industrial times, research shows that the world will need to rapidly reduce nearly all fossil fuel burning and associated emissions – and it is not close to being on track.

Ensuring a just transition for coal communities

Many countries with coal mining operations worry about the transition for coal-dependent communities as mines shut down and jobs disappear.

No one wants a repeat of then-Prime Minister Margaret Thatcher’s destruction of British coal communities in the 1980s in her effort to break the mineworkers union. Mines rapidly closed, and many coal communities and regions were left languishing in economic and social decline for decades.

Two men put coal chunks into a sack with a power plant in the background.
Two men collect coal for cooking outside the Komati Power Station, where they used to work, in 2024, in Komati, South Africa. Both lost their jobs when Eskom closed the power plant in 2022 under international pressure to cut emissions.
Per-Anders Pettersson/Getty Images

Other regions have also struggled as coal facilities shut down.

But as more countries phase out coal, they offer examples of how to ensure coal-dependent workers, communities, regions and entire countries benefit from a just transition to a coal-free system.

At local and national levels, research shows that careful planning, grid updates and reliable financing schemes, worker retraining, small-business development and public funding of coal worker pensions and community and infrastructure investments can help set coal communities on a path for prosperity.

A fossil fuel nonproliferation treaty?

At the global climate talks, several groups, including the Powering Past Coal Alliance and an affiliated Coal Transition Commission, have been pushing for a fossil fuel nonproliferation treaty. It would legally bind governments to a ban on new fossil fuel expansion and eventually eliminate fossil fuel use.

The world has affordable renewable energy technologies with which to replace coal-fired electricity generation – solar and wind are cheaper than fossil fuels in most places. There are still challenges with the transition, but also clear ways forward. Removing political and regulatory obstacles to building renewable energy generation and transmission lines, boosting production of renewable energy equipment, and helping low-income countries manage the upfront cost with more affordable financing can help expand those technologies more widely around the world.

Shifting to renewable energy also has added benefits: It’s much less harmful to the health of those who live and work nearby than mining and burning coal is.

So can the world quit coal? Yes, I believe we can. Or, as Brazilians say, “Sim, nós podemos.”

The Conversation

Stacy D. VanDeveer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Can the world quit coal? – https://theconversation.com/can-the-world-quit-coal-269772

When AI becomes the consumer

Source: The Conversation – in French – By Sylvie-Eléonore Rolland, Senior Lecturer (maître de conférences), Université Paris Dauphine – PSL

Artificial intelligence (AI) no longer merely guides our choices: it anticipates our needs and acts in our place. As it orchestrates decisions and transactions, is it becoming a consuming entity in its own right? And what becomes of our free will as consumers in a market steered by algorithms?

This article is published as part of the Dauphine Digital Days, of which The Conversation France is a partner.

While classical models of consumer behavior rest on intention, preference and choice, the automation introduced by artificial intelligence (AI) is profoundly transforming the decision-making chain. By inserting itself into the stages of need recognition, evaluation of alternatives and purchase, AI no longer merely guides consumers: it acts.

This shift challenges the theoretical framework of “consumer agency,” the idea that consumers have the capacity to act intentionally, make choices and influence their own lives and environment. This gradual transfer of decision-making power raises questions about the very nature of the act of consuming.

Can AI be considered a consumption actor in its own right? Are we still masters of our choices, or have we become the passive recipients of an autonomous market system shaped by artificial intelligence?

Personalizing the experience

Predictive algorithms, programs that anticipate future outcomes from past data, are now central actors in the digital environment, present on platforms such as Netflix, Amazon, TikTok and Spotify. Designed to analyze user behavior, these systems aim to personalize the experience by recommending content and products tailored to individual preferences. By cutting search time and improving the relevance of recommendations, they promise optimized assistance.

Yet this personalization raises a central question: do these algorithms improve access to relevant content and products, or do they contribute to a gradual confinement within pre-established consumption habits?






By favoring content similar to what users have already viewed, recommendation systems tend to reinforce preexisting preferences while narrowing the diversity of what users are exposed to. This phenomenon, known as the “filter bubble,” limits exposure to new perspectives and contributes to a homogenization of consumption experiences.

Users thus find themselves progressively locked into an environment shaped by their past interactions, at the expense of the free, serendipitous exploration of old-fashioned window shopping.

AI’s gradual encroachment

This slide calls into question the balance between artificial intelligence as an assistive tool and its alienating potential, since freedom of choice and decision-making autonomy are fundamental dimensions of psychological well-being and identity formation.

It also raises major ethical issues: to what extent is a consumption experience still genuinely chosen when it is steered, or even imposed, by algorithms, often without consumers’ knowledge, particularly for those with limited digital literacy?

When algorithms become advertising’s target

The optimization of online ads and posts increasingly hinges on criteria imposed by the platforms.

This trend is especially visible on platforms like YouTube, where videos systematically adopt optimized visual codes: expressive faces, large fonts, bright colors. This format does not stem from viewers’ spontaneous preferences but from algorithmic choices that favor these elements to maximize click-through rates.

Similarly, on social media, posts adopt specific structures, short sentences and engaging anecdotes, as on X, where users condense their messages into punchy formulas to maximize retweets. This is not necessarily meant to improve the reading experience; it answers the visibility criteria imposed by the platform’s algorithm.

Advertisers’ goal, then, is no longer simply to appeal to a human audience but primarily to optimize the distribution of their content according to algorithmic imperatives. This dynamic leads to a homogenization of advertising messages, in which innovation and authenticity fade in favor of standardized output tailored to the logic of the algorithms.

Shaping consumer preferences

Are these dominant formats imposed solely by the algorithms, or do they reflect consumer expectations? Since algorithms are designed to maximize engagement, they necessarily draw in part on users’ behaviors and preferences. The real question may lie in how algorithms, through repeated exposure, shape our own preferences, to the point of redefining what we perceive as attractive or relevant.

The evolution of artificial intelligence has given rise to autonomous purchasing systems, which make buying decisions entirely on their own. These systems rely on two types of intelligent agents: vertical agents and horizontal agents.

Vertical agents are AIs specialized in particular domains. They optimize purchasing by analyzing specific needs. For example, “smart” refrigerators scan their contents, identify missing products and place orders automatically, before consumers even decide to order anything themselves.

Horizontal agents, for their part, coordinate several purchasing domains. Assistants such as Alexa and Google Assistant analyze needs in food, mobility and entertainment to propose integrated, coherent consumption. Multi-agent interaction thus increases the autonomy of purchasing systems.

Vertical agents ensure precision and purchase optimization, while horizontal agents guarantee the coherence of decisions at a global level. This synergy prefigures a future in which consumption becomes fully or partly automated and predictive. Gradually, we no longer decide when to buy, or even what to buy: these autonomous systems act for us, whether for our benefit or to our detriment.


Qui est le principal agent de décision ?

L’accès à l’information et l’instantanéité offertes par l’IA aurait fait de nous des consommateurs « augmentés ». Pourtant, son évolution rapide soulève désormais une question fondamentale : sommes-nous encore les véritables décideurs de notre consommation, ou sommes-nous progressivement relégués à un rôle passif ? L’IA ne se limite plus à nous assister ; elle structure désormais un écosystème au sein duquel nos décisions tendent à être préprogrammées par des algorithmes, dans une logique d’optimisation.

This dynamic raises deep questions about the future of consumption: is AI on its way to becoming the true consumer, with humans merely following a predefined flow? Are we witnessing the emergence of a market in which interactions between artificial intelligences supplant those between individuals?

The future of free will

While these technologies offer undeniable convenience, they also raise the question of what becomes of our free will and autonomy as consumers, citizens and human beings. Are we not, then, on the eve of a revolution in which the human, a passive consumer, fades away in favor of an economy steered by autonomous intelligent consumption systems?

Rather than a drive for total control of these technologies, which stifles innovation, it is perhaps our own autonomy that we need to rethink in light of these emerging systems. The task, following Michel Foucault's notion of "technologies of the self," is to build practices through which individuals work toward their own transformation and their emancipation from the various forms of algorithmic domination.


The Conversation

Sylvie-Eléonore Rolland does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than her research institution.

ref. Quand l’IA devient le consommateur – https://theconversation.com/quand-lia-devient-le-consommateur-269765

Trade, resilience, sustainability: the G20's recipe for Africa

Source: The Conversation – in French – By Wandile Sihlobo, Senior Fellow, Department of Agricultural Economics, Stellenbosch University

The Business 20's task force on sustainable food systems and agriculture, an advisory group to the G20, has endorsed three principles that it says will help build sustainable food systems and agriculture. These principles are increased trade, resilient supply chains and sustainable farming practices.

Agricultural economist Wandile Sihlobo explains the three principles and how African countries can put them to work.

What is global food security? How does it differ from food poverty?

Global food security is the broader concept. It aims to address challenges of access to food, nutrition, sustainability and affordability. It also seeks to strengthen cooperation between countries – notably G20 members – to reduce poverty globally, nationally and at the household level.

To achieve this, each country must adapt its agricultural policies. That means raising production, taking an environmentally sound approach and reducing barriers to trade.

Countries that do not produce enough must be able to import food at an affordable cost. That implies smoothing global logistics, removing certain tariffs and, in some cases, lifting export bans. In 2023, for example, India banned exports of non-basmati rice, which drove up world prices.

This is why I advocate an approach of "food security through trade." In a world where trade is often obstructed, this approach lowers costs and raises living standards, especially in the poorest regions, mainly the Middle East and Asia.

How can increased trade, resilient supply chains and sustainable farming practices strengthen food security?

These levers are central to cutting costs. If barriers to trade (tariffs, non-tariff barriers or export bans) are eased, it becomes easier and cheaper to move food from producing regions to consuming regions at an affordable price.

Resilient supply chains also mean that food can be produced, processed and moved to points of consumption with fewer obstacles, even amid natural disasters and conflicts.

As for sustainable farming practices, they are essential to the global food system. This does not mean abandoning improved seeds, genetic research or chemical inputs. It mainly means using them better.

I have noticed a worrying trend of activism aimed at eliminating agricultural inputs, a path that would lead to lower agricultural productivity and output, and ultimately to worsening hunger. The key lies in the safe and optimal use of these inputs.

During the recent farmers' protests in the European Union, the EU's regulatory approach to sustainable farming practices was one of the main risks farmers raised. They cited the European Green Deal, which seeks to accelerate reductions in the use of inputs such as pesticides, fertilizers and certain other chemicals that are essential to boosting production.

In my view, the G20 should guard against activist initiatives that endanger global food security.

What specific policies should countries, particularly African nations, put in place to make these principles succeed?

South Africa and the African Union, both G20 members, should promote three major agricultural interventions to implement the G20's three principles and boost food production for the benefit of the African continent.

1. Climate-smart agriculture

First, there should be a firm call to share knowledge on climate-smart farming practices. These are innovations and farming methods that minimize crop damage from climate disasters such as drought and heat waves. This matters because Africa is highly vulnerable to natural disasters.

For African agriculture to grow, governments must adopt coordinated policies on responding to disasters. These responses should include everything African countries need to mitigate climate disasters, adapt to climate change and recover quickly when disasters strike.

2. Trade reform

Second, Africa must push for reform of the global trading system and improve food security in Africa through trade. South Africa already enjoys better access to agricultural trade with several G20 economies thanks to reduced tariffs and duty-free access.

All G20 members have an interest in defending open trade between nations. It allows agricultural products to be bought and sold at lower cost, which is essential in a global context where some countries are taking a more confrontational stance on trade.

African countries with less productive agriculture, where yields are generally low or poor, may not benefit as much from open trade in the short term. They will, however, profit from it in the long run.

3. Improving access to fertilizer

Third, Africa must keep prioritizing fertilizer production and trade. In most sub-Saharan African countries, access to and use of fertilizer remain low, yet fertilizers are essential to raising production and reducing food insecurity. Access to affordable financing is another challenge for African agriculture.

It is therefore essential to link fertilizer discussions to investments in network industries such as roads and ports. Having fertilizer is one thing, but getting it to farming areas is difficult in some countries and raises costs for farmers. In this context, the G20 should encourage local production.

Producing fertilizer on the continent would soften the blow of global price shocks. It would also allow the most vulnerable African countries to buy and distribute fertilizer at an affordable price.

How can agricultural productivity be reconciled with reducing climate impact?

We must use technology to adapt to climate change rather than demonize agrochemicals and seed breeding, a trend that is certainly on the rise in some parts of South Africa. If we use high-yielding seed varieties, fertilizers and agrochemicals to fight disease, we can farm a relatively smaller area and count on sufficient yields.

But if we sharply cut these inputs, we become more dependent on expanding the area we farm. Farming more land means harming the environment. The focus should be on the optimal and safe use of agricultural inputs to improve food production. That is the key to achieving global food security.

The G20 has a role to play in ensuring we are heading toward a better world. The agricultural principles discussed here offer a concrete roadmap to a more food-secure world.

The Conversation

Wandile Sihlobo is the Chief Economist of the Agricultural Business Chamber of South Africa (Agbiz) and a member of the Presidential Economic Advisory Council (PEAC).

ref. Commerce, résilience, durabilité : la recette du G20 pour l’Afrique – https://theconversation.com/commerce-resilience-durabilite-la-recette-du-g20-pour-lafrique-269627

NASA goes on an ESCAPADE – twin small, low-cost orbiters will examine Mars’ atmosphere

Source: The Conversation – USA – By Christopher Carr, Assistant Professor of Aerospace Engineering, Georgia Institute of Technology

This close-up illustration shows what one of the twin ESCAPADE spacecraft will look like conducting its science operations. James Rattray/Rocket Lab USA/Goddard Space Flight Center

Envision a time when hundreds of spacecraft are exploring the solar system and beyond. That’s the future that NASA’s ESCAPADE, or Escape and Plasma Acceleration and Dynamics Explorers, mission will help unleash: one where small, low-cost spacecraft enable researchers to learn rapidly, iterate, and advance technology and science.

The ESCAPADE mission launched on Nov. 13, 2025, on a Blue Origin New Glenn rocket, sending two small orbiters to Mars to study its atmosphere. As aerospace engineers, we're excited about this mission: not only will it do great science while advancing the deep space capabilities of small spacecraft, but it will also travel to the red planet on an innovative new trajectory.

The ESCAPADE mission is actually two spacecraft instead of one. Two identical spacecraft will take simultaneous measurements, resulting in better science. These spacecraft are smaller than those used in the past, each about the size of a copy machine, partly enabled by an ongoing miniaturization trend in the space industry. Doing more with less is very important for space exploration, because it typically takes most of the mass of a spacecraft simply to transport it where you want it to go.

The ESCAPADE mission logo shows the twin orbiters.
TRAX International/Kristen Perrin

Having two spacecraft also acts as an insurance policy in case one of them doesn’t work as planned. Even if one completely fails, researchers can still do science with a single working spacecraft. This redundancy enables each spacecraft to be built more affordably than in the past, because the copies allow for more acceptance of risk.

Studying Mars’ history

Long before the ESCAPADE twin spacecraft Blue and Gold were ready to go to space – billions of years ago, to be more precise – Mars had a much thicker atmosphere than it does now. This atmosphere would have enabled liquids to flow on its surface, creating the channels and gullies that scientists can still observe today.

But where did the bulk of this atmosphere go? Its loss turned Mars into the cold and dry world it is today, with a surface air pressure less than 1% of Earth’s.

Mars also once had a magnetic field, like Earth’s, that helped to shield its atmosphere. That atmosphere and magnetic field would have been critical to any life that might have existed on early Mars.

Today, Mars’ atmosphere is very thin. Billions of years ago, it was much thicker.
©UAESA/MBRSC/HopeMarsMission/EXI/AndreaLuck, CC BY-ND

ESCAPADE will measure remnants of this magnetic field that have been preserved by ancient rock and study the flow and energy of Mars’ atmosphere and how it interacts with the solar wind, the stream of particles that the sun emits along with light. These measurements will help to reveal where the atmosphere went and how quickly Mars is still losing it today.

Weathering space on a budget

Space is not a friendly place. Most of it is a vacuum – that is, mostly empty, without the gas molecules that create pressure and allow you to breathe or transfer heat. These molecules keep things from getting too hot or too cold. In space, with no pressure, a spacecraft can easily get too hot or too cold, depending on whether it is in sunlight or in shadow.

In addition, the Sun and other, more distant astronomical objects emit radiation that living things do not experience on Earth, because Earth's magnetic field shields you from the worst of it. So when humans or our robotic representatives leave Earth, our spacecraft must survive an extreme environment with no counterpart on the surface.

ESCAPADE will overcome these challenges with a shoestring budget totaling US$80 million. That is a lot of money, but for a mission to another planet it is inexpensive. It has kept costs low by leveraging commercial technologies for deep space exploration, which is now possible because of prior investments in fundamental research.

For example, the GRAIL mission, launched in 2011, used two spacecraft, Ebb and Flow, to map the Moon's gravity field. ESCAPADE takes this concept to another world, Mars, at a fraction of GRAIL's cost.

Led by Rob Lillis of UC Berkeley’s Space Sciences Laboratory, this collaboration between spacecraft builders Rocket Lab, trajectory specialists Advanced Space LLC and launch provider Blue Origin – all commercial partners funded by NASA – aims to show that deep space exploration is now faster, more agile and more affordable than ever before.

NASA’s ESCAPADE represents a partnership between a university, commercial companies and the government.

How will ESCAPADE get to Mars?

ESCAPADE will also use a new trajectory to get to Mars. Imagine being an archer in the Olympics. To hit a bull's-eye, you have to shoot an arrow through a 15-inch (38-centimeter) circle from a distance of 300 feet, or 90 meters. Now imagine the bull's-eye represents Mars. To hit it from Earth, you would have to shoot an arrow through the same 15-inch bull's-eye at a distance of over 13 miles, or 21 kilometers. You would also have to shoot the arrow in a curved path so that it goes around the Sun.

Not only that, but Mars won’t be at the bull’s-eye at the time you shoot the arrow. You must shoot for the spot that Mars will be in 10 months from now. This is the problem that the ESCAPADE mission designers faced. What is amazing is that the physical laws and forces of nature are so predictable that this was not even the hardest problem to solve for the ESCAPADE mission.
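The analogy's scale can be sanity-checked with a few lines of arithmetic. This is a back-of-the-envelope sketch, assuming Mars' diameter of about 6,792 km and taking the article's 15-inch and 13-mile figures at face value; the "implied shot distance" is just those numbers scaled up, not an official mission figure.

```python
# Back-of-the-envelope check of the archery analogy.
# Assumption: Mars' diameter is ~6,792 km; 15-inch bull's-eye stands in for Mars.
mars_diameter_m = 6_792 * 1_000
bullseye_m = 15 * 0.0254            # 15 inches in meters
shot_m = 13 * 1_609.344             # 13 miles in meters

scale = bullseye_m / mars_diameter_m        # model scale factor (dimensionless)
implied_range_km = shot_m / scale / 1_000   # real-world distance the shot represents

print(f"model scale 1:{1/scale:,.0f}")
print(f"implied real shot distance: {implied_range_km:,.0f} km")
```

The implied distance comes out at a few hundred million kilometers, which is the right order of magnitude for the length of an Earth-to-Mars transfer, so the analogy holds together.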

It takes energy to get from one place to another. To go from Earth to Mars, a spacecraft has to carry the energy it needs, in the form of rocket fuel, much like gasoline in a car. As a result, a high percentage of the total launch mass has to be fuel for the trip.

When going to Mars orbit from Earth orbit, as much as 80% to 85% of the spacecraft's mass has to be propellant, which means little mass is left for the part of the spacecraft that does all the experiments. This makes it important to pack as much capability as possible into the rest of the spacecraft. For ESCAPADE, propellant is only about 65% of the spacecraft's mass.
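The relationship between the velocity change a trip requires and the propellant fraction it demands follows from the Tsiolkovsky rocket equation. The sketch below uses illustrative numbers only: a specific impulse of 320 s is typical of storable bipropellant thrusters, and the delta-v values are rough stand-ins, not ESCAPADE's actual budget.

```python
import math

# Tsiolkovsky rocket equation: the propellant fraction needed for a given
# delta-v is 1 - exp(-dv / (Isp * g0)).
# Illustrative assumptions: Isp = 320 s; delta-v values are generic examples.
G0 = 9.80665  # m/s^2, standard gravity

def propellant_fraction(delta_v_ms: float, isp_s: float) -> float:
    """Fraction of initial spacecraft mass that must be propellant."""
    return 1 - math.exp(-delta_v_ms / (isp_s * G0))

for dv in (3_000, 4_000, 5_000):  # m/s
    print(f"delta-v {dv} m/s -> propellant {propellant_fraction(dv, 320):.0%}")
```

With these assumptions, roughly 3 km/s of delta-v implies about a 62% propellant fraction and 5 km/s about 80%, which shows why shaving even a little delta-v off a trajectory frees up so much mass for instruments.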

ESCAPADE's route is particularly fuel-efficient. First, Blue and Gold will go to the Sun-Earth L2 Lagrange point, one of five places where the combined gravity of the Sun and Earth lets a spacecraft orbit the Sun in lockstep with Earth. Then, after about a year, during which they will collect data monitoring the Sun, they will fly by Earth, using its gravitational field to get a boost. This way, they will arrive at Mars in about 10 more months.

This new approach has another advantage beyond needing less fuel: fuel-efficient launch windows from Earth to Mars typically open only about every 26 months, when the two planets' relative positions line up, but this trajectory makes the departure time far more flexible. Future cargo and human missions could use a similar trajectory for more frequent and less time-constrained trips to Mars.

ESCAPADE is a testament to a new era in spaceflight. For a new generation of scientists and engineers, ESCAPADE is not just a mission – it is a blueprint for a new collaborative era of exploration and discovery.

This article was updated on Nov. 13, 2025 to reflect the ESCAPADE launch’s date and success.

The Conversation

Christopher E. Carr is part of the science team for the Rocket Lab Mission to Venus (funding from Schmidt Sciences and NASA). More information is available at https://www.morningstarmissions.space/rocketlabmissiontovenus

Glenn Lightsey does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. NASA goes on an ESCAPADE – twin small, low-cost orbiters will examine Mars’ atmosphere – https://theconversation.com/nasa-goes-on-an-escapade-twin-small-low-cost-orbiters-will-examine-mars-atmosphere-269321