Source: The Conversation – USA – By Stephen DiKerby, Postdoctoral Researcher in Physics and Astronomy, Michigan State University
In a few billion years, the Milky Way and Andromeda, the nearest spiral galaxy, might collide. Future observers could be treated to fantastic views. NASA; ESA; Z. Levay and R. van der Marel, STScI; T. Hallas; and A. Mellinger
How will the universe end? – Iez M., age 9, Rochester, New York
Whether the universe will “end” at all is not certain, but all evidence suggests it will continue being humanity’s cosmic home for a very, very long time.
The universe – all of space and time, and all matter and energy – began about 14 billion years ago in a rapid expansion called the Big Bang, but since then it has been in a state of continuous change. First, it was full of a diffuse gas of the particles that now make up atoms: protons, neutrons and electrons. Then, that gas collapsed into stars and galaxies.
Our current theory for the history of the universe. On the left is the Big Bang roughly 14 billion years ago. The structure and makeup of the universe have changed over time. NASA/WMAP Science Team
Our understanding of the future of the universe is informed by the objects and processes we observe today. As an astrophysicist, I observe objects like distant galaxies, which lets me study how stars and galaxies change over time. By doing so, I develop theories that predict how the universe will change in the future.
Predicting the future by studying the past?
Predicting the future of the universe by extending what we see today is extrapolation. It’s risky, because something unexpected could happen.
Interpolation – connecting the dots within a dataset – is much safer. Imagine you have a picture of yourself when you were 5 years old, and then another when you were 7 years old. Someone could probably guess what you looked like when you were 6. That’s interpolation.
Using a picture of the author when he was 5 years old and 7 years old, you could interpolate what he looked like when he was 6 years old, but you couldn’t predict what he would look like at 29. Stephen DiKerby
Maybe they could extrapolate from the two pictures to what you’d look like when you are 8 or 9 years old, but no one can accurately predict too far into the future. Maybe in a few years you get glasses or suddenly get really tall.
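If you like seeing the idea in numbers, here is a tiny sketch in Python. The heights are made-up values just for illustration: a straight line fitted through two data points gives a sensible guess in between them (interpolation) and a silly one far beyond them (extrapolation).

```python
import numpy as np

# Two made-up measurements: height in inches at ages 5 and 7 (illustrative only).
ages = np.array([5.0, 7.0])
heights = np.array([43.0, 48.0])

# Fit a straight line (degree-1 polynomial) through the two points.
slope, intercept = np.polyfit(ages, heights, 1)

def predicted_height(age):
    return slope * age + intercept

print(round(predicted_height(6), 1))   # interpolation: about 45.5 inches, a sensible guess
print(round(predicted_height(29), 1))  # extrapolation: about 103 inches (over 8.5 feet),
                                       # because a child's growth rate doesn't continue forever
```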
Scientists can predict what the universe will probably look like a few billion years into the future by extrapolating how stars and galaxies change over time, but eventually things could get weird. The universe and the stuff within it might once again change, as it has in the past.
How will stars change in the future?
Good news: The Sun, our medium-sized yellow star, is going to continue shining for billions of years. It’s about halfway through its 10 billion-year lifetime. The lifetime of a star depends on its size. Big, hot, blue stars live shorter lives, while tiny, cool, red stars live for much longer.
Today, some galaxies are still producing new stars, but others have depleted their star-forming gas. When a galaxy stops forming stars, the blue stars quickly go “supernova” and disappear, exploding after only a few million years. Then, billions of years later, the yellow stars like the Sun eject their outer layers into a nebula, leaving only the red stars puttering along. Eventually, all galaxies throughout the universe will stop producing new stars, and the starlight filling the universe will gradually redden and dim.
Red dwarf stars are the longest-lived type of stars. Once star formation shuts down throughout the universe, eventually only red stars will be left, gradually fading away over trillions of years. NASA/ESA/STScI/G. Bacon
In trillions of years – hundreds of times longer than the universe’s current age – these red stars will also fade away into darkness. But until then, there will be lots of stars providing light and warmth.
How will galaxies change in the future?
Think of building a sand castle on the beach. Each bucket of sand makes the castle bigger and bigger. Galaxies grow over time in a similar way by eating up smaller galaxies. These galactic mergers will continue into the future.
In galaxy clusters, hundreds of galaxies fall inward toward their shared center, often resulting in messy collisions. In these mergers, spiral galaxies, which are orderly disks, combine in chaotic ways into disordered blob-shaped clouds of stars. Think of how easy it is to turn a well-constructed sand castle into a big mess by kicking it over.
For this reason, the universe will have fewer spiral galaxies and more elliptical galaxies over time, as the spirals merge into ellipticals.
The Milky Way galaxy and the neighboring Andromeda galaxy might combine in this way in a few billion years. Don’t worry: The stars in each galaxy would whiz past each other totally unharmed, and future stargazers would get a fantastic view of the two galaxies merging.
How will the universe itself change in the future?
The Big Bang kick-started an expansion that probably will continue in the future. The gravity of all the stuff in the universe – stars, galaxies, gas, dark matter – pulls inward and slows down the expansion, and some theories suggest that the universe’s expansion will coast along or slow to a halt.
However, some evidence suggests that an unknown repulsive force is pushing outward, causing the expansion to speed up. Scientists call this outward force dark energy, but very little is known about it. Like raisins in a baking cookie, galaxies will zoom away from each other faster and faster. If this continues into the future, other galaxies might be too far apart to observe from the Milky Way.
After star formation shuts down and galaxies merge into huge ellipticals, the expansion of the universe might mean that other galaxies are impossible to observe. For trillions of years, this might be the view of the unchanging night sky: a single red elliptical galaxy. NASA; ESA; Z. Levay and R. van der Marel, STScI; T. Hallas; and A. Mellinger
To summarize the best current prediction of the future: Star formation will shut down, so galaxies will be full of old, red, dim stars gradually cooling into darkness. Each group or cluster of galaxies will merge into a single, massive, elliptical galaxy. The accelerated expansion of the universe will make it impossible to observe other galaxies beyond the local group.
This scenario eventually winds down into a dark eternity, lasting trillions of years. New data might come to light that changes this story, and the next stage in the universe’s history might be something totally different and unexpectedly beautiful. Depending on how you look at it, the universe might not have an “end,” after all. Even if what exists is very different from how the universe is now, it’s hard to envision a distant future where the universe is entirely gone.
How does this scenario make you feel? It sometimes makes me feel wistful, which is a type of sadness, but then I remember we live at a very exciting time in the story of the universe: right at the start, in an era full of exciting stars and galaxies to observe! The cosmos can support human society and curiosity for billions of years into the future, so there’s lots of time to keep exploring and searching for answers.
Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.
And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.
Stephen DiKerby receives funding from the National Science Foundation.
Every year, companies and space agencies launch hundreds of rockets into space – and that number is set to grow dramatically with ambitious missions to the Moon, Mars and beyond. But these dreams hinge on one critical challenge: propulsion – the methods used to push rockets and spacecraft forward.
To make interplanetary travel faster, safer and more efficient, scientists need breakthroughs in propulsion technology. Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.
Machine learning is a branch of AI that identifies patterns in data that it has not explicitly been trained on. It is a vast field with many branches and applications. Each branch emulates intelligence in a different way: by recognizing patterns, parsing and generating language, or learning from experience. This last subset, commonly known as reinforcement learning, teaches machines to perform tasks by rating their performance, enabling them to continuously improve through experience.
As a simple example, imagine a chess player. The player does not calculate every move but rather recognizes patterns from playing a thousand matches. Reinforcement learning creates similar intuitive expertise in machines and systems, but at a computational speed and scale impossible for humans. It learns through experience and iteration by observing its environment. These observations allow the machine to correctly interpret each outcome and deploy the best strategies for the system to reach its goal.
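To make that rate-and-improve loop concrete, here is a minimal, hypothetical sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. It is not tied to any spacecraft or engine: an agent in a five-cell corridor learns, purely from trial, error and a reward at the goal, that stepping right is the best strategy.

```python
import random

# Toy environment: cells 0..4 in a corridor; cell 4 is the goal.
# Actions: 0 = step left, 1 = step right. Reaching the goal pays a reward of 1.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Q-table: the agent's running rating of how good each action is in each state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Occasionally explore at random; otherwise pick the best-rated action,
        # breaking ties randomly.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            best = max(Q[state])
            action = random.choice([a for a in ACTIONS if Q[state][a] == best])
        nxt, reward, done = step(state, action)
        # Rate the outcome and nudge the estimate toward it (the Q-learning update).
        target = reward + gamma * max(Q[nxt])
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

# The learned greedy policy in the non-goal cells should print [1, 1, 1, 1]: always step right.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)])
```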
Reinforcement learning can improve human understanding of deeply complex systems – those that challenge the limits of human intuition. It can help determine the most efficient trajectory for a spacecraft heading anywhere in space, and it does so by optimizing the propulsion necessary to send the craft there. It can also potentially design better propulsion systems, from selecting the best materials to coming up with configurations that transfer heat between parts in the engine more efficiently.
In reinforcement learning, you can train an AI model to complete tasks that are too complex for humans to complete themselves.
Reinforcement learning for propulsion systems
In regard to space propulsion, reinforcement learning applications generally fall into two categories: those that assist during the design phase – when engineers define mission needs and system capabilities – and those that support real-time operation once the spacecraft is in flight.
Among the most exotic and promising propulsion concepts is nuclear propulsion, which harnesses the same forces that power atomic bombs and fuel the Sun: nuclear fission and nuclear fusion.
Fission works by splitting heavy atoms such as uranium or plutonium to release energy – a principle used in most terrestrial nuclear reactors. Fusion, on the other hand, merges lighter atoms such as hydrogen to produce even more energy, though it requires far more extreme conditions to initiate.
Fission is a more mature technology that has been tested in some space propulsion prototypes. It has even been used in space in the form of radioisotope thermoelectric generators, like those that powered the Voyager probes. But fusion remains a tantalizing frontier.
Nuclear thermal propulsion could one day take spacecraft to Mars and beyond at a lower cost than that of simply burning fuel. It would get a craft there faster than electric propulsion, which uses a heated gas made of charged particles called plasma.
Unlike these systems, nuclear propulsion relies on heat generated from atomic reactions. That heat is transferred to a propellant, typically hydrogen, which expands and exits through a nozzle to produce thrust and shoot the craft forward.
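To get a feel for why heating a very light propellant is attractive, here is a rough back-of-the-envelope sketch using the standard ideal-nozzle relation for exhaust velocity. The chamber temperatures, molar masses, heat-capacity ratios and pressure ratio below are illustrative assumptions, not figures from the article or from any real engine.

```python
from math import sqrt

R_UNIVERSAL = 8.314  # universal gas constant, J/(mol*K)

def exhaust_velocity(T_chamber, molar_mass, gamma, pressure_ratio):
    """Ideal exhaust velocity (m/s) for an isentropic nozzle expansion.

    T_chamber: chamber temperature in kelvin
    molar_mass: propellant molar mass in kg/mol
    gamma: ratio of specific heats (assumed constant)
    pressure_ratio: exit pressure divided by chamber pressure
    """
    term = 1.0 - pressure_ratio ** ((gamma - 1.0) / gamma)
    return sqrt(2.0 * gamma / (gamma - 1.0) * (R_UNIVERSAL / molar_mass) * T_chamber * term)

# Nuclear thermal: reactor-heated hydrogen (very light molecules, ~2,700 K assumed).
v_nuclear = exhaust_velocity(T_chamber=2700.0, molar_mass=2.016e-3,
                             gamma=1.35, pressure_ratio=1e-3)

# Chemical rocket: hot combustion products (heavier molecules, ~3,500 K assumed).
v_chemical = exhaust_velocity(T_chamber=3500.0, molar_mass=13.0e-3,
                              gamma=1.20, pressure_ratio=1e-3)

print(f"Nuclear thermal exhaust velocity: ~{v_nuclear:,.0f} m/s")
print(f"Chemical exhaust velocity:        ~{v_chemical:,.0f} m/s")
# Despite a cooler chamber, the much lighter hydrogen exhaust leaves roughly
# twice as fast, which is why nuclear thermal engines promise shorter trips.
```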
So how can reinforcement learning help engineers develop and operate these powerful technologies? Let’s begin with design.
The nuclear heat source for the Mars Curiosity rover, part of a radioisotope thermoelectric generator, is encased in a graphite shell. The fuel glows red hot because of the radioactive decay of plutonium-238. Idaho National Laboratory, CC BY
Reinforcement learning’s role in design
Early nuclear thermal propulsion designs from the 1960s, such as those in NASA’s NERVA program, used solid uranium fuel molded into prism-shaped blocks. Since then, engineers have explored alternative configurations – from beds of ceramic pebbles to grooved rings with intricate channels.
The first nuclear thermal rocket was built in 1967 and is seen in the background. In the foreground is the protective casing that would hold the reactor. NASA/Wikipedia
Why has there been so much experimentation? Because the more efficiently a reactor can transfer heat from the fuel to the hydrogen, the more thrust it generates.
This area is where reinforcement learning has proved to be essential. Optimizing the geometry and heat flow between fuel and propellant is a complex problem, involving countless variables – from the material properties to the amount of hydrogen that flows across the reactor at any given moment. Reinforcement learning can analyze these design variations and identify configurations that maximize heat transfer. Imagine it as a smart thermostat but for a rocket engine – one you definitely don’t want to stand too close to, given the extreme temperatures involved.
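One simple way to picture this kind of search, well short of what a real design team would use, is a bandit-style reinforcement learning loop over a handful of candidate fuel-channel layouts. The layout names and "heat-transfer scores" below are hypothetical stand-ins for an expensive thermal simulation, not reactor data.

```python
import random

# A few hypothetical fuel-channel layouts and toy heat-transfer scores.
# The scores are a noisy stand-in for an expensive thermal simulation,
# not real reactor physics.
DESIGNS = {"prismatic": 0.62, "pebble_bed": 0.70, "grooved_ring": 0.78}

def evaluate(design):
    """Pretend 'simulation': the layout's true score plus measurement noise."""
    return DESIGNS[design] + random.gauss(0.0, 0.05)

# Epsilon-greedy bandit: keep a running average score per design,
# usually re-test the current best, sometimes try another one.
estimates = {d: 0.0 for d in DESIGNS}
counts = {d: 0 for d in DESIGNS}
epsilon = 0.2

for trial in range(300):
    if random.random() < epsilon:
        choice = random.choice(list(DESIGNS))
    else:
        choice = max(estimates, key=estimates.get)
    score = evaluate(choice)
    counts[choice] += 1
    # Incremental mean: nudge the estimate toward the newest observation.
    estimates[choice] += (score - estimates[choice]) / counts[choice]

# Should usually single out "grooved_ring" as the best of these toy options.
best = max(estimates, key=estimates.get)
print(f"Best design after 300 simulated trials: {best} ({estimates[best]:.2f})")
```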
Reinforcement learning and fusion technology
Reinforcement learning also plays a key role in developing nuclear fusion technology. Large-scale experiments such as the JT-60SA tokamak in Japan are pushing the boundaries of fusion energy, but their massive size makes them impractical for spaceflight. That’s why researchers are exploring compact designs such as polywells. These exotic devices look like hollow cubes, about a few inches across, and they confine plasma in magnetic fields to create the conditions necessary for fusion.
Controlling magnetic fields within a polywell is no small feat. The magnetic fields must be strong enough to keep hydrogen atoms bouncing around until they fuse – a process that demands immense energy to start but can become self-sustaining once underway. Overcoming this challenge is necessary for scaling this technology for nuclear thermal propulsion.
Reinforcement learning and energy generation
However, reinforcement learning’s role doesn’t end with design. It can help manage fuel consumption – a critical task for missions that must adapt on the fly. In today’s space industry, there’s growing interest in spacecraft that can serve different roles depending on mission needs and on priorities that shift over time.
Military applications, for instance, must respond rapidly to shifting geopolitical scenarios. An example of a technology adapted to fast changes is Lockheed Martin’s LM400 satellite, which has varied capabilities such as missile warning or remote sensing.
But this flexibility introduces uncertainty. How much fuel will a mission require? And when will it need it? Reinforcement learning can help with these calculations.
From bicycles to rockets, learning through experience – whether human or machine – is shaping the future of space exploration. As scientists push the boundaries of propulsion and intelligence, AI is playing a growing role in space travel. It may help scientists explore within and beyond our solar system and open the gates for new discoveries.
Sreejith Vidhyadharan Nair receives funding from the University of North Dakota. He has previously received external research funding from agencies such as the FAA and NASA; however, those projects were not related to nuclear propulsion systems.
Marcos Fernandez Tous, Preeti Nair, and Sai Susmitha Guddanti do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Alex McPhee-Browne, PhD student studying the American and global far right, University of Cambridge
Right-wing influencer Nick Fuentes, center, speaks in front of flags that say ‘America First’ at a pro-Trump march on Nov. 14, 2020, in Washington. AP Photo/Jacquelyn Martin, File
“We’ve had some great interviews with Tucker Carlson, but you can’t tell him who to interview,” President Donald Trump said on Nov. 17, 2025. “Ultimately, people have to decide.”
Trump was speaking about Tucker Carlson’s decision to interview right-wing influencer Nick Fuentes. The episode reveals how fringe ideologies operate differently today compared with the mid-20th century, when institutional gatekeepers – political parties, law enforcement, the media – could more effectively contain extremist movements.
And through their 21st-century methods of communication and operation, Nick Fuentes and his followers – the “Groypers” – have managed to get what their 20th-century predecessors could not: widespread awareness and political influence.
Atlanta, 1946: Brazen but brief fascist group
As a historian of the American far right, I have spent years examining how fascist movements adapted to the conditions of postwar America. The trajectory from the 1940s until today shows a fundamental shift: from defined organizational structures that could be dismantled to diffuse cultural movements that spread through social media.
Let me offer an example.
In 1946, barely a year after Hitler’s defeat, young men in khaki shirts marched through Atlanta, Georgia, performing Nazi salutes and promising racial vengeance.
Led by Homer Loomis Jr. – a Princeton dropout who called Hitler’s manifesto “Mein Kampf” his “bible” – this group, known as the Columbians, offered Atlanta a glimpse of explicit fascism. They conducted armed patrols, held uniformed drills and even drew up blueprints for blowing up City Hall.
Their brazenness, however, was matched by their brevity. Ten months after the group formed, Atlanta authorities revoked its charter and jailed the ringleaders.
The swift suppression seemed to prove that explicit fascism had no future in postwar America. And for decades that held true. Open Nazi sympathizers remained marginal, their organizations small and easily ostracized.
In the 1970s, when a group of American Nazis planned to march in Skokie, Illinois, a predominantly Jewish suburb of Chicago, the event was most notable for the counterprotests it triggered.
Mainstreaming fascism
But the Columbians’ failure, it turned out, was organizational, not ideological. The government could revoke a charter and convict leaders. It could not repress a mood.
In the digital age, Fuentes represents that mood as a diffuse sensibility rather than a structured organization. Where the Columbians wore uniforms that advertised their fascist allegiance, Fuentes wears suits and frames his worldview in the rhetoric of “America First.”
The difference is strategic. In a 2019 livestream, Fuentes explained his approach openly: “Bit by bit we start to break down these walls … and then one day, we become the mainstream.”
This packaging marks a deliberate shift. Fuentes treats plausible deniability – of fascism, of antisemitism – not as a weakness but as a central feature. The content of his message remains extreme, but the ironic wrapping enables something the Columbians never achieved – cultural saturation.
Fuentes’s followers, Groypers, have in turn mastered this diffusion strategy.
For many conservatives under 40, exposure to Groyper-style content doesn’t happen in meetings. They absorb it through social media feeds, Discord servers and group chats. A tone of grievance and ironic provocation becomes prominent background noise, moving the marginal toward the mainstream. A generation raised on anti-woke content, 4chan and transgressive memes now shapes the neofascist movement’s tone.
At the same time, institutional authority has in many ways effectively collapsed. The Columbians faced united opposition from media, prosecutors and politicians. Those gatekeepers no longer control conservatism or the white nationalists who are adjacent to it.
In late 2022, former President Donald Trump issued this social media post after having dinner with Nick Fuentes. X
Achieving what predecessors could not
The Carlson-Fuentes interview has instead exposed a rift within MAGA circles.
Several board members of the Heritage Foundation, a conservative think tank with deep ties to the Trump administration, have resigned over the controversy, including one this week.
They were angered that Kevin Roberts, the foundation’s president, released a video defending the interview. Roberts has apologized for some of its contents but not retracted it.
Republicans aren’t all in agreement about whether Groypers represent a threat or an important constituency. Members of Congress have given speeches at Fuentes’ conferences; Trump dined with him at Mar-a-Lago in 2022.
Last year, JD Vance, now the vice president, called Fuentes a “total loser.” Fuentes attempted, without success, to mobilize Groypers against Trump in 2024 and called the president a “scam artist” earlier this year for failing to release the files in the Jeffrey Epstein case.
Yet the broader Groyperfication of conservative youth culture proceeds apace. Trump reversed his stance on the Epstein files. In defending Carlson’s interview with Fuentes, Trump said, “I don’t know much about him.”
The old remedies no longer function. Authorities cannot ban an atmosphere or revoke the charter of a meme. Social media platforms designed to maximize engagement often maximize anger. Fuentes and imitators exploit this frustration.
They remain controversial, and the Groypers’ lack of formal institutions could mean they will at some point fade like other far-right youth movements. Trump’s eventual exit from politics may also deprive them of a central reference point.
But they might represent something new: a post-organizational extremism uniquely adapted to digital life.
The Columbians once promised to control Atlanta in six months and America in 10 years. They lasted 10 months. The Groypers have already long outlasted them. That endurance signals a new, far more successful approach.
Alex McPhee-Browne does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
U.S. President John F. Kennedy, right, confers with his brother, Attorney General Robert F. Kennedy, at the White House on Oct. 1, 1962, during the buildup of military tensions that became the Cuban missile crisis later that month. AP Photo
Something’s missing from Robert F. Kennedy Jr.’s accounts of “Operation Northwoods.” Something that explains the origins of this menu of false flag operations – pretexts for war with Cuba – drafted by the Pentagon in March 1962.
Something about his father.
Most people remember Robert F. Kennedy as President John F. Kennedy’s closest confidant, campaign manager and attorney general, the tough but idealistic younger brother who helped him through the Cuban missile crisis and later waged an antiwar campaign for president, before becoming the second Kennedy brother slain by an assassin.
Kennedy Jr. pinned the blame for the pretexts solely on “the highest officials in the U.S. military,” accusing them of “lethal zealotry,” decrying “how badly the American military leadership had lost its moral bearings.”
To illustrate the point, he cited one pretext at length: “A ‘Remember the Maine’ incident could be arranged in several forms: We could blow up a U.S. ship in Guantánamo Bay and blame Cuba.”
In each of these accounts, Kennedy Jr. omitted the most important part of the “Operation Northwoods” story: his father’s role. I learned of that role from documents declassified by the JFK Assassination Records Review Board, in the Kennedy Library and in other archives while researching a book I’m writing, “Clandestine Camelot.”
Robert F. Kennedy aimed to use false flag operations as a pretext to go to war with Cuba and depose its communist leader, Fidel Castro, seen here in 1963. Keystone-France/Gamma-Keystone via Getty Images
Debacle with a chaser of deceit
In the first foreign policy memo he dictated, Attorney General Kennedy broached the idea of fabricating an attack on the U.S. naval base at Guantánamo Bay in Cuba, one of the spoils of the Spanish-American War.
It was April 19, 1961, and the Bay of Pigs invasion was in mid-collapse. Roughly 1,500 CIA-trained and -financed Cuban expatriates were mounting a doomed attempt to overthrow Fidel Castro, the Cuban revolutionary-turned-tyrant. Castro had the invaders pinned down on the beach under fire from Moscow-furnished MiG fighter jets.
It was then that the attorney general asked the president if they could get Central and South American nations “to take some action” to stop the flow of Russian arms to Cuba “if it was reported that one or two of Castro’s MiGs attacked Guantanamo Bay and the United States made noises like this was an act of war and that we might very well have to take armed action ourselves.”
Castro, of course, had not attacked the U.S. naval base. That would have meant war with America and the end of his regime.
President Kennedy didn’t act on his brother’s suggestion, but began including him regularly in foreign policymaking. Newspapers started calling RFK “the second most important man in the Western World.”
Fomenting revolution in Cuba faced an insurmountable obstacle: Castro was already powerful enough to crush any purely internal uprising.
CIA, State Department, and Defense Department officials agreed that the only way to overthrow Castro was a U.S. invasion.
Under Robert Kennedy’s leadership, the “Special Group (Augmented),” the interagency group JFK charged with overseeing Operation Mongoose, the covert campaign against Castro, proposed to change the covert operation’s goal from orchestrating subversion to justifying U.S. military intervention.
On March 5, 1962, the group asked Deputy Under Secretary of State for Political Affairs U. Alexis Johnson “to have a list prepared of various situations which would serve as a plausible pretext for intervention.” In the minutes of that meeting, someone crossed out “plausible pretext” and wrote “valid basis.”
The minutes of a March 5, 1962, meeting show that Robert F. Kennedy’s group asked a State Department staffer to prepare a list of various situations ‘which would serve as a plausible pretext for intervention’ in Cuba. National Archives
The Joint Chiefs of Staff responded by drafting the document now known as “Operation Northwoods.”
Fun fact: No one called it “Operation Northwoods” at the time.
“Northwoods” was just a code word the Joint Chiefs of Staff used on Mongoose documents. In the 21st century, however, historians mistook the code word for a code name and gave the pretexts their unhistorical handle. There was no “Operation Northwoods,” but that didn’t stop it from getting its own Wikipedia page.
The Special Group (Augmented) voted on March 13, 1962, to alter the Mongoose guidelines to state “that final success will require decisive U.S. military intervention.”
Three days later, the group briefed the president on the revised guidelines, including secret “plans for creating plausible pretexts to use force, with the pretexts either attacks on U.S. aircraft or a Cuban action in Latin America for which we would retaliate.”
President Kennedy said “bluntly” that they were not then able to make a decision on the use of military force.
But he did tell the group to “go ahead on the guidelines.” Since the revised guidelines said that Cubans “will be used to prepare for and justify this [U.S. military] intervention, and thereafter to facilitate and support it,” the revision transformed Mongoose into a secret program to furnish the president with a pretext to invade Cuba if he so chose.
Fortunately, he didn’t.
Apocalyptic advice
The last recorded time Robert Kennedy urged his brother to consider a false flag operation was on Oct. 16, 1962, the first day of the Cuban missile crisis.
The president’s secret White House recording system captured Robert Kennedy advising him to consider fabricating a pretext for U.S. military intervention: “Can I say that one other thing is whether we should also think of whether there is some other way we can get involved in this, through Guantánamo Bay or something. Or whether there’s some ship that … you know, sink the Maine again or something.”
JFK ignored the suggestion. Taking it would have, in all likelihood, started a nuclear war.
This year marks the centennial of Robert Kennedy’s birth, the perfect occasion to stop scapegoating the military for his darkest deeds. In drafting pretexts for war, the Pentagon was complying with instructions it received through the command structure the president established for Operation Mongoose. Generals have little choice but to comply with such instructions unless and until Congress outlaws false flag operations.
Robert F. Kennedy Jr. wrote that the “Operation Northwoods memo should serve as a warning [to] the American people about the dangers of allowing the military to set goals or standards for our country.”
In reality, it reveals the dangers of letting someone like Robert F. Kennedy use the power of the U.S. government to deceive Americans about life-or-death matters.
Ken Hughes is a research specialist with the Presidential Recordings Program of the University of Virginia’s Miller Center, whose work is funded in part by grants from the National Historical Publications and Records Commission.
Source: The Conversation – USA (2) – By Francesco Agnellini, Lecturer in Digital and Data Studies, Binghamton University, State University of New York
Preserving the value of real human voices will likely depend on how people adapt to artificial intelligence and collaborate with it. BlackJack3D/E+ via Getty Images
The line between human and machine authorship is blurring, particularly as it’s become increasingly difficult to tell whether something was written by a person or AI.
Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence.
As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it’s affecting culture, I’ve thought a lot about what this technology can do and where it falls short.
If you’re more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?
It isn’t all or nothing
Thinking about these questions reminded me of Umberto Eco’s essay “Apocalyptic and Integrated,” which was originally written in the early 1960s. Parts of it were later included in an anthology titled “Apocalypse Postponed,” which I first read as a college student in Italy.
In it, Eco draws a contrast between two attitudes toward mass media. There are the “apocalyptics” who fear cultural degradation and moral collapse. Then there are the “integrated” who champion new media technologies as a democratizing force for culture.
Italian philosopher, cultural critic and novelist Umberto Eco cautioned against overreacting to the impact of new technologies. Leonardo Cendamo/Getty Images
Back then, Eco was writing about the proliferation of TV and radio. Today, you’ll often see similar reactions to AI.
Yet Eco argued that both positions were too extreme. It isn’t helpful, he wrote, to see new media as either a dire threat or a miracle. Instead, he urged readers to look at how people and communities use these new tools, what risks and opportunities they create, and how they shape – and sometimes reinforce – power structures.
While I was teaching a course on deepfakes during the 2024 election, Eco’s lesson also came back to me. Those were days when some scholars and media outlets were regularly warning of an imminent “deepfake apocalypse.”
Would deepfakes be used to mimic major political figures and push targeted disinformation? What if, on the eve of an election, generative AI was used to mimic the voice of a candidate on a robocall telling voters to stay home?
For those worried about AI-generated writing, the core concerns are about authorship: How can one person compete with a system trained on millions of voices that can produce text at hyper-speed? And if this becomes the norm, what will it do to creative work, both as an occupation and as a source of meaning?
It’s important to clarify what’s meant by “online content,” the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements.
A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews and product explainers.
The primary economic purpose of this content is to persuade or inform, not to express originality or creativity. Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business.
A whole industry of writers – mostly freelance, including many translators – has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.
Collaborating with AI
The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity.
How can you distinguish a human-written article from a machine-generated one? And does that ability even matter?
Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI. A writer might draft a few lines, let an AI expand them and then reshape that output into the final text.
This article is no exception. As a non-native English speaker, I often rely on AI to refine my language before sending drafts to an editor. At times the system attempts to reshape what I mean. But once its stylistic tendencies become familiar, it becomes possible to avoid them and maintain a personal tone.
Also, artificial intelligence is not entirely artificial, since it is trained on human-made material. It’s worth noting that even before AI, human writing was never entirely human, either. Every technology, from parchment and stylus to paper, the typewriter and now AI, has shaped how people write and how readers make sense of it.
Another important point: AI models are increasingly trained on datasets that include not only human writing but also AI-generated and human–AI co-produced text.
But what happens when people become overly reliant on AI in their writing?
Some studies show that writers may feel more creative when they use artificial intelligence for brainstorming, yet the range of ideas often becomes narrower. This uniformity affects style as well: These systems tend to pull users toward similar patterns of wording, which reduces the differences that usually mark an individual voice. Researchers also note a shift toward Western – and especially English-speaking – norms in the writing of people from other cultures, raising concerns about a new form of AI colonialism.
In this context, texts that display originality, voice and stylistic intention are likely to become even more meaningful within the media landscape, and they may play a crucial role in training the next generations of models.
If you set aside the more apocalyptic scenarios and assume that AI will continue to advance – perhaps at a slower pace than in the recent past – it’s quite possible that thoughtful, original, human-generated writing will become even more valuable.
Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans.
Francesco Agnellini does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Emily Ronay Johnston, Assistant Teaching Professor of Global Arts, Media and Writing Studies, University of California, Merced
Ordinary and universal, the act of writing changes the brain. From dashing off a heated text message to composing an op-ed, writing allows you to, at once, name your pain and create distance from it. Writing can shift your mental state from overwhelm and despair to grounded clarity — a shift that reflects resilience.
Psychology, the media and the wellness industry shape public perceptions of resilience: Social scientists study it, journalists celebrate it, and wellness brands sell it.
In my work as a professor of writing studies, I research how people use writing to navigate trauma and practice resilience. I have witnessed thousands of students turn to the written word to work through emotions and find a sense of belonging. Their writing habits suggest that writing fosters resilience. Insights from psychology and neuroscience can help explain how.
Writing rewires the brain
In the 1980s, psychologist James Pennebaker developed a therapeutic technique called expressive writing to help patients process trauma and psychological challenges. With this technique, continuously journaling about something painful helps create mental distance from the experience and eases its cognitive load.
In other words, externalizing emotional distress through writing fosters safety. Expressive writing turns pain into a metaphorical book on a shelf, ready to be reopened with intention. It signals the brain, “You don’t need to carry this anymore.”
Translating emotions and thoughts into words on paper is a complex mental task. It involves retrieving memories and planning what to do with them, engaging brain areas associated with memory and decision-making. It also involves putting those memories into language, activating the brain’s visual and motor systems.
Writing things down supports memory consolidation — the brain’s conversion of short-term memories into long-term ones. The process of integration makes it possible for people to reframe painful experiences and manage their emotions. In essence, writing can help free the mind to be in the here and now.
Taking action through writing
The state of presence that writing can elicit is not just an abstract feeling; it reflects complex activity in the nervous system.
Brain imaging studies show that putting feelings into words helps regulate emotions. Labeling emotions — whether through expletives and emojis or carefully chosen words — has multiple benefits. It calms the amygdala, a cluster of neurons that detects threat and triggers the fear response: fight, flight, freeze or fawn. It also engages the prefrontal cortex, a part of the brain that supports goal-setting and problem-solving.
In other words, the simple act of naming your emotions can help you shift from reaction to response. Instead of identifying with your feelings and mistaking them for facts, writing can help you simply become aware of what’s arising and prepare for deliberate action.
Even mundane writing tasks like making a to-do list stimulate parts of the brain involved in reasoning and decision-making, helping you regain focus.
Making meaning through writing
Choosing to write is also choosing to make meaning. Studies suggest that having a sense of agency is both a prerequisite for, and an outcome of, writing.
Researchers have long documented how writing is a cognitive activity — one that people use to communicate, yes, but also to understand the human experience. As many in the field of writing studies recognize, writing is a form of thinking — a practice that people never stop learning. With that, writing has the potential to continually reshape the mind. Writing not only expresses but actively creates identity.
Writing also regulates your psychological state. And the words you write are themselves proof of regulation — the evidence of resilience.
Popular coverage of human resilience often presents it as extraordinary endurance. News coverage of natural disasters implies that the more severe the trauma, the greater the personal growth. Pop psychology often equates resilience with unwavering optimism. Such representations can obscure ordinary forms of adaptation. Strategies people already use to cope with everyday life — from rage-texting to drafting a resignation letter — signify transformation.
Building resilience through writing
These research-backed tips can help you develop a writing practice conducive to resilience:
1. Write by hand whenever possible. In contrast to typing or tapping on a device, handwriting requires greater cognitive coordination. It slows your thinking, allowing you to process information, form connections and make meaning.
2. Write daily. Start small and make it regular. Even jotting brief notes about your day — what happened, what you’re feeling, what you’re planning or intending — can help you get thoughts out of your head and ease rumination.
3. Write before reacting. When strong feelings surge, write them down first. Keep a notebook within reach and make it a habit to write it before you say it. Doing so can support reflective thinking, helping you act with purpose and clarity.
4. Write a letter you never send. Don’t just write down your feelings — address them to the person or situation that’s troubling you. Even writing a letter to yourself can provide a safe space for release without the pressure of someone else’s reaction.
5. Treat writing as a process. Any time you draft something and ask for feedback on it, you practice stepping back to consider alternative perspectives. Applying that feedback through revision can strengthen self-awareness and build confidence.
Resilience may be as ordinary as the journal entries people scribble, the emails they exchange, the task lists they create — even the essays students pound out for professors.
The act of writing is adaptation in progress.
Emily Johnston receives funding from the Andrew W. Mellon Foundation.
What if particle physics could improve the way we cook pasta? By probing its structure at the atomic scale, researchers have worked out how gluten keeps spaghetti firm and why gluten-free versions remain so fragile.
Whether you prefer your spaghetti al dente or deliciously soft, perfection is not always easy to achieve at home. Many of us have watched our pasta turn into a beige mush – especially when it comes to gluten-free alternatives.
So how much water and salt should you really use, and how long should you cook your pasta for the best result? And, above all, how should you adapt your cooking method when using gluten-free pasta? A recent study my colleagues and I carried out, published in Food Hydrocolloids, provides answers by revealing the physics of the cooking process.
Turning to Diamond Light Source, the UK’s national synchrotron (a circular particle accelerator), we studied how X-rays scatter off pasta in order to reveal its internal structure. We then traveled to Isis and the Institut Laue-Langevin, two research facilities located in the UK and France respectively, to use neutrons (which, along with protons, make up the atomic nucleus) to analyze the microstructure of regular and gluten-free spaghetti under different cooking conditions.
The study shows how the hidden structure of pasta changes during cooking, and why gluten-free versions behave so differently.
This setup allowed us to examine the structure of the starch and gluten in spaghetti at very fine scales, ranging from a few tens of times the radius of an atom up to several thousand times. We could thus compare the transformations taking place in regular and gluten-free pasta under various cooking conditions – for example, when they are overcooked or cooked without salt.
Our experiments allowed us to “see” the different components of the pasta separately. By mixing normal water with “heavy water” (which contains an isotope called deuterium), we could make either the gluten or the starch invisible to the neutron beam. In this way, we could effectively isolate each structure in turn and understand the respective roles of starch and gluten during cooking.
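The trick is known as contrast matching: heavy water scatters neutrons very differently from ordinary water, so choosing the right H2O/D2O mix lets the solvent scatter like one component, which then effectively disappears from the signal. The sketch below shows the arithmetic; the scattering length densities for water are standard handbook values, while the figure used for starch is only an illustrative assumption, not a number from the study.

```python
# Neutron contrast matching: find the D2O volume fraction x such that the
# solvent's scattering length density (SLD) equals that of a target component,
# making that component "invisible" to the neutron beam.
#
# SLD values in units of 10^-6 per square angstrom. H2O and D2O are standard
# handbook values; the starch figure below is an illustrative assumption.
SLD_H2O = -0.56
SLD_D2O = 6.36

def matching_d2o_fraction(sld_target):
    """Solve x * SLD_D2O + (1 - x) * SLD_H2O = sld_target for x."""
    return (sld_target - SLD_H2O) / (SLD_D2O - SLD_H2O)

sld_starch_assumed = 1.6  # hypothetical value for the starch component
x = matching_d2o_fraction(sld_starch_assumed)
print(f"Roughly {100 * x:.0f}% D2O would match an SLD of {sld_starch_assumed}")
# With the starch matched out, the remaining scattering comes mainly from
# the gluten network, and vice versa when the mix is tuned to match gluten.
```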
The role of gluten and salt
Our study shows that, in regular pasta, gluten acts as a solid scaffold that holds the starch granules in place even during boiling, which gives pasta its firmness and its slow digestion. In gluten-free pasta, by contrast, the starch granules swell and collapse more easily – which explains their mushy texture and their faster breakdown when this kind of pasta is cooked under less-than-ideal conditions.
We also studied the effect of the salt in the cooking water on pasta structure. We found that salt does more than improve the taste: it strongly influences the microstructure of the spaghetti. When regular pasta is boiled in salted water, the gluten keeps its structure, and the starch granules are less altered by the cooking process.
So how much salt should you add to preserve pasta’s microscopic structure? Our study found that the ideal is seven grams of salt per liter of water, with a larger volume of water needed for bigger portions of pasta. The ideal cooking time is 10 minutes for regular pasta and 11 minutes for gluten-free pasta. Conversely, when the salt concentration was doubled, the internal order broke down more quickly and the structure of the starch granules was significantly altered by cooking.
For gluten-free pasta, the conclusions were different again because of the absence of the protection that gluten provides. Even small amounts of salt could not make up for this absence. The artificial compounds based on modified starches that manufacturers use to replace gluten broke down quickly. The most extreme example of this degradation occurred when gluten-free spaghetti was cooked for too long – for example, 13 minutes instead of 11 – in very salty water.
The main takeaway, then, is that gluten-free pasta is structurally more fragile and less tolerant of overcooking or of the wrong proportion of salt.
Improving gluten-free alternatives
Understanding the structure of pasta at such tiny scales, invisible even under a microscope, will help in designing better gluten-free foods. In particular, the goal is to create gluten-free alternatives that are more resistant to poor cooking conditions and whose texture comes closer to that of regular spaghetti.
Regular wheat pasta has a low glycemic index, because gluten slows the breakdown of starch granules during digestion. Gluten-free pasta, made from rice and corn flours, often lacks this structure, leading to a faster release of sugars. Thanks to neutron scattering, food scientists can now identify which ingredients and cooking conditions best reproduce the structure of gluten.
It is also an illustration of how cutting-edge experimental tools, used mainly for fundamental research, are now transforming food research. Neutron scattering has played an essential role in understanding magnetic materials, batteries, polymers and proteins. It can now also explain how our everyday foods behave at the microscopic scale.
Andrea Scotti receives funding from the Knut and Alice Wallenberg Foundation and the Swedish Research Council.
Secretary of defense under the elder Bush, vice president under the younger, and a chief architect of the United States’ 2003 intervention in Iraq, Dick Cheney will go down in history as an emblematic figure in the implementation of neoconservative ideas – a movement that has been in retreat within the Republican Party since Donald Trump became its dominant figure.
Dick Cheney, who died on Nov. 3, 2025, at the age of 84, was one of the most controversial figures in American politics. The former U.S. vice president during George W. Bush’s two terms (2001-2009) is known both as the “father” of the 2003 intervention in Iraq and as a reluctant symbol of an intellectual current to which he did not actually belong, one that has run through the country’s politics since World War II: neoconservatism.
His death sounds like the symbolic death knell of this current, associated with the Republican Party, with an aggressive foreign policy and with the defense of Israel. His career, which collected every honor an American cursus honorum can offer, symbolizes the trajectory of an entire right wing born in the wake of Ronald Reagan’s rise to power in 1980. What is Dick Cheney’s legacy, and what was his connection to this current? What remains of neoconservatism today?
A meteoric start to his career
Leaving the Wyoming of his youth with his future wife, Lynne Cheney, whom he met in high school, Richard B. Cheney began his career with studies at Yale (Connecticut) and an unfinished doctorate in political science. From 1969 onward, he served in the Nixon administration, where he worked notably for Donald Rumsfeld, who became his mentor.
When Donald Rumsfeld was appointed secretary of defense by Gerald Ford in 1975, Dick Cheney became White House chief of staff, a key post, and then Gerald Ford’s campaign manager in 1976 against Democrat Jimmy Carter. As the story goes, Dick Cheney was there with Donald Rumsfeld when the economist Arthur Laffer famously demonstrated supply-side theory on a napkin – the celebrated “Laffer napkin.” Supply-side theory would become the bedrock of Reaganomics several years later.
In 1978, he was elected to represent Wyoming (in the western United States), a seat he would hold until 1989. In the House, he quickly became associated with another representative newly elected in 1978, from Georgia: Newt Gingrich. Thomas E. Mann and Norman J. Ornstein’s book The Broken Branch (2006), which offers a genealogy of the crisis the U.S. Congress has been going through for several decades, links Gingrich and Cheney and presents the latter as an active supporter of the former.
Newt Gingrich, accused of being the man who destroyed American politics, played a key role in transforming the Republican Party in Congress between 1978 and 1994, making it more combative and more homogeneous so that it could finally win congressional elections. The party had indeed remained in the minority in the House for 40 years without interruption, from 1954 to 1994.
Representative Dick Cheney in 1984.
In 1994, after rising through the party’s ranks, Gingrich led the Contract with America campaign in Bill Clinton’s midterm elections. This first congressional campaign unified under a common slogan allowed the Republican Party to regain the majority in Congress. Gingrich’s longtime associate Mel Steely confirms in his biography The Gentleman from Georgia (2000) the key role Dick Cheney played, along with Trent Lott (Senate Republican majority leader from 1996 to 2002), as a privileged conduit for Gingrich’s strategies to the White House from 1983 onward. These facts are borne out by the Gingrich archives (the Newt Gingrich Papers, held at Tulane, in Louisiana).
Who are the neoconservatives?
But Dick Cheney is known above all for two things: his direct role in the second Gulf War from 2003 onward as George W. Bush’s vice president, and his closeness to those known as the “neoconservatives.”
“All neoconservatives are hawks, but not all hawks are neoconservatives,” writes Justin Vaïsse, author of a reference work on the neoconservatives, recalling that Dick Cheney, Donald Rumsfeld and John Bolton were their allies without truly being neoconservatives themselves.
This school of thought stands accused of being behind the interventions in Afghanistan in 2001 and in Iraq in 2003, with their disastrous outcomes, and of embodying American imperialism. Born in 1965 around the journal The Public Interest, it saw its representatives take the controls under the first Reagan administration from 1980 onward and develop their thinking in think tanks such as the American Enterprise Institute and the Hudson Institute. From the Reagan era on, it ended up blending into classic conservatism, absorbing the “fusionist” credo, the name given to the conservative synthesis born in 1955 from the alliance of libertarians and social conservatives.
What are their ideas? While Justin Vaïsse distinguishes three great ages of neoconservatism, Francis Fukuyama has listed four of its core principles. The first is the conviction, inspired by the philosopher Leo Strauss (1899-1973), that the internal character of regimes matters and that foreign policy must reflect the deepest values of liberal democratic societies. The second is the conviction that American power has been, and must be, used for moral ends, and that the United States must remain engaged in international affairs. The third is a systematic distrust of ambitious social engineering projects, a wariness at the heart of neoconservatism since its birth. The last principle is skepticism about the legitimacy and effectiveness of international law and institutions in enforcing security or justice.
The neoconservatives came to power with the first Reagan administration, moving from the Democratic Party to the Republican Party. They had their roots in Trotskyism, and the revelation of Stalin’s crimes pushed them toward the right wing of the Democratic Party.
The turning point came with Jeane Kirkpatrick’s 1979 article in the magazine Commentary on support for anticommunist dictatorships, “Dictatorships and Double Standards.” Ronald Reagan, who appreciated the analysis, asked to meet its author in late February 1980. Richard V. Allen, who would serve as Reagan’s national security adviser from 1981 to 1982, acted as the go-between and recruited 26 neoconservatives to join the president’s 68 official foreign policy advisers.
Dick Cheney’s role in the movement’s post-Cold War evolution
When George H. W. Bush succeeded Reagan in 1989, having served as his vice president through both his terms, he appointed Cheney secretary of defense. In that role, Cheney played a decisive part in shaping the post-Cold War neoconservative doctrine. Neither Dick Cheney nor Donald Rumsfeld belonged to the neoconservative movement. They were rather, as Ivo Daalder and James Lindsay put it, “assertive nationalists” eager to demonstrate America’s strength in the Middle East. But there were neoconservatives in the president’s entourage.
In 1992, as the Cold War ended, Dick Cheney and his inner circle played a key role in shaping the post-Cold War neoconservative doctrine. Pierre Bourgois, the author of a 2023 book on this current, recounts the episode. In February 1992, the “Defense Planning Guidance” document was produced, drafted by Paul Wolfowitz, Dick Cheney’s under secretary of defense for policy and a prominent neoconservative, working with Scooter Libby and Zalmay Khalilzad, themselves important members of the movement.
In this document we see the foundations of the post-Cold War neoconservative vision taking shape, within the framework of the American “unipolar moment.” The objective was to ensure that Washington’s hegemony endured. The New York Times published excerpts of the draft the following month, provoking controversy over the militarism and unilateralism that pervaded the text. Dick Cheney was then forced to revise it. He released a final version in January 1993 under the title “Defense Strategy for the 1990s: The Regional Defense Strategy,” less polemical but still insisting on increasing the defense budget despite the fall of the Soviet bloc.
Between 1995 and 2000, between the administrations of Bush senior and Bush junior, Dick Cheney ran the Texas oil services company Halliburton, which was at the center of several controversies over the lucrative contracts it secured after the war in Iraq, but also over its “aggressive” accounting practices during Cheney’s tenure.
The vice presidency from 2001 to 2009 and the Iraq war
The culmination of Dick Cheney's career was his vice presidency during George W. Bush's two terms.
Trailer for the film Vice (2019), about Dick Cheney, played by Christian Bale.
The attacks of September 11, 2001, allowed the neoconservatives in the Bush administration, such as Paul Wolfowitz, Doug Feith, Scooter Libby and Elliott Abrams, to impose their vision, with well-placed allies embodied by the familiar Cheney/Rumsfeld duo, the latter this time serving as secretary of defense, with Paul Wolfowitz as his deputy. The neoconservatives are held responsible for the wars in Afghanistan and Iraq.
With the end of the USSR, the Middle East had become their favorite subject. Dick Cheney was strongly influenced by Bernard Lewis and Fouad Ajami, two specialists of the Islamic world associated with this current.
Not all of the Bush administration was neoconservative, however. Secretary of State Colin Powell was a realist, frequently at odds with the neoconservatives. Condoleezza Rice, then national security adviser, was not one either, though she remained more neutral. Donald Rumsfeld came under fierce criticism from both the Republican Party and the neoconservatives, with Bill Kristol, a leading figure of the movement since the 1990s, going so far as to claim that the army deserved a better secretary of defense. Yet Rumsfeld had been closer to the movement than Dick Cheney, joining the Committee on the Present Danger in 1978 and co-directing the Committee for the Free World, two major neoconservative organizations of the era.
The neoconservatives' decline
The resignation of Secretary of Defense Donald Rumsfeld in November 2006 marked the beginning of their decline, followed by the fall of Scooter Libby, Dick Cheney's chief of staff, and by the 2006 departure of the hawk John Bolton, who had been appointed ambassador to the UN, in the wake of the failures in Iraq.
What of neoconservatism on the American right today? The Republican Party and the conservative ecosystem around it were profoundly reshaped by Donald Trump's rise to power in 2016, which broke ideologically with the so-called "fusionist" legacy, that blend of social conservatism and economic laissez-faire born in 1955, as noted earlier, with the National Review and later enriched by neoconservatism in foreign policy.
Breaking both with his Democratic predecessor and with the neoconservative line of the Bush era, Trump ushered in a new relationship with the world, built on criticism of neoliberal globalization, rejection of foreign policy interventionism and rejection of immigration, which the neoconservatives had accepted. In a Republican Party taken over by Trump and his allies, the last neoconservatives departed in 2021 with Liz Cheney, Dick's daughter and, like her father, a representative from Wyoming, who could not stomach the January 6 assault on the Capitol.
What remains of this current, central to the party for thirty years, is gathered among the "Never Trumpers", with the creation in 2018 of the website The Bulwark, backed by Bill Kristol and built with the teams from his Weekly Standard. In May 2020 Kristol even launched his Republican Accountability Project, bringing together Republicans campaigning against Trump in the 2020 presidential election through a vast $10 million (about €8.6 million) advertising campaign targeting college-educated white voters in key states.
The historic think tanks of neoconservatism, the American Enterprise Institute, which fed the Bush administrations so abundantly, but also the Hudson Institute, are marginalized today, while the Heritage Foundation, the historic think tank of Reaganism, has completed its ideological makeover under Kevin D. Roberts, working as hard at supplying thousands of résumés to the second Trump administration as at handing it the ready-made Project 2025.
Gabriel Solans does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.
Have you ever thought about what your earliest childhood memory is? It probably involves some intense or novel sensation or emotion you experienced around the age of 3, 4 or 5. It is curious that we keep so few memories of such an important stage of our development. I, for example, have only three memories of my time at nursery school: making mud balls in the playground, singing in a circle, and asking the cook for bread through the barred hatch that opened onto the kitchen (I doubt they were starving us, but just before lunch some of us ran out of patience).
My nursery was just that, a "guardería" (daycare), as they were called back then. The oldest children were five. Although I have few memories, I know I hardly spent any time sitting at a desk. We drew, we dressed up, we listened to stories. Above all, we played.
In the mid-2000s in Spain, this educational stage was moved into public primary schools in order to offer universal, free early childhood education to all children between 3 and 6 years old. It was a major legislative milestone, a response to a social demand concerning not only work-life balance but also the right to early developmental support, so as to guarantee every child's chances of academic success. And so boys and girls began going to school at age 3. How were the spaces adapted for them? Basically, and depending on each school's resources, classrooms were fitted out with tiny desks and stackable mats.
But reproducing that furniture at child scale is not enough to adapt classrooms to the specific needs of this age. In fact, the desks should be pushed into a corner most of the time. When they occupy the central space, we tend to organize the day's activities with each child sitting in their own seat. And that is not a good idea, for several reasons.
Imagine the mind of a three-year-old. The "executive" functions (capacities essential for filtering information from the environment, processing it and making appropriate decisions) are not mature: they are only just beginning to develop. Many children do not yet know how to concentrate on a specific task for very long, or how to ignore irrelevant stimuli. They are distracted by a passing fly. In other words, watching what a fly does is just as interesting to them, if not more so, than watching how a particular letter is traced. And that is entirely normal.
Moreover, the need for movement and for less structured spaces is not exclusive to early childhood. Students of any age can benefit from more flexible, creative design.
Over the past few weeks we have published articles by experts explaining how we can use what we know about children's psychomotor development to improve the care given to the youngest children, to better develop their motor skills and their executive functions, the ones that will determine how well or poorly they adjust to their academic careers and to life in general. Even to help address some difficulties before they arise, such as attention deficit and hyperactivity. Because helping the most "restless" children let out their energy is also a way of helping them learn to slow down.
What if particle physics could improve the way we cook pasta? By scrutinizing its structure at the atomic scale, researchers have worked out how gluten keeps spaghetti firm and why gluten-free versions remain so fragile.
Whether you prefer your spaghetti al dente or deliciously soft, perfection is not always easy to achieve at home. Many of us have watched our pasta turn into a beige mush, especially when it comes to gluten-free alternatives.
So how much water and salt should you really use, and how long should pasta cook for the best result? And above all, how should you adapt your cooking method when using gluten-free pasta? A recent study that my colleagues and I conducted, published in Food Hydrocolloids, provides answers by unveiling the physics of the cooking process.
Turning to the Diamond Light Source, the United Kingdom's national synchrotron (a circular particle accelerator), we studied how X-rays scatter off pasta in order to reveal its internal structure. We then went to ISIS and to the Institut Laue-Langevin, two research facilities located in the UK and France respectively, to use neutrons (which, together with protons, make up the atomic nucleus) to analyze the microstructure of regular and gluten-free spaghetti subjected to different cooking conditions.
The study shows how the hidden structure of pasta changes as it cooks, and why gluten-free versions behave so differently.
This setup allowed us to examine the structure of the starch and gluten in spaghetti at very fine scales, ranging from a few tens of times the radius of an atom to several thousand times. We were thus able to compare the transformations that take place in regular and gluten-free pasta under various cooking conditions, for instance when it is overcooked or cooked without salt.
Our experiments allowed us to "see" the different components of pasta separately. By mixing ordinary water with "heavy water" (which contains an isotope called deuterium), we could make either the gluten or the starch invisible to the neutron beam. In this way we could effectively isolate each structure in turn and understand the respective roles of starch and gluten during cooking.
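To give a feel for how this contrast-matching trick works in general, here is a minimal Python sketch. The neutron scattering length density (SLD) of an H2O/D2O mixture varies linearly with the D2O fraction, so the solvent can be tuned to match the SLD of one component, which then stops contributing to the signal. The SLD values used below for starch and gluten are illustrative placeholders, not figures from our study.

```python
# Minimal sketch of neutron contrast matching: find the D2O fraction whose
# scattering length density (SLD) equals that of a given component, making
# that component effectively invisible to the neutron beam.

SLD_H2O = -0.56e-6  # SLD of light water, in A^-2 (standard tabulated value)
SLD_D2O = 6.36e-6   # SLD of heavy water, in A^-2 (standard tabulated value)

def match_point(sld_component: float) -> float:
    """D2O volume fraction at which the solvent SLD equals the component SLD
    (linear mixing rule)."""
    return (sld_component - SLD_H2O) / (SLD_D2O - SLD_H2O)

# Illustrative (hypothetical) SLD values for the two pasta components:
components = {"starch": 1.8e-6, "gluten": 2.5e-6}

for name, sld in components.items():
    print(f"{name}: ~{match_point(sld):.0%} D2O hides this component")
```

Measuring the same sample in two such solvent mixtures, one matching the starch and one matching the gluten, is what lets each network be observed on its own.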
The role of gluten and salt
Our study shows that in regular pasta, gluten acts as a solid scaffold that holds the starch granules in place even during boiling, which gives pasta its firmness and its slow digestion. In gluten-free pasta, by contrast, the starch granules swell and collapse more easily, which explains their mushy texture and their faster breakdown when this kind of pasta is cooked under less-than-ideal conditions.
We also studied the effect of salt in the cooking water on the structure of the pasta. We found that salt does more than improve the taste: it strongly influences the microstructure of spaghetti. When regular pasta is boiled in salted water, the gluten keeps its structure and the starch granules are less altered by the cooking process.
So how much salt should you add to preserve the microscopic structure of your pasta? Our study found that the ideal is seven grams of salt per litre of water, with more water needed for larger portions of pasta. The ideal cooking time is ten minutes for regular pasta and eleven minutes for gluten-free pasta. Conversely, when the salt concentration was doubled, the internal order degraded faster and the structure of the starch granules was significantly altered by cooking.
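As a quick worked example of these numbers, the short sketch below scales the salt to the volume of water. Only the 7 g of salt per litre and the 10- and 11-minute timings come from the results above; the 1 litre of water per 100 g of dry pasta ratio is an assumed kitchen heuristic, not a finding of the study.

```python
# Turn the study's 7 g of salt per litre into kitchen quantities.
SALT_G_PER_LITRE = 7.0                 # from the study
WATER_L_PER_100G_PASTA = 1.0           # assumed kitchen heuristic, not from the study
COOK_TIME_MIN = {"regular": 10, "gluten_free": 11}  # from the study

def cooking_plan(pasta_grams: float, kind: str = "regular") -> dict:
    water_l = pasta_grams / 100.0 * WATER_L_PER_100G_PASTA
    return {
        "water_litres": round(water_l, 2),
        "salt_grams": round(water_l * SALT_G_PER_LITRE, 1),
        "cook_minutes": COOK_TIME_MIN[kind],
    }

print(cooking_plan(250))                 # 2.5 L water, 17.5 g salt, 10 minutes
print(cooking_plan(250, "gluten_free"))  # same water and salt, 11 minutes
```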
For gluten-free pasta, the findings were different again, because of the absence of the protection provided by gluten. Even small amounts of salt could not compensate for this absence. The artificial compounds based on modified starches that manufacturers use to replace gluten degraded quickly. The most extreme example of this degradation occurred when gluten-free spaghetti was overcooked, for instance for thirteen minutes instead of eleven, in heavily salted water.
The main conclusion is therefore that gluten-free pasta is structurally more fragile and less tolerant of prolonged cooking or of the wrong proportion of salt.
Improving gluten-free alternatives
Understanding the structure of pasta at such tiny scales, invisible even under a microscope, will help in designing better gluten-free foods. The aim, in particular, is to create gluten-free alternatives that are more resistant to poor cooking conditions and whose texture comes closer to that of regular spaghetti.
Regular wheat pasta has a low glycemic index because gluten slows the breakdown of starch granules during digestion. Gluten-free pasta, made from rice and corn flours, often lacks this structure, which leads to a faster release of sugars. Thanks to neutron scattering, food scientists can now identify which ingredients and which cooking conditions best reproduce the structure of gluten.
It is also an illustration of how cutting-edge experimental tools, used mainly for fundamental research, are now transforming food research. Neutron scattering has played an essential role in understanding magnetic materials, batteries, polymers and proteins. It can now also explain the behavior of our everyday food at the microscopic scale.
Andrea Scotti receives funding from the Knut and Alice Wallenberg Foundation and the Swedish Research Council.