Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

Source: The Conversation – UK – By Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science, Anglia Ruskin University

For half a century, computing advanced in a reassuring, predictable way. Transistors – devices used to switch electrical signals on a computer chip – became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing.

These faster chips enable greater computing power by allowing devices to perform tasks more efficiently. As a result, we saw scientific simulations improving, weather forecasts becoming more accurate, graphics more realistic, and later, machine learning systems being developed and flourishing. It looked as if computing power itself obeyed a natural law.

This phenomenon became known as Moore’s Law, after the businessman and scientist Gordon Moore. Moore’s Law summarised the empirical observation that the number of transistors on a chip approximately doubled every couple of years. This doubling also allowed devices to shrink, driving miniaturisation.
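To get a sense of the scale involved, here is a small illustrative sketch in Python. It is our own back-of-the-envelope illustration of what a two-year doubling period implies over five decades, not anything drawn from Moore’s observation beyond the doubling rule itself:

    # Illustrative only: the compound growth implied by Moore's law.
    def growth_factor(years: float, doubling_period: float = 2.0) -> float:
        """Multiplication in transistor count after `years` of growth,
        assuming one doubling every `doubling_period` years."""
        return 2 ** (years / doubling_period)

    # Fifty years at one doubling every two years is 25 doublings:
    print(f"{growth_factor(50):,.0f}x")  # prints 33,554,432x

A roughly 33-million-fold increase in 50 years: that compounding is why the gains felt automatic for so long.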

That sense of certainty and predictability has now gone, and not because innovation has stopped, but because the physical assumptions that once underpinned it no longer hold.

So what replaces the old model of automatic speed increases? The answer is not a single breakthrough, but several overlapping strategies.

One involves new materials and transistor designs. Engineers are refining how transistors are built to reduce wasted energy and unwanted electrical leakage. These changes deliver smaller, more incremental improvements than in the past, but they help keep power use under control.

Another approach is changing how chips are physically organised. Rather than placing all components on a single flat surface, modern chips increasingly stack parts on top of each other or arrange them more closely. This reduces the distance that data has to travel, saving both time and energy.

Perhaps the most important shift is specialisation. Instead of one general-purpose processor trying to do everything, modern systems combine different kinds of processors. Traditional processing units, or CPUs, handle control and decision-making. Graphics processors, or GPUs, are powerful processing units originally designed to handle the demands of computer-game graphics and other highly parallel tasks. AI accelerators (specialised hardware that speeds up AI tasks) focus on large numbers of simple calculations carried out in parallel. Performance now depends on how well these components work together, rather than on how fast any one of them is.

Alongside these developments, researchers are exploring more experimental technologies, including quantum processors (which harness the power of quantum science) and photonic processors, which use light instead of electricity.

These are not general-purpose computers, and they are unlikely to replace conventional machines. Their potential lies in very specific areas, such as certain optimisation or simulation problems where classical computers can struggle to explore large numbers of possible solutions efficiently. In practice, these technologies are best understood as specialised co-processors, used selectively and in combination with traditional systems.

For most everyday computing tasks, improvements in conventional processors, memory systems and software design will continue to matter far more than these experimental approaches.

For users, life after Moore’s Law does not mean that computers stop improving. It means that improvements arrive in more uneven and task-specific ways. Some applications, such as AI-powered tools, diagnostics, navigation and complex modelling, may see noticeable gains, while general-purpose performance increases more slowly.

New technologies

At the Supercomputing SC25 conference in St Louis, hybrid systems that mix CPUs (processors) and GPUs (graphics processing units) with emerging technologies such as quantum or photonic processors were increasingly presented and discussed as practical extensions of classical computing. For most everyday tasks, improvements in classical processors, memories and software will continue to deliver the biggest gains.

But there is growing interest in using quantum and photonic devices as co-processors, not replacements. Their appeal lies in tackling specific classes of problems, such as complex optimisation or routing tasks, where finding low-energy or near-optimal solutions can be exponentially expensive for classical machines alone.

In this supporting role, they offer a credible way to combine the reliability of classical computing with new computational techniques that expand what these systems can do.

Life after Moore’s Law is not a story of decline, but one that requires constant transformation and evolution. Computing progress now depends on architectural specialisation, careful energy management, and software that is deeply aware of hardware constraints. The danger lies in confusing complexity with inevitability, or marketing narratives with solved problems.

The post-Moore era forces a more honest relationship with computation, where performance is no longer something we inherit automatically from smaller transistors, but something we must design, justify and pay for: in energy, in complexity and in trade-offs.

The Conversation

Domenico Vicinanza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Moore’s law: the famous rule of computing has reached the end of the road, so what comes next? – https://theconversation.com/moores-law-the-famous-rule-of-computing-has-reached-the-end-of-the-road-so-what-comes-next-273052

The India-UK trade deal is a prime opportunity to protect some of the world’s most vulnerable workers

Source: The Conversation – UK – By Pankhuri Agarwal, Leverhulme Early Career Research Fellow, University of Bath; King’s College London

AlexAnton/Shutterstock

A new trade agreement between India and the UK is due to come into force this year. The deal is expected to completely remove tariffs from nearly 99% of Indian goods, including clothing and footwear, that are headed for the UK.

In both countries, this has been widely celebrated as a win for economic growth and competitiveness. And for Indian garment workers in particular, the trade agreement carries real promise.

This is because in recent years, clothing exports from India have declined sharply as well-known fashion brands moved production to places like Morocco and Turkey, which were cheaper.

India’s internal migrant workers (those who move from one region of the country to another looking for work) have been hit hardest, often waiting outside factories for days for the chance of a single shift of insecure work.

Against this backdrop, more opportunities for steadier employment and a more competitive sector under the new trade agreement look like a positive outcome. But free trade agreements are not merely economic instruments – they shape labour markets and working conditions along global supply chains.

So, the critical question about this trade deal is not whether it will generate employment in India – it almost certainly will – but what kind of employment it will create.

Few sectors illustrate this tension more clearly than the manufacture of clothing. Garments are among India’s biggest exports, and the sector is expected to be one of the primary beneficiaries of the trade deal.

But it is also among the country’s most labour-intensive and exploitative industries. From denim mills in Karnataka to knitwear and spinning hubs in Tamil Nadu, millions of Indian workers receive low wages and limited job security.

Research also shows that gender and caste-based exploitation is widespread.

So, if the trade deal goes ahead without addressing these issues, it risks perpetuating a familiar cycle where we see more orders and more jobs, but the same patterns of unfair wages, insecurity and – in some cases – forced labour.

Marginalised

For women workers, who form the backbone of garment production in India, these vulnerabilities are even sharper.

Gender-based violence, harassment and unsafe working conditions have been documented repeatedly across India’s export-oriented factories. Employment regimes that bound young women to factories with the promise of future benefits that often never materialised show how caste- and gender-based discrimination has long been embedded within the sector.

Even in factories that formally comply with labour laws, wages that meet basic living costs remain rare. Many workers earn wages which are not enough to pay for housing, food, healthcare and education, pushing families into debt as suppliers absorb price pressures imposed by global brands.

On the plus side, the India-UK agreement does not entirely sidestep these issues. There is a chapter which outlines commitments to the elimination of forced labour and discrimination.

But these provisions are mostly framed as guidance rather than enforceable obligation. They rely on cooperation and voluntary commitments, instead of binding standards.

While this approach is common in trade agreements, it limits this deal’s capacity to drive meaningful change. But perhaps even more striking is what has been left out.

Despite the role that India’s social stratification system, known as caste, plays in shaping the country’s labour markets, it is entirely absent from the text of the agreement.

Yet caste determines who enters garment work and who performs the most hazardous and lowest-paid tasks. A significant proportion of India’s garment workforce comes from marginalised caste communities with limited bargaining power and few alternatives.

By addressing labour standards without acknowledging caste, the free trade agreement falls short. It could have required the monitoring of issues concerning caste and gender, and demanded grievance mechanisms and transparency measures that account for social hierarchies.

Instead, a familiar gap remains between commitments to “decent work” on paper and the reality which exists on factory floors.

Missed opportunity

If the India-UK deal is to be more than a tariff-cutting exercise, protections around caste and gender must be central to its implementation.

The deal is rightly being celebrated in both countries as an economic milestone. For the UK, it promises more resilient supply chains and cheaper imports. For India, it offers renewed export growth and the prospect of more stable employment.

But the agreement’s long-term legitimacy will rest on whether it also delivers social justice.

India can use the deal to strengthen labour protections and ensure growth does not come at the cost of dignity and safety. The UK, as a major consumer market, can use its leverage to insist on enforceable standards for fair wages and decent work.

For trade deals do not simply move goods across borders – they shape the conditions under which those goods are produced.

The Conversation

Pankhuri Agarwal receives funding from the Leverhulme Trust as an Early Career Research Fellow.

ref. The India-UK trade deal is a prime opportunity to protect some of the world’s most vulnerable workers – https://theconversation.com/the-india-uk-trade-deal-is-a-prime-opportunity-to-protect-to-some-of-the-worlds-most-vulnerable-workers-274055

Americans have fought back against authoritarianism at home before

Source: The Conversation – UK – By George Lewis, Professor of American History, University of Leicester

The first year of Donald Trump’s second term has been marked by increasing authoritarianism at the heart of the US federal government. He has openly defied court orders, worked beyond the established remit of executive power and is making no secret of his strongman ambitions. History tells us that such an authoritarian presence is not new and offers a blueprint for how it might be overcome.

From the 1930s to the 1970s, a congressional committee called the House Un-American Activities Committee (Huac) operated with near impunity. Granted extraordinary powers to investigate subversion and subversive propaganda, Huac sidelined political opponents, ruined careers and crushed organisations.

In popular memory, Huac remains inexorably tied to the “red scare” politics of the 1950s when cold war tensions led to intense anti-communist paranoia in the US. But in reality, it operated across five decades and its demise only came with the careful plotting of a concerted and organised campaign.

Huac derived much of its power from the vagueness of its mandate, with no objective definition of un-Americanism ever being universally agreed. Earl Warren, the chief justice of the US supreme court at the time, even openly questioned in 1957 whether un-Americanism could be defined and, thus, whether it ought to be investigated.

But the committee was not cowed by that lack of definition. For decades, Huac sought to be seen as the sole arbiter of the meaning of un-Americanism. That way, the committee could target its own enemies at will under the guise of investigating un-Americanism for the public good.

Curbing Huac’s authoritarianism was a delicate business. It had extraordinary powers, ill-defined parameters and vituperative members. The committee had also been a fixture of American life for so long that its existence seemed inevitable. The answer to overcoming its authoritarianism came in two separate stages.

Fighting against authoritarianism

First was the building of a broad coalition. Huac had many opponents in both politics and culture; the issue was uniting them behind a single cause.

Individuals, groups and protest movements that had been operating separately had to be encouraged to put their specific concerns aside and coalesce instead around an overall concern for democratic values. It was here that civil liberties protesters first forged an alliance with their civil rights counterparts at the turn of the 1960s.

Civil liberties organisations were primarily concerned with the free speech provision of the US constitution’s first amendment. Civil rights groups, on the other hand, were most concerned with the 14th and 15th amendments’ equal rights provisions. Huac’s assault on American principles was a reminder that these were amendments to the same document and it was the constitution as a whole that needed protection.

Momentum was key. A Huac memo from around that time recorded American civil rights leader John Lewis stating that “civil rights and liberties are the same”. Lewis worked across generational and geographical divides to unite sit-in students at segregated public spaces in the southern states with students who stormed Huac hearings in the west.

Gender divides also allowed women’s activists to humiliate the masculine conservatism of Huac committeemen. Poems described the committee shivering in its own manure, vinyl records captured anti-Huac protests and singers satirised its proceedings. The supreme court confronted Huac’s overreach, which activists and public intellectuals translated into popular broadsides.

However, this activism alone was insufficient. The second stage in bringing Huac’s authoritarianism to heel saw the carefully planned intervention of national mainstream politicians. Here, Congressman Jimmy Roosevelt provided bold but also tactically astute leadership. He delivered a speech from the floor of the US Capitol in 1960 that changed the movement from one designed to protest Huac’s authoritarianism to one demanding the committee’s outright abolition.

Roosevelt used the committee’s own actions against it. As he recognised, Huac’s meticulous record keeping also detailed its own failings. It spent public money on propaganda and its members, including staff director Richard Arens, were found to have been in the pay of scientific racists even as they investigated the civil rights movement. They also used designated wartime powers in peacetime.

Roosevelt stepped back, though, and concentrated on questions of principle at the heart of American democracy and the nation’s founding ideals. In his speech, Roosevelt told the House that Huac was “at war with our profoundest principles”. The un-American committee had used its powers in un-American ways.

James Roosevelt was a prominent opponent of Huac in the 1950s and 60s.
Bettmann Archive / Wikimedia Commons

By appealing to matters of principle, Roosevelt was also able to appeal to principled members of the new congressional intake following elections that year which saw Democrat John F. Kennedy enter the White House.

Liberal House members had long given Huac a wide berth on account of its reputation. But riding a wave of liberalism, and encouraged by Roosevelt’s political leadership, some of that new intake now actively sought appointment to Huac so they could oppose its authoritarianism head on.

For the first time, the committee shifted from trying to frame civil rights activists as un-American to investigating the un-Americanism of the Ku Klux Klan. Its reformed membership also began opposing the scale of the congressional appropriations that had underwritten its investigations.

Its remaining conservative members were drawn into making increasingly desperate claims to maintain their national profile, but succeeded only in drawing the committee towards ridicule and irrelevance. Huac limped towards the end of the decade and was finally dissolved in 1975.

History tells those in Washington today that democratic pressures can be brought to bear on an authoritarian presence, however entrenched it may appear. Building a broad coalition is vital, as is labelling authoritarian behaviour appropriately. Denying any one individual ownership of what constitutes un-Americanism is equally important.

The record also shows that disparate groups can apply pressure most effectively when they are bound to a single issue. Here, as in the campaign against Huac, that issue is the principle of American democracy.

Roosevelt left three lessons for US citizens. First, that the momentum generated by a growing popular coalition can be harnessed in national politics. Second, that bold and principled leadership brings reward. And third, that elections can be the harbinger of significant and substantive change.

The Conversation

George Lewis has received funding from the British Academy for his research into un-Americanism.

ref. Americans have fought back against authoritarianism at home before – https://theconversation.com/americans-have-fought-back-against-authoritarianism-at-home-before-273638

Andy Burnham: what now for the King in the North?

Source: The Conversation – UK – By Alex Nurse, Reader in Urban Planning, University of Liverpool

Andy Burnham, the mayor of Greater Manchester, has been blocked from standing for parliament – a step that would have been essential to mount a leadership challenge against Keir Starmer.

Andrew Gwynne, who had been suspended from the party for some time, has stepped down as MP for Gorton and Denton, citing ill health. A byelection will now be held in the seat, which is in the Greater Manchester area – Burnham’s home turf. But the party’s National Executive Committee has voted eight to one to prevent Burnham from standing in the byelection, citing the expense of running a mayoral election to replace him as the main reason.

However, their ruling has been taken as a signal that Starmer is too worried about the threat Burnham would pose from the backbenches to allow him to return to Westminster.

Starmer is right to be worried. Burnham follows a long line of politicians who have hovered in the background while a party leader struggles.

Margaret Thatcher spent the second half of her premiership heading off the threat from Michael Heseltine. He didn’t replace her but she was toppled and John Major assumed power as a consequence of those tussles.

Tony Blair and Gordon Brown’s rivalry was infamous and at times all-consuming. Both David Cameron and Theresa May had to deal with Boris Johnson’s ambition to occupy their job. And we know how that ended.

In some ways, Burnham is trudging a similar path to Johnson: a former MP who left parliament to take up office as the mayor of a large city, and who enjoys a national profile that perhaps exceeds that of his office. However, the similarities end there.

Burnham served in Blair’s government, before holding multiple roles within Brown’s cabinet, including as health secretary. Burnham also tested his leadership credentials on the Labour membership on two occasions – losing to Ed Miliband in 2010 and Jeremy Corbyn in 2015.

Burnham has often spoken of his disdain for the Westminster model and has done very well for himself out of being a mayor rather than an MP. It’s true that he was taking what many saw as a convenient off ramp out of Jeremy Corbyn’s shadow cabinet when he initially ran for the position, but he won the 2017 election with 63% of the vote. He increased his majority upon re-election in 2021 and has become the figurehead of the English mayors.

His most impressive credentials lie in his approach to transport. He has taken the lead on bringing buses back into public ownership – a move that has been popular among people frustrated by spiralling fares. Greater Manchester was the first city outside London to appoint a walking and cycling commissioner – something then copied by every other mayor. And he has ultimately built what has become known as the Bee Network – a fully integrated system of tram and bus lines and cycle routes.

Of course, not all of Burnham’s actions have seen successes. For example, the ten-year plan for Greater Manchester, which is overseen by his office, has become increasingly fractured as local authorities break away from it – particularly over concerns that its housing targets aren’t achievable.

However, it was during the COVID-19 pandemic that he really burnished his credentials as the so-called “King in the North” – a title that has endured in popularity longer than the TV show from which it was derived. Amid confusing advice over lockdowns and inconsistent support from national government, Burnham took to giving live press conferences on the steps of Manchester town hall railing against Westminster.

He eventually won some concessions from the Johnson government over lockdown restrictions in his region. This, perhaps for the first time, really showcased the value of a talismanic mayor who could argue for their city, and certainly reaffirmed Burnham’s position as a national player.

A king on the march?

Given his two previous tilts at the role, Burnham’s leadership ambitions have rarely been in doubt. Indeed they have always bubbled beneath the surface. Although he has little choice but to lick his wounds for now, Burnham’s status as a potential replacement for Starmer remains undiminished.

There will also, undoubtedly, be others in the Labour party who have their own leadership ambitions, and who will have mixed emotions that the main stalking horse liable to topple Starmer and instigate a leadership race has been stabled.

Perhaps in a case of life imitating art, we should remember that in Game of Thrones the King in the North is fatally undone by poor tactical decisions. The most successful example of returning to parliament and obtaining power remains Johnson.

Even so, this took nearly four years and a party that largely wanted him back. With his path to Westminster currently blocked, that timeline might leave Burnham questioning his long-term strategy.

The Conversation

Alex Nurse does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Andy Burnham: what now for the King in the North? – https://theconversation.com/andy-burnham-what-now-for-the-king-in-the-north-266455

Creatine for women: should you add this supplement into your diet?

Source: The Conversation – UK – By Justin Roberts, Professor of Nutritional Physiology, Anglia Ruskin University

Creatine supplements can be particularly beneficial for building strength. Chay_Tee/Shutterstock

Creatine is one of the most popular sports supplements out there. It’s shown to help build muscle and improve strength, boost speed and power in athletes and benefit sports performance all round.

Research also suggests this superstar nutrient may have other health benefits, including for brain function, memory, bone health and even mood.

While creatine has been a mainstay supplement for gym enthusiasts, most of the research on this supplement’s benefits has been conducted on men. With recent increased advertising specifically promoting creatine for women, there is growing interest in whether this nutrient can also be equally beneficial for them.

It’s already clear from the research that creatine could benefit women by reducing fatigue during exercise. It may also be particularly beneficial for maintaining muscle as women get older.

Creatine is a natural compound produced in the body from several amino acids (the building blocks of protein). We can also get it from protein-rich foods, such as meat and seafood.

Creatine plays a role in supplying short-term energy, particularly during intense exercise, helping us recover more quickly between efforts. This makes it possible to do more work each time we train, leading to around 20% greater performance gains when the supplement is taken regularly.

We naturally use around 2g-4g of creatine per day. But because our bodies don’t store much creatine, we need to consume it in our diet or get it from supplements. Think of it like a short-term energy store that needs topping up.

Around 1kg of raw beef or seafood would supply around 3g-5g of creatine. However, cooking can reduce creatine content. This makes it challenging to consistently get enough from the diet alone, which is where supplements can be useful.

Research also shows that vegans, vegetarians and women tend to have diets lower in creatine – meaning lower overall body stores. However, women do appear to store a bit more creatine in their muscles than men, suggesting they may respond to supplementation more slowly or differently than men.

The most studied form of creatine is creatine monohydrate. This can be taken as a powder, capsule or gummy. If women consume around 3g-5g of creatine a day as a supplement, it will help gradually increase muscle creatine stores over a period of two to four weeks.

But if you’re looking to boost muscle stores faster, research shows taking around 20g of creatine a day for seven days (before dropping down to 3g-5g daily) can safely boost stores.

Creatine benefits for women

There are many factors which influence women’s health over their lifetimes. These include hormonal changes, the gradual loss of muscle that comes with ageing, loss of bone density and slower metabolism post-menopause – as well as fluctuating energy levels and poor concentration or focus.

Resistance exercise may be beneficial in mitigating some of these changes, particularly in supporting muscle mass and function, bone health and energy levels.

Daily creatine may have many benefits for women’s health and fitness.
SvetikovaV/Shutterstock

This is where creatine comes in. Doing resistance training for several weeks while taking around 3g-5g of additional creatine per day can enable you to maintain the quality and consistency of your training. This combination can be particularly beneficial for strength in mid to later life.

Women who take creatine consistently are shown to have improved muscle function, which ultimately can impact quality of life. There’s also some evidence that taking it alongside resistance training may support bone health in postmenopausal women – although not all studies agree on this.

It’s worth noting as well that creatine does not appear to lead to weight gain or cause a bulky, muscular appearance, which are often concerns for women thinking about taking the supplement.

More recently, research has been exploring whether creatine can affect brain health, cognitive function and possibly even mood in older women. Evidence also shows that in younger women, it can improve mood and cognitive function after a bad night’s sleep.

There’s emerging evidence as well that taking 5g of creatine daily can help younger women sleep longer (particularly on days they’ve done a workout). The same dose may also improve sleep quality in perimenopausal women – possibly by supporting the energy required by the brain.

Another study also reported greater reductions in depressive symptoms in women taking 5g of creatine daily alongside antidepressants, compared to those just taking antidepressants.

Given many women report experiencing symptoms such as “brain fog”, poor concentration, stress, low energy and poor sleep during their menstrual cycle and throughout the menopause, this could make creatine a low-cost solution for many of these symptoms. However, a higher dose of creatine may be needed daily (around 5g-10g) to increase the brain’s creatine stores.

Creatine is by no means a cure-all supplement, and clearly more research on women is needed. But the research so far shows that even just a small amount of creatine daily – when paired with a healthy lifestyle and resistance training – holds promise in supporting many aspects of women’s health.

The Conversation

Professor Justin Roberts is employed by Anglia Ruskin University and Danone Research & Innovation, and has previously received external research funding unrelated to this article.

ref. Creatine for women: should you add this supplement into your diet? – https://theconversation.com/creatine-for-women-should-you-add-this-supplement-into-your-diet-272773

Can vertical farming feed cities? Moving beyond the technological mirage

Source: The Conversation – France (in French) – By Marie Asma Ben-Othmen, Lecturer-Researcher in Environmental Economics & Agricultural Economics, Director of the Master of Science Urban Agriculture & Green Cities, UniLaSalle

Vertical farming has long been presented as a miracle solution for feeding megacities while reducing their environmental footprint. But behind the high-tech promises, the reality is mixed. Between spectacular successes in Asia and resounding failures in Europe and the United States, the model is still searching for its way.

Vertical farming rests on a simple idea: producing indoors and without soil, in fully controlled environments (including light, temperature, humidity and nutrients), on vast stacked shelves at the heart of cities. At first glance, its advantages seem irresistible. Pesticide-free, this method of cultivation uses up to 90% less water thanks to recycling, notably hydroponics, and can operate 365 days a year with high yields, independent of the whims of the climate. It thus holds out the promise of fresh, local production directly connected to short supply chains.

This prospect has sparked worldwide enthusiasm. In Japan, the company Spread has automated indoor lettuce production on vast shelves, in sterile environments, at industrial scale. Singapore has placed vertical farms at the heart of its “30 by 30” goal of meeting 30% of its food needs locally by 2030. Gulf states such as the United Arab Emirates and Kuwait, confronted with scarce arable land, see them as a strategic tool, while in the United States, start-ups have raised hundreds of millions of dollars on the vision of an ultra-high-tech food future. But stinging failures have also exposed the limits of the model, which is gradually trying to reinvent itself in response.

The ingredients of success

The vertical farms that genuinely work share one thing in common: they are born in contexts where they answer a structural need. In regions where land is scarce, expensive or arid, producing upward – vertically – responds effectively to geographic constraints.

In Singapore or Dubai, for example, the state plays a decisive role by financially supporting infrastructure, reducing investment risk and integrating these technologies into national food strategies.

The success of these models also rests on their insertion into local dynamics. In Dubai, vertical farms do not merely produce food: they also contribute to food security, technical training, skilled employment and public awareness.

The island city-state of Singapore also relies on advanced hydroponic and aeroponic technologies, with farming towers integrated into the urban fabric, illustrating how agriculture can adapt to land and urban constraints. Technological advances – notably high-efficiency LED lighting, extensive automation and AI to optimise plant growth – are improving the performance of the best-designed models.

Despite challenges (energy costs, economic fragility), these farms are still regarded today as a “model for the future” for densely populated city-states, showing that the initiative is part of a long-term policy rather than a passing fad.

Cost, energy and dependence on venture capital

Despite these successes, many projects have failed, revealing the fragilities of a model far less robust than it appears.

The first obstacle is energy. Lighting, air-conditioning and running a fully controlled facility demand a large amount of electricity, which makes the activity costly and sometimes environmentally questionable when the energy is not low-carbon.

The second obstacle is economic: margins on aromatic herbs and salad greens are thin, and the model often depends on venture capital rather than stable revenues. This is what precipitated the difficulties of Infarm in Europe and AeroFarms in the United States.

Some farms have also found themselves disconnected from local food needs, producing volumes or products that did not match what their regions expected. Poorly anchored locally, the model then becomes vulnerable to the slightest fluctuation in financial or energy markets.

New models in development

Faced with these limits, a new generation of projects is emerging, seeking to combine technology, integration and urban demand through vertical micro-farms attached to supermarkets, guaranteeing freshness, visibility and lower logistics costs.

Other initiatives explore energy synergies, coupling food production with heat recovered from data centres, developing photovoltaic greenhouses or using urban district heating networks.

Vertical farms are also evolving toward more educational and demonstrative roles: even after its bankruptcy, part of the Infarm model continues to inspire urban farms where production serves as much to raise public awareness as to supply fresh produce. These hybrid approaches reflect a growing maturity in the sector, which prioritises territorial relevance over mass production.

Toward more sustainable vertical farming?

To become a credible lever for the food transition, vertical farming must clarify its purpose. Producing more is not enough: the point is to contribute to the food resilience of cities, to complement more “horizontal” forms of urban agriculture – such as productive rooftops, market-garden belts and community gardens – and to fit into territorial food policies.

In particular, France’s territorial food projects (projets alimentaires territoriaux, or PAT) can, through their ambition, bring a territory’s various stakeholders together around food. They play a key role in integrating these systems coherently, linking them to issues of nutrition, accessibility, distribution and education. Vertical farming will become sustainable only if it is conceived systemically: energy-frugal, locally anchored and compatible with climate goals.

Far from being a panacea, it is instead a laboratory of innovation. Where it succeeds, it is because it fits into a systemic vision of the food transition, combining technology, territorial governance and energy sobriety. Its future will depend less on the height of its towers than on how it meshes with its territories and helps strengthen the capacity of cities to feed themselves in the face of climate and geopolitical crises.

The Conversation

Marie Asma Ben-Othmen does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than her research institution.

ref. Can vertical farming feed cities? Moving beyond the technological mirage – https://theconversation.com/lagriculture-verticale-peut-elle-nourrir-les-villes-comment-depasser-le-mirage-technologique-270890

Where do seashells come from?

Source: The Conversation – USA – By Michal Kowalewski, Thompson Chair of Invertebrate Paleontology, University of Florida

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Where do seashells come from? – Ivy, age 5, Phoenix, Arizona

Seashells are so plentiful that you may sometimes take them for granted.

Scientists have estimated that just one small stretch of beaches along the Gulf of California contained at least 2 trillion shells. That is 2 followed by 12 zeros.

2,000,000,000,000 shells – in just one small stretch of coast! Imagine if every human alive today went there to collect shells. Each of them would be able to claim around 250 shells.

But where do all these shells come from, and what tales can they tell us?

We are a paleontologist and marine ecologist, and our scientific research involves looking at shells and discovering where they came from and how old they are.

Skeletons on the beach

Shells are simply skeletons of animals, the remains of dead organisms. But unlike humans and other vertebrates, mollusks such as snails, clams, oysters and mussels have an exoskeleton, meaning it’s on the outside of their bodies.

When people talk about seashells, they usually mean shells of mollusks. And these are, indeed, the most common types of shells we find on the beach today. Many other marine animals also make skeletons, including, among others, echinoids such as sand dollars that make internal skeletons called tests, and brachiopods, also known as “lampshells.”

These marine animals build their own shells to protect their soft bodies from external threats, such as predators or changes that happen around them in their habitat. Shells can also help these sea creatures stay stable on the seafloor, grow bigger or move around more efficiently.

Just as our bones provide a scaffold to which we attach our muscles, shells provide a rigid frame to which sea creatures attach their muscles. Some mollusks, such as scallops, can even swim by using powerful muscles to vigorously flap the two valves that make their shell. Other sea creatures use muscles attached to their shells to quickly bury themselves in the sediment.

A clam on the beach buries itself in the sand then releases water and waste.

Variety is the spice of marine life

The process of making a shell is known as biomineralization. How marine animals build their shells can vary greatly depending on the species, but all of these animals have special tissues to make their shells, just as humans have special tissues to grow and strengthen our bones.

Most marine animals form their shells from calcium carbonate, which is a tough mineral also found in limestone. Some sponges and microorganisms use another compound, silica. There is also a group of brachiopods that build shells using calcium phosphate, which we use to build our bones, too.

More than 50,000 mollusk species live today on our planet, and most of them make shells. But each species makes a different shell. This accounts for the huge variety of shapes and sizes in the seashells you find on the beach.

With over 50,000 species of mollusk, seashells come in all different shapes and sizes.
Amanda Bemis/Invertebrate Zoology Collections, Florida Museum of Natural History

Just as with bones, shells can last for a very long time. The shells of dead animals are moved around by currents and waves. Many eventually wash up on shorelines. Other shells get buried beneath the seafloor. With pressure and time, the buried seafloor sediment becomes a rock, and shells turn into fossils. In fact, seashells are among the most common types of fossils large enough to see with the naked eye.

When an experienced hunter finds a bone in the forest, they know right away whether it came from a deer, rabbit or wild boar. Similarly, when a seashell expert finds a shell, they can tell you what sea creature made it.

What shells can teach us

Besides the sheer number of sea creatures, another reason shells are so plentiful is that they last for a very long time. In our research, we use a process called carbon dating to figure out how old a shell is. Mollusks and many other animals use calcium, carbon and oxygen to build their shells. Carbon occurs naturally in three forms – called isotopes – and one of them, known as radiocarbon, is unstable. As a shell ages, its radiocarbon decays at a constant rate. Older shells have less radiocarbon, and we scientists can estimate their age based on that fact.
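For readers who want to see the arithmetic, here is a minimal sketch in Python of that kind of estimate. It is our own illustration, assuming the standard exponential-decay formula and the conventional carbon-14 half-life of about 5,730 years; it is not the researchers’ actual dating procedure, which involves careful laboratory calibration.

    import math

    HALF_LIFE_YEARS = 5730  # conventional half-life of carbon-14

    def estimate_age(remaining_fraction: float) -> float:
        """Estimate age in years from the fraction of the original
        radiocarbon still present: N(t) = N0 * 0.5 ** (t / half_life)."""
        # Solving for t gives: t = half_life * log2(N0 / N)
        return HALF_LIFE_YEARS * math.log2(1 / remaining_fraction)

    # A shell retaining about 88% of its radiocarbon is roughly
    # a thousand years old:
    print(round(estimate_age(0.88)))  # prints 1057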

This process has allowed us and other researchers to date thousands of shells collected from modern beaches and sea bottoms all around the globe. We discovered that many of those shells are hundreds or thousands of years old.

These shells are not just beautiful to look at – they’re also very useful. Like little time machines, these shells carry within them a wealth of information about the past, including details about the habitats in which they lived. Scientists like us can often tell from a shell whether the animal that created it was a predator, a plant eater or even a parasite.

By studying the chemical makeup of the shell, scientists can learn about past climates and environments. We can often even discern how the owner of a shell died and the hazards it faced during its life.

So the next time you admire shells on your favorite beach, inspect them for clues about their past lives. Does the shell contain a round hole? That reveals that the animal was killed by a drilling predator. Does it have a repair scar? It may have survived an attack by a crab. Does the shell belong to an animal that lived in a seagrass meadow that is no longer there?

Each shell is a little diary, and if you know how to read it, it can tell you exciting stories of animals and habitats from the past.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Michal Kowalewski receives funding from federal agencies (National Science Foundation) and private organizations such as Felburn Foundation and University of Florida Foundation.

Thomas K. Frazer receives funding from the National Oceanographic and Atmospheric Administration, Florida Fish and Wildlife Conservation Commission, Florida Department of Environmental Protection, Florida Department of Transportation, and South Florida Water Management District and The Ocean Conservancy.

ref. Where do seashells come from? – https://theconversation.com/where-do-seashells-come-from-270153

Artificial metacognition: Giving an AI the ability to ‘think’ about its ‘thinking’

Source: The Conversation – USA – By Ricky J. Sethi, Professor of Computer Science, Fitchburg State University; Worcester Polytechnic Institute

AIs could use some self-reflection. davincidig/iStock via Getty Images

Have you ever had the experience of rereading a sentence multiple times only to realize you still don’t understand it? As taught to scores of incoming college freshmen, when you realize you’re spinning your wheels, it’s time to change your approach.

This process, becoming aware of something not working and then changing what you’re doing, is the essence of metacognition, or thinking about thinking.

It’s your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. In fact, metacognition is fundamental to human intelligence and, until recently, has been understudied in artificial intelligence systems.

My colleagues Charles Courchaine, Hefei Qiu and Joshua Iacoboni and I are working to change that. We’ve developed a mathematical framework designed to allow generative AI systems, specifically large language models like ChatGPT or Claude, to monitor and regulate their own internal “cognitive” processes. In some sense, you can think of it as giving generative AI an inner monologue, a way to assess its own confidence, detect confusion and decide when to think harder about a problem.

Why machines need self-awareness

Today’s generative AI systems are remarkably capable but fundamentally unaware. They generate responses without genuinely knowing how confident or confused their response might be, whether it contains conflicting information, or whether a problem deserves extra attention. This limitation becomes critical when generative AI’s inability to recognize its own uncertainty can have serious consequences, particularly in high-stakes applications such as medical diagnosis, financial advice and autonomous vehicle decision-making.

For example, consider a medical generative AI system analyzing symptoms. It might confidently suggest a diagnosis without any mechanism to recognize situations where it might be more appropriate to pause and reflect, like “These symptoms contradict each other” or “This is unusual, I should think more carefully.”

Developing such a capacity would require metacognition, which involves both the ability to monitor one’s own reasoning through self-awareness and to control the response through self-regulation.

Inspired by neurobiology, our framework aims to give generative AI a semblance of these capabilities by using what we call a metacognitive state vector, which is essentially a quantified measure of the generative AI’s internal “cognitive” state across five dimensions.

5 dimensions of machine self-awareness

One way to think about these five dimensions is to imagine giving a generative AI system five different sensors for its own thinking.

  • Emotional awareness, to help it track emotionally charged content, which might be important for preventing harmful outputs.
  • Correctness evaluation, which measures how confident the large language model is about the validity of its response.
  • Experience matching, where it checks whether the situation resembles something it has previously encountered.
  • Conflict detection, so it can identify contradictory information requiring resolution.
  • Problem importance, to help it assess stakes and urgency to prioritize resources.

We quantify each of these concepts within an overall mathematical framework to create the metacognitive state vector and use it to control ensembles of large language models. In essence, the metacognitive state vector converts a large language model’s qualitative self-assessments into quantitative signals that it can use to control its responses.

For example, when a large language model’s confidence in a response drops below a certain threshold, or the conflicts in the response exceed some acceptable levels, it might shift from fast, intuitive processing to slow, deliberative reasoning. This is analogous to what psychologists call System 1 and System 2 thinking in humans.
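To make this concrete, here is a minimal sketch of the idea in Python. The field names, value ranges and thresholds below are our illustrative assumptions for this article, not the framework’s actual specification:

    from dataclasses import dataclass

    @dataclass
    class MetacognitiveState:
        """Illustrative five-dimensional state vector. Field names and
        the 0.0-1.0 ranges are assumptions for this sketch."""
        emotional_charge: float   # emotionally loaded content detected
        correctness: float        # confidence that the response is valid
        experience_match: float   # similarity to previously seen situations
        conflict: float           # degree of contradictory information
        importance: float         # stakes and urgency of the problem

    def choose_mode(state: MetacognitiveState,
                    confidence_floor: float = 0.7,
                    conflict_ceiling: float = 0.3) -> str:
        """Switch from fast 'System 1' generation to slower 'System 2'
        deliberation when confidence is low or conflict is high
        (threshold values here are arbitrary illustrations)."""
        if state.correctness < confidence_floor or state.conflict > conflict_ceiling:
            return "system2"  # e.g. assign critic/expert roles across the ensemble
        return "system1"      # answer directly with the fast path

    # Example: low confidence plus detected conflict triggers deliberation.
    state = MetacognitiveState(0.1, 0.45, 0.6, 0.5, 0.8)
    print(choose_mode(state))  # prints system2

The design choice this sketch illustrates is the core of the framework: qualitative self-assessments become numbers, and simple comparisons on those numbers decide how much computational effort a response deserves.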

This conceptual diagram shows the basic idea for giving a set of large language models an awareness of the state of its processing.
Ricky J. Sethi

Conducting an orchestra

Imagine a large language model ensemble as an orchestra where each musician – an individual large language model – comes in at certain times based on the cues received from the conductor. The metacognitive state vector acts as the conductor’s awareness, constantly monitoring whether the orchestra is in harmony, whether someone is out of tune, or whether a particularly difficult passage requires extra attention.

When performing a familiar, well-rehearsed piece, like a simple folk melody, the orchestra easily plays in quick, efficient unison with minimal coordination needed. This is the System 1 mode. Each musician knows their part, the harmonies are straightforward, and the ensemble operates almost automatically.

But when the orchestra encounters a complex jazz composition with conflicting time signatures, dissonant harmonies or sections requiring improvisation, the musicians need greater coordination. The conductor directs the musicians to shift roles: Some become section leaders, others provide rhythmic anchoring, and soloists emerge for specific passages.

This is the kind of system we’re hoping to create in a computational context by implementing our framework, orchestrating ensembles of large language models. The metacognitive state vector informs a control system that acts as the conductor, telling it to switch modes to System 2. It can then tell each large language model to assume different roles – for example, critic or expert – and coordinate their complex interactions based on the metacognitive assessment of the situation.

Metacognition is like an orchestra conductor monitoring and directing an ensemble of musicians.
AP Photo/Vahid Salemi

Impact and transparency

The implications extend far beyond making generative AI slightly smarter. In health care, a metacognitive generative AI system could recognize when symptoms don’t match typical patterns and escalate the problem to human experts rather than risking misdiagnosis. In education, it could adapt teaching strategies when it detects student confusion. In content moderation, it could identify nuanced situations requiring human judgment rather than applying rigid rules.

Perhaps most importantly, our framework makes generative AI decision-making more transparent. Instead of a black box that simply produces answers, we get systems that can explain their confidence levels, identify their uncertainties, and show why they chose particular reasoning strategies.

This interpretability and explainability is crucial for building trust in AI systems, especially in regulated industries or safety-critical applications.

The road ahead

Our framework does not give machines consciousness or true self-awareness in the human sense. Instead, our hope is to provide a computational architecture for allocating resources and improving responses that also serves as a first step toward more sophisticated approaches for full artificial metacognition.

The next phase in our work involves validating the framework with extensive testing, measuring how metacognitive monitoring improves performance across diverse tasks, and extending the framework to start reasoning about reasoning, or metareasoning. We’re particularly interested in scenarios where recognizing uncertainty is crucial, such as in medical diagnoses, legal reasoning and generating scientific hypotheses.

Our ultimate vision is generative AI systems that don’t just process information but understand their cognitive limitations and strengths. This means systems that know when to be confident and when to be cautious, when to think fast and when to slow down, and when they’re qualified to answer and when they should defer to others.

The Conversation

Ricky J. Sethi has received funding from the National Science Foundation, Google and Amazon.

ref. Artificial metacognition: Giving an AI the ability to ‘think’ about its ‘thinking’ – https://theconversation.com/artificial-metacognition-giving-an-ai-the-ability-to-think-about-its-thinking-270026

How the polar vortex and warm ocean intensified a major US winter storm

Source: The Conversation – USA – By Mathew Barlow, Professor of Climate Science, UMass Lowell

Boston and much of the U.S. faced a cold winter blast in January 2026. Craig F. Walker/The Boston Globe via Getty Images

A severe winter storm that brought crippling freezing rain, sleet and snow to a large part of the U.S. in late January 2026 left a mess in states from New Mexico to New England. Hundreds of thousands of people lost power across the South as ice pulled down tree branches and power lines, more than a foot of snow fell in parts of the Midwest and Northeast, and many states faced bitter cold that was expected to linger for days.

The sudden blast may have come as a shock to many Americans after a mostly mild start to winter, but that warmth may have partly contributed to the ferocity of the storm.

As atmospheric and climate scientists, we conduct research that aims to improve understanding of extreme weather, including what makes it more or less likely to occur and how climate change might or might not play a role.

To understand what Americans are experiencing with this winter blast, we need to look more than 20 miles above the surface of Earth, to the stratospheric polar vortex.

A forecast for Jan. 26, 2026, shows the freezing line in white reaching far into Texas. The light band with arrows indicates the jet stream, and the dark band indicates the stratospheric polar vortex. The jet stream is shown at about 3.5 miles above the surface, a typical height for tracking storm systems. The polar vortex is approximately 20 miles above the surface.
Mathew Barlow, CC BY

What creates a severe winter storm like this?

Multiple weather factors have to come together to produce such a large and severe storm.

Winter storms typically develop where there are sharp temperature contrasts near the surface and a southward dip in the jet stream, the narrow band of fast-moving air that steers weather systems. If there is a substantial source of moisture, the storms can produce heavy rain or snow.

In late January, a strong Arctic air mass from the north was creating the temperature contrast with warmer air from the south. Multiple disturbances within the jet stream were acting together to create favorable conditions for precipitation, and the storm system was able to pull moisture from the very warm Gulf of Mexico.

The National Weather Service issued severe storm warnings (pink) on Jan. 24, 2026, for a large swath of the U.S. that could see sleet and heavy snow over the following days, along with ice storm warnings (dark purple) in several states and extreme cold warnings (dark blue).
National Weather Service

Where does the polar vortex come in?

The fastest winds of the jet stream occur just below the top of the troposphere, which is the lowest level of the atmosphere and ends about seven miles above Earth’s surface. Weather systems are capped at the top of the troposphere, because the atmosphere above it becomes very stable.

The stratosphere is the next layer up, from about seven miles to about 30 miles. While the stratosphere extends high above weather systems, it can still interact with them through atmospheric waves that move up and down in the atmosphere. These waves are similar to the waves in the jet stream that cause it to dip southward, but they move vertically instead of horizontally.

A chart shows how temperatures in the lower layers of the atmosphere change between the troposphere and stratosphere. Miles are on the right, kilometers on the left.
NOAA

You’ve probably heard the term “polar vortex” used when an area of cold Arctic air moves far enough southward to influence the United States. That term describes air circulating around the pole, but it can refer to two different circulations, one in the troposphere and one in the stratosphere.

The Northern Hemisphere stratospheric polar vortex is a belt of fast-moving air circulating around the North Pole. It is like a second jet stream, high above the one you may be familiar with from weather graphics, and usually less wavy and closer to the pole.

Sometimes the stratospheric polar vortex can stretch southward over the United States. When that happens, it creates ideal conditions for the up-and-down movement of waves that connect the stratosphere with severe winter weather at the surface.

A stretched stratospheric polar vortex reflects upward waves back down, left, which affects the jet stream and surface weather, right.
Mathew Barlow and Judah Cohen, CC BY

The forecast for the January storm showed a close overlap between the southward stretch of the stratospheric polar vortex and the jet stream over the U.S., indicating perfect conditions for cold and snow.

The biggest swings in the jet stream are associated with the most wave energy. Under the right conditions, that energy can reflect off the stretched polar vortex and travel back down into the troposphere, exaggerating the north-south swings of the jet stream across North America and making severe winter weather more likely.

This is what was happening in late January 2026 in the central and eastern U.S.

If the climate is warming, why are we still getting severe winter storms?

Earth is unequivocally warming as human activities release greenhouse gases that trap heat in the atmosphere, and snow amounts are decreasing overall. But that does not mean severe winter weather will never happen again.

Some research suggests that even in a warming environment, cold events, while occurring less frequently, may still remain relatively severe in some locations.

One factor may be increasing disruptions to the stratospheric polar vortex, which appear to be linked to the rapid warming of the Arctic with climate change.

The polar vortex is a strong band of winds in the stratosphere, normally ringing the North Pole. When it weakens, it can split. The polar jet stream can mirror this upheaval, becoming weaker or wavy. At the surface, cold air is pushed southward in some locations.
NOAA

Additionally, a warmer ocean leads to more evaporation, and because a warmer atmosphere can hold more moisture, that means more moisture is available for storms. The process of moisture condensing into rain or snow produces energy for storms as well. However, warming can also reduce the strength of storms by reducing temperature contrasts.

The opposing effects make it complicated to assess the potential change to average storm strength. However, intense events do not necessarily change in the same way as average events. On balance, it appears that the most intense winter storms may be becoming more intense.

A warmer environment also means that precipitation that would have fallen as snow in previous winters is now more likely to fall as sleet and freezing rain.

There are still many questions

Scientists are constantly improving the ability to predict and respond to these severe weather events, but there are many questions still to answer.

Much of the data and research in the field relies on a foundation of work by federal employees and government labs such as the National Center for Atmospheric Research, known as NCAR, which has been targeted by the Trump administration for funding cuts. These scientists help develop the crucial models, measuring instruments and data that scientists and forecasters everywhere depend on.

This article, originally published Jan. 24, 2026, has been updated with details from the weekend storm.

The Conversation

Mathew Barlow has received federal funding for research on extreme events and also conducts legal consulting related to climate change.

Judah Cohen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How the polar vortex and warm ocean intensified a major US winter storm – https://theconversation.com/how-the-polar-vortex-and-warm-ocean-intensified-a-major-us-winter-storm-274243

The loneliness of student dropouts: a few months into their studies and already facing failure

Source: The Conversation – France (in French) – By Sandra Gaviria, Professor of Sociology, Université Le Havre Normandie

Starting higher education is not just a step toward independence; it can also be a period of vulnerability, especially when young people discover that the programme choices they made in terminale (the final year of French secondary school) do not match their real expectations. Changing direction remains an ordeal, for them and for their families alike. Here is why.


Three years after first enrolling in a bachelor's degree (licence) in 2020-2021, only 55% of baccalaureate holders were still enrolled in one. In other words, nearly one in two young people experiences, at some point in their studies, an interruption, a change of direction or outright failure. During this period, both the feelings involved and the solutions considered tend to be individualised: each student faces their doubts, their worries and the search for possible ways to bounce back alone.

After a few months, disillusionment sets in, and loneliness, whether in student housing or on campus, can become hard to bear. Every year, at the end of the first semester, the "dropouts" resurface: students who discover that the programme they obtained through Parcoursup does not match their expectations, or who do not feel at home in student life. Grades, exams and the first progress reviews force them to face the facts.

Institutions are putting new initiatives in place to try to address the problem. Several studies have shown that the phenomenon cannot be reduced to simple academic failure, as other factors are at play. It is in fact often a multicausal phenomenon, in which several elements combine during those first months of study.

The complex process of changing course

For some of these young women and men, the long and often complex process of reorientation then begins. Others enter an open-ended period of waiting, with no real plan, in the vague hope of finding their path, their place or simply a direction. This comes with the feeling of "being useless", the impression of having no place in the world. Marie, 20, describes how she felt after three months at Sciences Po:

"I had plenty of time to realise that the courses didn't interest me as much as I had thought, and, on top of that, there was no student life or student societies to make up for it. In January, I decided to stop my courses at Sciences Po, and at that point it wasn't much fun. It was, I think, the start of a depression, where I felt that nothing made sense, neither my studies nor society."

For some young people, these doubts come on top of already fragile personal circumstances, marked by family problems, material hardship, difficult personal histories or a pre-existing malaise that prevents them from making progress in their studies. It is also a time when fragilities from the past can resurface. Carine, who enrolled in language sciences and is now a nurse, explains:

"I had applied to nursing schools straight after the baccalaureate, except that I was on a waiting list, and I couldn't see myself doing anything else... I had a university place, except that I'm dyslexic and dysorthographic, so I knew I would drop out right away... Well, it depends on the place, but I knew there would have been too many people and that it wouldn't have worked for the way I learn."

The difficulty of identifying struggling students

While the dropout phenomenon is visible and counted in the statistics, solutions remain limited for these young people who leave the education system after only a few months. Once they abandon their higher education, their trajectories become hard to trace. Some sign up for a civic service programme (service civique); others end up in the "NEET" category (not in employment, education or training).

This scattering across different statistical categories makes any real tracking of their trajectories impossible. Identifying these young people remains difficult, particularly when they do not turn to any public support scheme. Some manage to change course, but then have to "catch up" on the first semester's work during the second. The support schemes are often complicated to access, barely visible, or come too late to address this period of existential emptiness and feelings of failure.

To the uncertainty experienced by students is added another, quieter one affecting their parents. After a gruelling final year of school, marked by the pressure of the baccalaureate (when their children felt they were "playing for their lives" in every exam), the choices imposed by Parcoursup and the fear of failure, parents now find themselves facing children in distress, who doubt their lives and themselves.

Then begins a period in which everyone tries to work out whether to encourage them to stay in their studies, to support a change of programme, or to fund (for those who can) a year of waiting for a new plan to emerge.

Differences in social resources

Surveys highlight the diversity of these young people's trajectories and underline the role of social background in the probability of dropping out or, conversely, of continuing in higher education. Family resources, personal histories and the ability to imagine a future shape profoundly different paths.

The search for solutions is individualised. For those with sufficient economic and cultural resources, alternatives come into view: a private preparatory course, a school outside Parcoursup. Finding the right option becomes a rather solitary journey, in which social connections and financial resources are mobilised to begin the hunt for information. These young people, supported by their families, put together varied recipes involving multiple professionals, such as psychologists and careers coaches.

For the others, from less well-off backgrounds, the situation is harder. These young people risk finding themselves without any clear solution, oscillating between precarious odd jobs (while they work out what they want to do), long periods of waiting and, for a minority, support from public services.

Support schemes that are hard to navigate

We observe how hard it is for all parents to cope when their children cannot see their plans through. Some lack the skills or the means to support them; others have no ideas or no knowledge of the available schemes. The young people, for their part, also share common forms of suffering, such as feeling "off the conveyor belt", at a standstill in the collective race and "behind" everyone else.

Analysing dropout after these first months of higher education highlights the lack of clarity in the support schemes for young people who have not found their path or could not get into their chosen programmes. For some of them, this period will remain a mere adjustment. For others, it marks the first rupture in a longer and more erratic trajectory. It sometimes leads to a worsening of their dependencies (alcohol, drugs, video games) and to long-term uncertainty.

Starting higher education is not only the beginning of independence but also an underestimated time of vulnerability. The feeling of academic failure crystallises all of these young people's insecurities and leaves them with the impression of being alone in the world. They feel that others are moving forward while they remain at a standstill, unable to plan ahead, beset by questions: what do I like? What can I do? How can I succeed?

The Conversation

Sandra Gaviria does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than her research institution.

ref. La solitude des étudiants décrocheurs : quelques mois de formation et déjà face à l’échec – https://theconversation.com/la-solitude-des-etudiants-decrocheurs-quelques-mois-de-formation-et-deja-face-a-lechec-271871