China’s interest in the next Dalai Lama is also about control of Tibet’s water supply

Source: The Conversation – UK – By Tom Harper, Lecturer in International Relations, University of East London

As the 14th Dalai Lama celebrates his 90th birthday with thousands of Tibetan Buddhists, there’s already tension over how the next spiritual leader will be selected. Controversially, the Chinese government has suggested it wants more power over who is chosen.

Traditionally, Tibetan leaders and aides seek out a young boy who is seen as the chosen reincarnation of the Dalai Lama. It is possible that once they have done so this time, Beijing will try to appoint a rival figure of its own.

However, the current Dalai Lama, who lives in exile in India, insists that the process of succession will be led by the Swiss-based Gaden Phodrang Trust, which manages his affairs. He said no one else had authority “to interfere in this matter” and that statement is being seen as a strong signal to China.


Throughout the 20th century, Tibetans struggled to create an independent state, as their homeland was fought over by Russia, the UK and China. In 1951, Tibetan leaders signed a treaty with China allowing a Chinese military presence on their land.

China established the Tibet Autonomous Region in 1965. In name, Tibet is an autonomous region within China, but in effect it is tightly controlled. Tibet has a government in exile, based in India, that still wants Tibet to become an independent state.

This is a continuing source of tension between the two countries. India also claims part of Tibet as its own territory.

Beijing sees having more power over the selection of the Dalai Lama as an opportunity to stamp more authority on Tibet. Tibet’s strategic position and its resources are extremely valuable to China, and play a part in Beijing’s wider plans for regional dominance, and in its aim of pushing back against India, its powerful rival in south Asia.

The Dalai Lama celebrates his 90th birthday as many Tibetans living in China fear talking about independence.

Tibet provides China with a naturally defensive border with the rest of southern Asia, its mountainous terrain providing a buffer against India. The brief Sino-Indian war of 1962, when the two countries battled for control of the region, still has implications for India and China today, as they continue to dispute border lands.

As with many powerful nations, China has always been concerned about threats, or rival power bases, within its neighbourhood. This is similar to how the US has used the Monroe Doctrine to ensure its dominance over Latin America, and how Russia seeks to maintain its influence over former Soviet states.

Beijing views western criticism of its control of Tibet as interference in its sphere of influence.




Read more:
India and Pakistan tension escalates with suspension of historic water treaty


Another source of contention is that Beijing traditionally views boundaries such as the McMahon line defining the China-India border as lacking legitimacy, seeing it as a border drawn up when China was at its weakest. That era, known in China as the “century of humiliation”, was characterised by a series of unequal treaties which saw the loss of territory to stronger European powers.

This continues to be a source of political tension in China’s border regions, including Tibet. It is a controversial part of China’s historical memory and continues to influence its ongoing relationship with the west.

Demand for natural resources

Tibet’s importance to Beijing also comes from its vast water resources. Access to more water is seen as increasingly important for China’s wider push towards self-sufficiency which has become imperative in the face of climate change. This also provides China with a significant geopolitical tool.

For instance, the Mekong River rises in Tibet and flows through China, along the borders of Myanmar and Laos, and onward into Thailand and Cambodia. It is the third longest river in Asia, and is crucial for many of the economies of south-east Asia. It is estimated to sustain 60 million people.

China’s attempts to control water supplies, particularly through the building of huge dams in Tibet, have added to regional tensions. Around 50% of the flow to the Mekong was cut off for part of 2021, after a Chinese mega dam was built. This caused a great deal of resentment among other countries that depend on the water.

Moves by other nations to control access to regional water supplies in recent years show how water is now becoming a negotiating tool. India attempted to cut off Pakistan’s water supply in 2025 as part of the conflict between the two. Control of Tibet allows China to pursue a similar strategy, which grants Beijing leverage in its dealings with New Delhi, and other governments.

A map of Tibet and surrounding countries.

Shutterstock.

Another natural resource is also a vital part of China’s planning. Tibet’s significant lithium deposits are crucial for Chinese supply chains, particularly for their use in the electric vehicle industry. Beijing is attempting to reduce its reliance on western firms and supplies, in the face of the present trade tensions between the US and China, and Donald Trump’s tariffs on Chinese goods.

Tibet’s value to China is a reflection of wider changes in a world where water plays an increasingly important role in geopolitics. Given Tibet’s valuable natural resources, China’s desire to control the region is not likely to decrease.

The Conversation

Tom Harper does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. China’s interest in the next Dalai Lama is also about control of Tibet’s water supply – https://theconversation.com/chinas-interest-in-the-next-dalai-lama-is-also-about-control-of-tibets-water-supply-255843

Lioness Lucy Bronze uses ‘cycle syncing’ to get an edge on her competition — here’s how the practice works

Source: The Conversation – UK – By Mollie O’Hanlon, PhD Candidate, Exercise Physiology, Nottingham Trent University

Bronze has said ‘cycle syncing’ has been important for her performance. Jose Breton- Pics Action/ Shutterstock

England footballer Lucy Bronze recently said in an interview that “cycle syncing” gives her an edge on the pitch. This practice involves aligning your training schedule to the different phases of your menstrual cycle.

Cycle syncing has become increasingly popular in recent years – especially among athletes who are looking to get an edge over the competition. Even Chelsea women’s football team have put this new approach to use, tailoring training schedules according to each player’s menstrual cycle.

For the average person, tailoring your workouts to your menstrual cycle is probably not going to have much of an impact. But for a professional athlete such as Bronze, cycle syncing could be a gamechanging strategy in shaping her elite performance.


The menstrual cycle begins and ends with menstruation (a period). While the length of the menstrual cycle varies for each person, it’s usually around 28 days.

The menstrual cycle is underpinned by fluctuations in levels of the female sex hormones oestrogen and progesterone. This is why the cycle is divided into three key phases: early follicular, late follicular and the luteal phase.

The early follicular phase usually lasts around seven days and begins with the start of your period. This is when hormone levels are at their lowest.

The late follicular phase follows on from the first seven days, and is where ovulation happens – usually around day 14 of the cycle, though this will depend on cycle length. Ovulation is when the egg is released and you’re at your most fertile.

After that comes the luteal phase (lasting around 12-14 days), when progesterone peaks to prepare the body for pregnancy. If pregnancy doesn’t happen, hormones drop and the cycle begins again.

It’s no secret that mood and energy levels can shift – sometimes significantly – throughout the menstrual cycle. This is why some female athletes have begun using cycle syncing. By tailoring training schedules to match hormonal fluctuations, women are gaining a deeper understanding of their bodies and the symptoms they experience throughout each phase – empowering them to train smarter, not harder.

Bronze said the strategy has transformed her performance, saying that during certain phases of her cycle she feels “physically capable of more and can train harder”.

Despite these testimonials, scientists are yet to reach a definitive conclusion on how the menstrual cycle affects athletic performance.

Lucy Bronze smiles during a match.
Bronze is just one of many female athletes putting ‘cycle syncing’ to the test.
Christian Bertrand/ Shutterstock

So far, there’s some suggestion that there may be a slight dip in performance (specifically in strength and endurance) during the early follicular phase. However, these effects are minimal – and highly dependent on the person. It’s also not entirely clear what mechanisms underpin the small performance dips that some women experience.

Other research suggests that certain aspects of the neuromuscular system (the network of nerves and muscles that make movement possible) – specifically how our muscles generate force – are altered during the luteal phase. Research has also found that certain muscles may fatigue less quickly during this phase.

This implies that during the luteal phase, there may be changes in signals from the brain and spinal cord to the skeletal muscles. However, no changes in neuromuscular function have been observed.

Part of the reason it’s so difficult for researchers to gather enough evidence to draw firm conclusions on the menstrual cycle’s potential effects on athletic performance is because of the huge variability in menstrual cycle characteristics, which makes it difficult to study. Phase length, hormone levels and symptoms can differ widely between women – and even from cycle to cycle.

The small effects seen in these studies will make little difference to how most of us train or exercise. But for an elite athlete, these minuscule differences could have an effect on their training and competition, which may be why so many are willing to give the practice a try.

So while it isn’t entirely clear how much influence certain menstrual cycle phases have on performance, how you feel during different phases could certainly affect your ability to train at your best.

Around 77% of female athletes experience negative symptoms in the days leading up to and during menstruation. Fatigue, low motivation and even digestive issues such as bloating and nausea could all affect your ability to train at your best.

Trying cycle syncing

If you’re still interested in giving cycle syncing a try to see if it has any effect for you, the best place to start is by tracking your menstrual cycle. This will help you understand your body, how you feel in each phase of your cycle and what effect certain symptoms have on your training.

It’s recommended you track your cycle for at least three months before making any changes to your training to establish a baseline and spot trends over time.

For example, if you notice you often feel fatigued when training in your luteal phase, it may help to focus on ensuring you fuel well with carbohydrates before and during workouts. Or on days where you feel more energetic and motivated to train, you might be able to push yourself a bit harder in your workouts.

Whether you’re playing for England in the Euros or simply working towards your own fitness goals, understanding your cycle can help you train smarter, manage your symptoms better and stay consistent with your training.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Lioness Lucy Bronze uses ‘cycle syncing’ to get an edge on her competition — here’s how the practise works – https://theconversation.com/lioness-lucy-bronze-uses-cycle-syncing-to-get-an-edge-on-her-competition-heres-how-the-practise-works-260153

How M&S responds to its cyber-attack could have a serious impact on its future – and its customers

Source: The Conversation – UK – By Aybars Tuncdogan, Reader in Digital Innovation and Information Security, King’s College London

raymond orton/Shutterstock

The cyber-attack on Marks & Spencer will lead to an estimated £300 million hit to the company’s profits this year. It now aims to have online shopping at the store back to normal by August, more than three months after IT systems were compromised.

Fans of M&S clothing and food will be relieved after all of the uncertainty. But that level of uncertainty, as well as the huge cost, is surely a sign that big retailers, which millions of people rely on, need to change how they think about – and invest in – cybersecurity.

It has to be an absolute priority. After all, few marketing strategies or HR initiatives can save a company £300 million in just six weeks. But perhaps a more sophisticated cybersecurity department could have done just that.

To be fair, M&S faced a relatively rare, high-impact ordeal. Most cyber-attacks of this nature don’t affect customers so directly, and much of the recovery typically happens behind the scenes.

But M&S shoppers saw online orders collapse, contactless payments fail, and refunds, gift cards and loyalty points stop working. Disruption to stock management and warehousing led to empty shelves and food waste.

On June 27, M&S issued a public apology and a £5 digital gift card to affected customers. But research suggests that the most important element of keeping customers onside is the quality of the recovery process, and whether normal service is eventually resumed.

To get back to normal service, it is possible that a ransom was paid to the cyber attackers, but M&S has refused to confirm or deny this. (One survey found that many organisations hit by cyber attacks agreed to pay a ransom – and then suffered a subsequent breach, often from the very same culprits.)

But even once normal service returns, problems can persist. When hackers steal customer data, as they did with M&S, research suggests that this information is often reused by criminals in identity theft and phishing. A study even found that victims of data breaches are more likely to have mortgage applications denied.

From what we know about the breach at M&S, it seems that the cyber-attackers simply used a phishing technique to get the support desk of a third-party contractor to reset the password of an admin-level account. That said, although in this case the main vulnerability was human, the lesson to be learnt here is that sometimes just one vulnerability can shake the whole system to its core.

This is why business owners need to think of cybersecurity not just as a tedious and inconvenient IT issue, but as a core function of the business. Otherwise, as the M&S case illustrates, it is simply not possible for the rest of the corporate structure to operate.

Testing times

So cybersecurity targets must be incorporated into every department to ensure collective defence. And organisations also need to stress-test the different aspects of their systems.

That could mean checking human responses, but it should also include technology (like a vulnerability in the web server), physical barriers (a poorly secured server room door) and HR procedures (failure to revoke ex-employee access).

Laptop in use with graphic of padlock and security images.
Lock down your laptop.
Thapana_Studio/Shutterstock

These lines of defence have to be stress-tested regularly and from multiple angles, rather than being considered an annual checkbox activity for compliance.

Scenario-based tests – essentially a cyber fire-drill — such as internal threat simulations and response exercises, can provide useful insights into an organisation’s readiness to detect, respond to and recover from cyber-attacks.

It’s also important that organisations learn to communicate clearly once a breach occurs. Research into responses to data breaches suggests that any backlash is sharper when the company seems to be trying to hide the breach, which may later be publicised by the criminals instead.

Consumers should also remember that they are not powerless. We may not be able to prevent a data breach, but all of us can help to stop attackers from infiltrating our online worlds by something as simple as not re-using the same passwords.

By remaining sceptical, we can prevent attackers from using the information they stole to phish us later. And by thinking carefully about what personal data we share with companies, we can reduce the impact of future breaches.

The Conversation

Aybars Tuncdogan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How M&S responds to its cyber-attack could have a serious impact on its future – and its customers – https://theconversation.com/how-mands-responds-to-its-cyber-attack-could-have-a-serious-impact-on-its-future-and-its-customers-260429

From Kabul to the catwalk – the surprising global history behind fashion’s fur revival

Source: The Conversation – UK – By Magnus Marsden, Professor of Social Anthropology, University of Sussex

The winter season of 2024-25 marked a resurgence of fur clothing – both faux and real – in fashion across Europe and North America. Shearling jackets and embroidered “Penny Lane coats” featured widely in reports on the latest fashion trends. Vintage fur coats are also back in vogue.

To many, the resurgence came as a surprise. The anti-fur movement, especially influential in the 1980s, continues to shape perceptions of fur. In the 2010s, cities including New York and Los Angeles banned the use of fur to make clothes. The UK meanwhile banned the farming of fur-bearing animals, and, alongside the EU, has committed itself to legislating against all fur imports.

Just last year the town of Worthing, in England, debated whether or not its mayor should wear ceremonial robes trimmed with fur. Despite these trends, many young people have embraced the renewed trend of wearing real fur.

Some clothes made from animal skins became popular during the counter-cultural movement of the 1960s, but historically, fur has mostly marked status, wealth and luxury. Today, many critics interpret fur’s return to fashion as a cultural expression of rightwing politics.


Fur is prominent in the “boom boom” fashion trend, which emphasises excess and “male-coded values”. It has been described by fashion journalists as “over-the-top and unashamed about its own greed and lack of wokeness”.

Fur clothing is a reminder of the moral tensions between need and desire, and luxury and excess. In addition to being inter-generational, these debates are also about gender. For much of the 20th century, fur coats symbolised femininity, erotic power and class position in the west. But by the 1980s, advertising campaigns depicted women who wore fur as either stupid and unthinking or thinking and unspeakably cruel, leading many to jettison it.

Anti-fur protests were held across the US in 1994.

Fur’s return to fashion has injected old debates with new significance. Some young people are willing to wear faux fur because it does not involve killing animals. But others argue that, because it is made from synthetic material, faux fur is actually more environmentally damaging and prefer to wear the real thing. They claim that wearing vintage fur is a form of “sustainable consumption” but are challenged by those who argue that this fashion trend ultimately justifies killing animals to make clothes.

The boom boom trend is said to embody a contemporary expression of 1980s “conspicuous consumerism”, but in an era of economic austerity the adoption of fur by young people suggests the clothes they wear identify their desires rather than their financial reality.

A global history of fur

Today, as in the 1980s, the perspectives, interests and experiences of non-Europeans are often unheard in debates around fur. A decline in fur-bearing animal populations in North America and Siberia from the early 19th century led to a global expansion in fur farming.




Read more:
How central Asian Jews and Muslims worked together in London’s 20th-century fur and carpet trade


From the 1850s, for example, Central Asia supplied furs to Europe and North America. Local artisans cured the pelts of karakul lambs – a native breed – to yield a rich and glossy fur. In central and south Asia, men of high status wore karakul hats; in Europe and America, they were mostly used to make women’s coats.

After the Russian revolution of 1917, many nomadic and semi-nomadic pastoralists, who raised sheep and other animals, left central Asia and moved with their flocks to neighbouring Afghanistan. The trade in karakul fur grew in the country, and foreign currency reserves came to depend on lambskins sold at auctions in London and New York.

In the 1960s, sheepskin coats made in Afghanistan – known as “Afghans” – became popular in the west, being worn by stars including Brian Jones of the Rolling Stones. The 1969 British edition of Vogue featured an interview with an icon of “oriental chic”, the “beautiful, dashing, intelligent, adventurous” Afghan socialite, Safia Tarzi, who lived in Paris, and ran a boutique clothing shop in Kabul.

The Afghan coat enjoyed a resurgence in 2000, having been worn by the character Penny Lane (Kate Hudson) in the film Almost Famous.




Read more:
Friday essay: how ‘Afghan’ coats left Kabul for the fashion world and became a hippie must-have


In the 1980s, the anti-fur campaign contributed to a declining market for karakul. For decades, rumours of Central Asian shepherds extracting lambs from the wombs of sheep to ensure a steady yield of delicate pelts had circulated. Moral opposition to the practice was not confined to the west.

During my research on globally dispersed activists, intellectuals and merchants from Afghanistan, a man from Afghanistan, now based in London, told me that his father banned his family from wearing karakul hats because sheep and their lambs were treated cruelly.

In the 1990s, civil war destroyed much of the infrastructure of the karakul industry in Afghanistan, but a trickle of pelts reached auction houses located in Frankfurt, Copenhagen and Helsinki.

In the 2000s, international development organisations attempted to revive the trade, though sales never returned to anywhere near the levels of the 1970s. By the 2010s, families in northern Afghanistan struggling economically opted to send sons to travel illegally to Turkey to find work as shepherds for commercially oriented Turkish farmers.

Promotional videos from fashion houses occasionally touch on the Penny Lane coat’s ties to Afghanistan, but media coverage of fur fashions rarely addresses its historical connections to central Asia.

The Conversation

Magnus Marsden received funding from the Arts and Humanities Research Council including for the research upon which this article is based.

ref. From Kabul to the catwalk – the surprising global history behind fashion’s fur revival – https://theconversation.com/from-kabul-to-the-catwalk-the-surprising-global-history-behind-fashions-fur-revival-256382

What happens to our brain when we watch videos at faster-than-normal speeds?

Source: The Conversation – (in Spanish) – By Marcus Pearce, Reader in Cognitive Science, Queen Mary University of London

Pressmaster/Shutterstock

Many of us have picked up the habit of listening to podcasts, audiobooks and other online content at higher playback speeds. For young people, it may even be the norm. For example, a survey of students in California found that 89% changed the playback speed of online lectures, while numerous articles have appeared in the media on how widespread fast viewing has become.

It’s easy to think of the advantages of watching things faster. It lets you consume more content in the same amount of time, or go over the same content several times to get the most out of it.

This could be especially useful in an educational context, where it could free up time for consolidating knowledge, taking practice tests and so on. Watching videos quickly is also potentially a good way of making sure you stay attentive and interested for their full length, stopping your mind from wandering.

But what about the downsides? It turns out there is more than one.

When a person is exposed to spoken information, researchers distinguish three phases of memory: encoding the information, storing it and, later, retrieving it. In the encoding phase, the brain needs a certain amount of time to process and understand the stream of words it receives. The words must be picked out and their contextual meaning retrieved from memory in real time.

People usually speak at a rate of around 150 words per minute, although doubling the speed to 300 or even tripling it to 450 words per minute still falls within the range of what we can consider intelligible. The issue is rather the quality and longevity of the memories we form.

Incoming information is stored temporarily in a memory system called working memory. This allows pieces of information to be transformed, combined and manipulated into a form ready to be transferred to long-term memory. Because our working memory has limited capacity, if too much information arrives too quickly it can overflow. This leads to cognitive overload and the loss of information.

Fast viewing and information recall

A recent meta-analysis examined 24 studies on learning from lecture videos. The studies varied in design, but in general they involved playing a lecture video to one group at normal speed (1x) and playing the same lecture to another group at a higher speed (1.25x, 1.5x, 2x or 2.5x).

As in a randomised controlled trial used to test medical treatments, participants were randomly assigned to one of the two groups. Both groups then took an identical test after watching the video to assess their knowledge of the material. The tests involved recalling information, answering multiple-choice questions to assess recall, or both.

Playback buttons
Faster playback may not help with studying.
V.Studio

The meta-analysis showed that increasing the playback speed had increasingly negative effects on test performance. At speeds of up to 1.5x, the cost was very small. But from 2x onwards, the negative effect was moderate to large.

To put this in context, if the average score for a group of students was 75%, with a typical variation of 20 percentage points in either direction, increasing the playback speed to 1.5x would reduce each person’s average result by 2 percentage points. And increasing the speed to 2.5x would mean an average loss of 17 percentage points.
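
To make the arithmetic of that example explicit, here is a minimal sketch in Python. The standardised effect sizes it uses (roughly 0.1 and 0.85 standard deviations) are assumptions back-calculated from the 2-point and 17-point losses quoted above, not figures reported directly by the meta-analysis.

```python
# Back-of-envelope version of the worked example above.
# The effect sizes are assumed values inferred from the quoted point losses.
mean_score = 75.0  # group average score, in per cent
sd_points = 20.0   # typical variation, in percentage points

assumed_effect_sizes = {"1.5x": 0.1, "2.5x": 0.85}  # in standard-deviation units

for speed, d in assumed_effect_sizes.items():
    loss = d * sd_points  # convert a standardised effect into percentage points
    print(f"At {speed} playback the average drops by about {loss:.0f} points, "
          f"from {mean_score:.0f}% to {mean_score - loss:.0f}%")
```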

Age matters

Interestingly, one of the studies included in the meta-analysis also looked at older adults (aged 61 to 94) and found that they were more affected by watching content at faster speeds than younger adults (aged 18 to 36). This may reflect a weakening of memory capacity in otherwise healthy people, and suggests that older adults should watch content at normal speed, or even at slower playback speeds, to compensate.

However, we do not yet know whether the negative effects of fast playback can be reduced by doing it regularly. It could therefore be that younger adults simply have more experience of fast playback and are thus better able to cope with the increased cognitive load. By the same token, this means we do not know whether younger people can mitigate the negative effects on their ability to retain information by using faster playback more often.

Another unknown is whether watching videos at higher playback speeds has long-term effects on mental function and brain activity. In theory, these effects could be positive, such as a greater capacity to handle a heavier cognitive load. Or they could be negative, such as greater mental fatigue resulting from the increased cognitive load. At present, though, we lack the scientific evidence to answer this question.

A final observation is that even if playing content at, say, 1.5 times normal speed does not affect memory performance, there is evidence to suggest the experience is less enjoyable. That can affect people’s motivation and their experience of learning, which could lead them to find more excuses not to do it. On the other hand, faster playback has become popular, so perhaps once people get used to it there will be no problem. Hopefully, over the coming years, we will come to understand these processes better.

The Conversation

Marcus Pearce does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

ref. ¿Qué le sucede a nuestro cerebro cuando vemos vídeos a velocidades más rápidas de lo normal? – https://theconversation.com/que-le-sucede-a-nuestro-cerebro-cuando-vemos-videos-a-velocidades-mas-rapidas-de-lo-normal-260870

How a lottery-style refund system could boost recycling

Source: The Conversation – Canada – By Jiaying Zhao, Associate Professor, Psychology, University of British Columbia

Imagine you’re standing at a bottle depot with an empty pop can. You can get a dime back, or you can take a chance at winning $1,000. Which would you choose?

Every year, the world produces two trillion beverage containers but only 34 per cent of glass bottles, 40 per cent of plastic bottles and 70 per cent of aluminium cans are recycled.

To increase recycling rates, many countries have adopted deposit refund systems, where you pay a small deposit, say 10 cents, when you buy an eligible beverage container and get this deposit back when you return it to a local depot.

Through this system, approximately 80 per cent of containers in British Columbia and almost 85 per cent of containers in Alberta are recovered. Still, that leaves millions of containers as litter, in landfills or incinerated every year, contributing to pollution and greenhouse gas emissions.

With Canada’s goal of zero plastic waste by 2030 drawing near, a new approach to recycling beverage containers could make a difference.

We recently conducted a research experiment to find out if more people would recycle more often if they had a chance to win a prize.

A lottery-style refund to boost recycling

Psychology research shows that people tend to prefer a small chance to win a large reward over a guaranteed small reward. For example, people would more often prefer a small chance to win $5,000 over receiving a $5 reward.

Applying this insight to recycling, we turned the small guaranteed refund of $0.10 in B.C. and Alberta into a 0.01 per cent chance of getting $1,000. We set up recycling tables at food courts in Vancouver and at a RibFest event in Spruce Grove, Alta.

When people brought their beverage containers to us to recycle, we presented them with five options for a refund. They could get their guaranteed 10 cents, or a chance to win a larger amount of money, the highest option being $1,000.

We found that people preferred the chance to win $1,000 over the other options, and they felt the happiest after making this choice.

To see if the lottery option actually increased recycling, we conducted an experiment where we told people ahead of time that they would get their guaranteed 10-cent refund or that they had a chance to win $1,000 for each bottle they brought to our study.

We found that people brought 47 per cent more beverage containers when we offered them a chance to win $1,000 than when we offered them the guaranteed refund.

Overall, our findings suggest that offering a chance to win a larger amount of money can meaningfully boost beverage container recycling. The excitement of a potential big win can motivate people who may not be enticed by the typical small, guaranteed refund.

Choice matters

A one-size-fits-all approach won’t work. People recycle for different reasons. They also have different risk tolerances, and some may rely on the guaranteed refund for additional income. To capture diverse preferences and needs, it’s vital that the lottery-style refund is offered in addition to the guaranteed refund, not instead of it.

It would also be beneficial to include smaller, more frequent prizes alongside the grand prize, so people win relatively frequently to keep motivations high.

This is Norway’s approach to their recycling lottery, with 39 per cent of people choosing the lottery option when they recycle. In 2023, Norway’s recycling lottery achieved a 92.3 per cent container return rate.

Importantly, our research does not capture people who collect large bags of containers to return to the depot. It’s possible that this demographic may have different preferences for the refund, and future research should examine this group in particular.

Green lottery for good

The lottery-style refund has the same expected payout as the 10-cent refund per bottle. This means that, on average, people will take home the same amount of money as with the guaranteed option, without incurring additional losses or gains. This benevolent factor distinguishes the lottery-style refund from other types of lotteries or gambling that often profit off the players.
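
To make that equivalence concrete, here is a minimal simulation sketch in Python, assuming the 0.01 per cent chance of winning $1,000 described above; the probability and prize come from the study design, while the code itself is purely illustrative.

```python
import random

GUARANTEED_REFUND = 0.10   # dollars per container (the standard B.C./Alberta deposit)
WIN_PROBABILITY = 0.0001   # the 0.01 per cent chance used in the study
PRIZE = 1000.0             # dollars

def lottery_refund() -> float:
    """Payout for one returned container under the lottery-style option."""
    return PRIZE if random.random() < WIN_PROBABILITY else 0.0

# Expected value is identical by construction: 0.0001 * 1,000 = $0.10 per container.
expected_value = WIN_PROBABILITY * PRIZE

# Simulating many returned containers shows the average payout converging on $0.10.
n = 1_000_000
average_payout = sum(lottery_refund() for _ in range(n)) / n

print(f"Guaranteed refund per container: ${GUARANTEED_REFUND:.2f}")
print(f"Expected lottery payout:         ${expected_value:.2f}")
print(f"Simulated average over {n:,} containers: ${average_payout:.4f}")
```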

Since the only way to enter this lottery-style refund is to recycle beverage containers, it’s impossible to directly re-enter any winnings into the lottery. There are also no near-misses, losses disguised as wins, exciting lights and sounds or other sensory stimulation often associated with gambling.

Some might be apprehensive about potential gambling dangers of creating a lottery system. However, there has not been a single case linking the recycling lottery to gambling addiction. There is also no evidence that purchases of beverage containers would increase as a result of the lottery-style refund.

Our study’s transparent design, with clear odds, ensures fairness, unlike casino games built to take players’ cash. For this approach to be successful, deposit refund systems must maintain this transparency in lottery-style program operations and payouts.

If done right, offering a chance to win a higher amount of money for recycling can meaningfully increase recycling rates, contribute to a circular economy and allow people to choose the refund option that works best for them.

The Conversation

Jiaying Zhao receives funding from the Social Sciences and Humanities Research Council of Canada.

Jade Radke receives funding from the Social Sciences and Humanities Research Council of Canada Doctoral Fellowship and the University of British Columbia Indigenous Graduate Fellowship.

ref. How a lottery-style refund system could boost recycling – https://theconversation.com/how-a-lottery-style-refund-system-could-boost-recycling-259896

The Great Lakes are powerful. Learning about ‘rip currents’ can help prevent drowning

Source: The Conversation – Canada – By Chris Houser, Professor in Department of Earth and Environmental Science, and Dean of Science, University of Waterloo

Between 2010 and 2017, there were approximately 50 drowning fatalities each year associated with rough surf and strong currents in the Great Lakes.

In addition to the personal loss experienced by family and friends, these drownings create an annual economic burden on the regional economy of around US$105 million, and that doesn’t include the direct costs of search and rescue.

Types of rip currents

Rip currents — commonly referred to as rips or colloquially as rip tides — are driven by the breaking of waves. These currents extend away from the shoreline and can flow at speeds easily capable of carrying swimmers far from the beach.

Structural rips are common throughout the Great Lakes (Grand Haven on the eastern shore of Lake Michigan, for example) and develop when groynes, jetties and rock structures deflect the alongshore current offshore, beyond the breaking waves. Depending on the waves and the structure, a shadow rip can also develop on the other side of the groyne or jetty.

Rips can also develop anywhere that variations in the bathymetry (the topography of the sand underwater) — such as nearshore bars — cause wave-breaking to vary along the beach, which makes the water thrown landward by the breaking waves return offshore as a concentrated flow at the water’s surface. These are known as channel or bathymetric rips, and they can form along sand beaches in the Great Lakes.

While it can be difficult to spot a channel rip, they can be identified by an area of relatively calm water between breaking waves, a patch of darker water or the offshore flow of water, sediment and debris.

A person caught in a rip is transported away from shore into deeper water, but they are not pulled under the water. If they are a weak swimmer or try to fight the current, they may panic and fail to find a way out of the rip and back to shore before submerging.

Rip current hazards

Most rip fatalities occur on unsupervised beaches or on supervised beaches when and where lifeguards are not present. While many popular beaches near large urban centres have lifeguards, many beaches don’t. Along just the east coast of Lake Huron, there are more than 40 public beaches, including Goderich, Bayfield, Southampton and Sauble Beach, but only two have lifeguard programs (Sarnia and Grand Bend).

Simple warning signs are used on many beaches, but visitors either don’t pay attention or don’t know how to interpret the warning.

Non-local visitors are a high-risk group for drownings. They are less likely to make safe swimming choices than residents or regular beach-goers, because visitors are generally unfamiliar with the beach and its safety measures, have poor knowledge of beach hazards like rip currents and breaking waves and are overconfident in their swimming ability.

Recent findings from a popular beach on Lake Huron suggest that those with less experience at the beach tend to make decisions of convenience rather than based on beach safety. Residents with greater knowledge of the local hazards tend to avoid swimming near where the rip can develop.

But even when people are aware of rip currents and other beach hazards, they may not make the right decisions. Despite the presence of warnings, people’s actions are greatly influenced by the behaviour of others, peer pressure and group-think. The social cost of not entering the water with the group may appear to outweigh the risk posed by entering the water.

Rip channel and current on Lake Huron. (Chris Houser)

The behaviour of beach users is affected by confirmation bias, a cognitive shortcut where a person selectively pays attention to evidence confirming their pre-existing beliefs and ignores evidence to the contrary. When someone enters the water and does not encounter strong waves or currents, they’re more likely to engage in risky behaviour on their next visit to that beach or a similar beach.

Vacationers and day visitors can stay safe only if they are aware that there is the potential for rip currents and rough surf at beaches in the Great Lakes. Just because a beach is accessible and has numerous attractions does not mean it is safe.

Advocating for beach safety

In the United States, the National Oceanic and Atmospheric Administration runs programs designed to educate beach users about surf and rip hazards. But Canada hasn’t implemented a national beach safety strategy.

Education about rips and dangerous surf falls on the shoulders of advocates, many of whom have been impacted by a drowning in the Great Lakes. The Great Lakes Surf Rescue Project has been tracking and educating school and community groups about rip currents and rough surf in the Great Lakes since 2010.

Several new advocacy groups have started in recent years, including Kincardine Beach Safety on Lake Huron and the Rip Current Information Project on Lake Erie. Given that there is limited public interest in surf-related drownings and limited media coverage, these advocacy groups are helping to increase awareness of rip currents and rough surf across the Great Lakes.

To ensure a safe trip to the beach, beachgoers should seek out more information about rip currents and other surf hazards in the Great Lakes.

The Conversation

Chris Houser receives funding from NSERC.

ref. The Great Lakes are powerful. Learning about ‘rip currents’ can help prevent drowning – https://theconversation.com/the-great-lakes-are-powerful-learning-about-rip-currents-can-help-prevent-drowning-260060

The toxic management handbook: six guaranteed ways to make your best employees flee

Source: The Conversation – France – By George Kassar, Full-time Faculty, Research Associate, Performance Analyst, Ascencia Business School

If performance management is not implemented properly, it can demotivate and drive out employees. PeopleImages.com - Yuri A/Shutterstock

Who said that an organization’s main resource and true competitive advantage lies in its employees, their talent or their motivation? After all, maybe your real goal is to empty out your offices, permanently discourage your staff and methodically sabotage your human capital.

If that’s the case, research in performance management offers everything you need.

Originally rooted in early 20th-century rationalization methods, performance management has become a cornerstone of modern management. It has evolved to adapt to contemporary HR needs, focusing more on employee development, engagement and strategic alignment. In theory, it should help guide team efforts, clarify expectations and support individual development. But if poorly implemented, it can become a powerful tool to demotivate, exhaust and push out your most valuable employees.

Here’s how to scare off your best talent. Although the following guidelines are meant to be taken tongue-in-cheek, they remain active in the daily work of some managers.

Management by ‘vague’ objectives

Start by setting vague, unrealistic or contradictory goals. Above all, avoid giving goals meaning, linking them to a clear strategy or backing them with appropriate resources. In short, embrace the “real” SMART goals: stressful, arbitrary, ambiguous, repetitive, and totally disconnected from the field!

According to research in organizational psychology, this approach guarantees anxiety, confusion and disengagement among your teams, significantly increasing their intention to leave the company.

Silence is golden

Avoid all forms of dialogue and communication. Never give feedback. And if you absolutely must, do it rarely and irregularly, make sure it’s disconnected from actual work, and preferably in the form of personal criticism. The absence of regular, task-focused and actionable feedback leaves employees in uncertainty, catches them off-guard during evaluations and gradually undermines their engagement.

How your employees interpret your intentions and feedback matters most. Be careful though: if feedback is perceived as constructive, it may actually boost motivation and learning engagement. But if the same feedback is seen as driven by a manager’s personal agenda (an ego-based attribution), it backfires, leading to demotivation, withdrawal and exit.

Performance evaluation ‘trials’

Hold annual performance review meetings in which you focus solely on mistakes and completely ignore successes or invisible efforts. Be rigid, critical and concentrate only on weaknesses. Make sure to take full credit when the team succeeds; after all, without you, nothing would have been possible. On the other hand, when results fall short, don’t hesitate to highlight errors, assign individual blame and remind them that “you did warn them!”

This kind of performance evaluation, better described as a punitive trial, ensures deep demotivation and accelerates team turnover.

Internal competition, maxed out

Promote a culture of rivalry among colleagues: circulate internal rankings regularly, reward only the top performers, systematically eliminate the lowest ranked without even thinking of helping them improve, devalue the importance of cooperation and let internal competition do the rest. After all, these are the core features of the “famous” method popularized by the late Jack Welch at General Electric.

If you notice a short-term boost of motivation, don’t worry. The long-term effects of Welch’s “vitality curve” will be far more harmful than beneficial. Fierce internal competition is a great tool for destroying trust among teammates and creating a persistently toxic atmosphere, leading to an increase in the number of voluntary departures.

Ignore wellbeing and do not listen, no matter what

We’ve already established that feedback and dialogue should be avoided. But if, by misfortune, they do occur, make sure not to listen to complaints or warning signs related to stress or exhaustion. Offer no support or assistance, and of course, completely ignore the right to disconnect.

By neglecting mental health and refusing to help your employees find meaning in their work – especially when they perform tasks seen as meaningless, repetitive or emotionally draining – you directly increase the risk of burnout and chronic absenteeism.

In addition, always favour highly variable and poorly designed performance bonuses: this will heighten income instability and kill off whatever engagement remains.




Read more:
Meditation and mindfulness at work are welcome, but do they help avoid accountability for toxic culture?


The subtle art of wearing people down

Want to take your talent-repelling skills even further? Draw inspiration from what research identifies as practices and experiences belonging to the three major forms of workplace violence. These include micromanagement, constant pressure, lack of recognition, social isolation and others that generate long-term suffering. Though often invisible, their recurrence gradually wears employees down mentally, then physically, until they finally break.


Obviously, these tips are meant to be taken ironically.

Yet, unfortunately, these toxic practices are all too real in the daily routines of certain managers. If the goal is truly to retain talent and ensure lasting business success, it is essential to centre performance management practices around meaning, fairness and the genuine development of human potential.

The Conversation

George Kassar does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.

ref. The toxic management handbook: six guaranteed ways to make your best employees flee – https://theconversation.com/the-toxic-management-handbook-six-guaranteed-ways-to-make-your-best-employees-flee-260733

Antidepressant withdrawal: new review downplays symptoms but misses the mark for long-term use

Source: The Conversation – UK – By Mark Horowitz, Visiting Clinical Research Fellow in Psychiatry, UCL

marevgenna/Shutterstock.com

A new review of antidepressant withdrawal effects – written by academics, many of whom have close ties to drug manufacturers – risks underestimating the potential harms to long-term antidepressant users by focusing on short-term, industry-funded studies.

There is growing recognition that stopping antidepressants – especially after long-term use – can cause severe and sometimes debilitating withdrawal symptoms, and it is now acknowledged by the UK government as a public health issue.

One of the main reasons this issue took decades to recognise after the release of modern antidepressants onto the market is because medical guidelines, such as those produced by Nice (England’s National Institute for Health and Care Excellence), had for many years declared withdrawal effects to be “brief and mild”.

This description was based on studies run by drug companies, where people had only taken the medication for eight to 12 weeks. As a result, when patients later showed up with severe, long-lasting symptoms, many doctors didn’t take them seriously because these experiences contradicted what the guidelines led them to expect.

Our recent research helps explain this mismatch. We found a clear link between how long someone takes antidepressants and how likely they are to experience withdrawal symptoms – and how severe these symptoms are.

We surveyed NHS patients and found that people who had used antidepressants for more than two years were ten times more likely to have withdrawal effects, five times more likely for those effects to be severe, and 18 times more likely for them to be long lasting compared with those who had taken the drugs for six months or less.

For patients who used antidepressants for less than six months, withdrawal symptoms were mostly mild and brief. Three-quarters reported no or mild symptoms, most of which lasted less than four weeks.

Only one in four of these patients was unable to stop when they wanted to. However, for long-term users (more than two years), two-thirds reported moderate or severe withdrawal effects, with one-quarter reporting severe withdrawal effects. Almost one-third of long-term users reported symptoms that lasted for more than three months. Four-fifths of these patients were unable to stop their antidepressants despite trying.

About 2 million people on antidepressants in England have been taking them for over five years, according to a BBC investigation. And in the US at least 25 million people have taken antidepressants for more than five years. What happens to people in eight-to-12-week studies is a far cry from what happens to millions of people when they stop.

Studying what happens to people after just eight to 12 weeks on antidepressants is like testing car safety by crashing a vehicle into a wall at 5km/h – ignoring the fact that real drivers are out on the roads doing 60km/h.

History repeating itself?

Against this backdrop, a review has just been published in JAMA Psychiatry. Several of the senior authors declare payments from drug companies. In what looks like history repeating itself, the review draws on short-term trials – many funded by the pharmaceutical industry – that were similar to those used to shape early treatment guidelines. The authors conclude that antidepressants do not cause significant withdrawal effects.

Their main analysis is based on eleven trials that compared withdrawal symptoms in people who had stopped antidepressants with those who had continued them or stopped taking a placebo. Six of these trials had people on antidepressants for eight weeks, four for 12 weeks and just one for 26 weeks.

They reported a slightly higher number of withdrawal symptoms in people who had stopped antidepressants, which they say does not constitute a “clinically significant” withdrawal syndrome. They also suggest the symptoms could be explained by the “nocebo effect” – where negative expectations cause people to feel worse.

In our view, the results are likely to greatly underestimate the risk of withdrawal for the millions of people on these drugs for years. The review found no relationship between the duration of use of antidepressants and withdrawal symptoms, but there were too few long-term studies to test this association properly.

In our view, the review probably underestimates short-term withdrawal effects too, because it assumes that the withdrawal-like symptoms people report when stopping a placebo or continuing an antidepressant can simply be subtracted from, and so cancel out, the withdrawal effects of antidepressants. But this is not a valid assumption.

We know that antidepressant withdrawal effects overlap with side-effects and with everyday symptoms, but this does not mean they are the same thing. People stopping a placebo report symptoms such as dizziness and headache, because these are common occurrences. However, as was shown in another recent review, symptoms following discontinuation of a placebo tend to be milder than those experienced when stopping antidepressants, which can be intense enough to require emergency care.

So deducting the rate of symptoms after stopping a placebo or continuing an antidepressant from antidepressant withdrawal symptoms is likely to underestimate the true extent of withdrawal.

The review also doesn’t include several well-designed drug company studies that found high rates of withdrawal symptoms. For example, an American study found that more than 60% of people who stopped antidepressants (after eleven months) experienced withdrawal symptoms.

The authors suggest that depression after stopping antidepressants is probably a return of the original condition, not withdrawal symptoms, because similar rates of depression were seen in people who stopped taking a placebo. But this conclusion is based on limited and unreliable data (that is, relying on participants in studies to report such events without prompting, rather than assessing them systematically) from just five studies.

We hope that uncritical reporting of a review based on the sort of short-term studies that led to under-recognition of withdrawal effects in the first place does not disrupt the growing acceptance of the problem, or slow efforts by the health system to help potentially millions of people who may be severely affected.

The authors and publisher of the new review have been approached for comment.

The Conversation

Mark Horowitz is the author of the Maudsley Deprescribing Guidelines which outlines how to safely stop antidepressants, benzodiazepines, gabapentinoids and z-drugs, for which he receives royalties. He is co-applicant on the RELEASE and RELEASE+ trials in Australia funded by the NHMRC and MRFF examining hyperbolic tapering of antidepressants. He is co-founder and consultant to Outro Health, a digital clinic which helps people to safely stop no longer needed antidepressants in the US. He is a member of the Critical Psychiatry Network, an informal group of psychiatrists.

Joanna Moncrieff was a co-applicant on a study of antidepressant discontinuation funded by the UK’s National Institute for Health Research. She is co-applicant on the RELEASE and RELEASE+ trials in Australia funded by the NHMRC and MRFF examining hyperbolic tapering of antidepressants. She receives modest royalties for books about psychiatric drugs. She is co-chair person of the Critical Psychiatry Network, an informal group of psychiatrists.

ref. Antidepressant withdrawal: new review downplays symptoms but misses the mark for long-term use – https://theconversation.com/antidepressant-withdrawal-new-review-downplays-symptoms-but-misses-the-mark-for-long-term-use-260708

Exporting X's "community notes" model to Meta, TikTok and YouTube: what it will change

Source: The Conversation – in French – By Laurence Grondin-Robillard, Adjunct Professor at the École des médias and doctoral candidate in communication, Université du Québec à Montréal (UQAM)

By following in X's footsteps, Meta may have undermined the reliability of information on its platforms. (Shutterstock)

In February 2024, Meta reduced the discoverability of content deemed "political" on Instagram and Threads, in order to limit users' exposure to controversial posts and to foster a positive experience. Less than a year later, Mark Zuckerberg announced more or less the opposite: the end of the fact-checking programme, replaced by "community notes" along the lines of X (formerly Twitter), together with a loosening of moderation policies.

Meta said it wanted to "restore free expression" on its platforms.

Community notes are a so-called "participatory" moderation system that lets users add annotations to correct or contextualise posts. From one platform to the next, the conditions for becoming a contributor vary little: be an adult, have been active on the platform for some time and never have broken its rules.

Without fanfare, even YouTube and TikTok are now trying this type of moderation in the United States. Presented as an innovative answer to the challenges posed by the circulation of fake news, the model relies on empowering users to arbitrate the quality of information. Yet this trend reveals a broader shift: the gradual disengagement of social media platforms from fact-checking and journalism.

So what do we actually know about community notes?

As an adjunct professor and doctoral candidate in communication at the Université du Québec à Montréal, I study the transformations that are redefining our relationship to technology and information while reconfiguring how social media platforms are governed.

Community moderation: what the research says

Community notes are still a very recent feature. Initially known as Birdwatch on Twitter, they were rolled out in the wake of the January 2021 assault on the Capitol with a first group of 1,000 contributors in the United States. Access was gradually widened to a sample of around 10,000 participants by March 2022.

After Elon Musk bought Twitter that same year and the mass lay-offs that followed, notably in the moderation teams, the system became central to the platform's decentralised moderation strategy.

The scientific literature on the question is limited, not only because the model is recent but also because the platform X is its sole object of study. It nonetheless highlights some interesting features of this type of moderation.

First, community notes appear to help curb the circulation of misinformation, reducing reposts by up to 62%. They also appear to increase by 103.4% the likelihood that users delete the flagged content, as well as lowering its overall engagement.

However, it is important to distinguish misinformation from disinformation. The studies focus on the former, because the malicious intent that defines disinformation is difficult to demonstrate methodologically. Disinformation is even absent from the categories X imposes on note writers, who must classify content as misinformed, potentially misleading or not misleading. This narrow framing helps render invisible a phenomenon that is nonetheless central to the dynamics of information manipulation.




Read more:
From Twitter to X: how Elon Musk is shaping the American political conversation


Second, users appear to judge community notes more credible than simple fake-news or disinformation labels, because the notes provide explanatory context. Contributors also seem to concentrate on posts from influential accounts, which could limit the reach of misinformation.

Finally, the research underlines how fact-checking and community notes complement each other. The notes frequently draw on professional sources, particularly for complex content, and extend the work begun by professionals.

Fact-checkers and journalists provide rigour, speed and reliability, while the notes, slower to spread, enjoy a reserve of trust on a platform where journalism and the news media are often contested. Their joint role is therefore self-evident, contrary to the ideas championed by Musk and Zuckerberg.

The illusion of a community in the service of profitability

The benefits the web giants derive from adopting this model are far from negligible: not only do they rely on users themselves to counter "disinformation", they also stimulate those users' activity and engagement on the platform.

And the more time users spend there, the more their attention can be monetised for advertisers, and therefore the more profitable it is for these platforms. The model also delivers substantial savings by reducing the need for moderation staff and limiting investment in fact-checking programmes.

Despite its apparent openness, this system, as deployed on X, is not genuinely "community-driven" in the way a project such as Wikipedia can be. It rests neither on the transparency of contributions nor on a collaborative process with a shared goal.

In reality, it is more of an algorithmic sorting system: a selective filter based on visibility criteria optimised to preserve a perceived balance between diverging opinions. However factual a note may be, it only becomes visible once it clears a series of hurdles, such as the so-called "bridging algorithm", which displays a note to all users only if it has been approved by users with opposing viewpoints.
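For readers curious about the mechanics, here is a deliberately simplified sketch of such a bridging rule in Python. The two viewpoint "clusters" and the 60% approval threshold are illustrative assumptions for the example; X's actual system is more elaborate and scores raters and notes jointly rather than relying on fixed groups.

    # Simplified sketch of a "bridging" rule: a note is shown only if raters from
    # opposing viewpoint clusters both judge it helpful. The clusters and the 0.6
    # threshold are illustrative assumptions, not X's real parameters.
    from collections import defaultdict

    def note_is_visible(ratings, threshold=0.6):
        """ratings: list of (rater_cluster, is_helpful) pairs, e.g. ("left", True)."""
        helpful = defaultdict(int)
        total = defaultdict(int)
        for cluster, is_helpful in ratings:
            total[cluster] += 1
            helpful[cluster] += int(is_helpful)
        # Require support from both clusters, not just an overall majority.
        return all(
            total[c] > 0 and helpful[c] / total[c] >= threshold
            for c in ("left", "right")
        )

    ratings = [("left", True), ("left", True), ("right", True), ("right", False), ("right", True)]
    print(note_is_visible(ratings))  # True: both clusters mostly found the note helpful

A note backed overwhelmingly by one side alone would fail this test, which helps explain why, as noted below, so few proposed notes ever become visible.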

In practice, this requirement considerably limits the system's ability to surface even relevant corrections. According to an analysis by Poynter, fewer than 10% of the notes proposed on X ever become visible. That rate reportedly fell further after a change to the algorithm last February, a week after Elon Musk complained about a note debunking anti-Ukrainian disinformation.

Moreover, there is no measure of the accuracy or quality of the notes. Their visibility depends solely on whether users from varied ideological currents perceive them as "helpful". And the fact that a consensus forms around a note does not necessarily mean it reflects a fact.

Quality information is not the priority

The rhetoric of "free speech" advanced by those who control the distribution channels of social media is at best a misreading, at worst hypocrisy. Through opaque algorithms, the web giants decide the visibility and reach of community notes.

These mechanisms and discourses feed the erosion of trust in journalism and fact-checking, because on these platforms the quality of information matters less than its capacity to attract attention and circulate. Meta's record in Canada is revealing. By blocking access to news media in response to Bill C-18, the company showed it could act almost with impunity. Even during an election period, advertising spending kept flowing in, including from the very parties and elected officials who had denounced the blackout.




Read more:
I tried X Premium and Meta Verified on Instagram: here is what I found about verification checkmarks


Against this backdrop, the fight against "disinformation" is a noble but unequal battle against an elusive enemy, fuelled by the relentless mechanics of algorithms and the ideology of a well-entrenched broligarchy.

As the American professors and economists Hunt Allcott and Matthew Gentzkow already noted in 2017, fake news thrives because it is cheaper to produce than real news, more viral and more gratifying for certain audiences. As long as platforms keep prioritising the circulation of content over its quality, the battle against "disinformation" will remain deeply unbalanced, whatever the strategy.

Rethinking free expression in the age of algorithms

If the export of community notes beyond American borders goes ahead, it will represent progress only for the owners of these platforms. The model presents itself as open, but it rests on a controlled delegation, bounded by algorithms that still filter what deserves to be seen.

It is not the community that decides: it is the system that chooses what the community is supposed to think.

By ceding part of the journalistic work to these opaque arrangements, we have weakened what guarantees the quality of information: accuracy, rigour, impartiality and so on. Far from a democratisation, what is taking place is a depoliticisation of moderation in which everything, even the facts, becomes a matter of profitability.

Elon Musk declares: "You are the media now." The question to ask now is this: do we really have a free voice, or are we merely formatted variables in an algorithm?

La Conversation Canada

Laurence Grondin-Robillard does not work for, consult to, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.

ref. Exporting X's "community notes" model to Meta, TikTok and YouTube: what it will change – https://theconversation.com/exportation-du-modele-des-notes-de-la-communaute-de-x-vers-meta-tiktok-et-youtube-ce-que-ca-va-changer-255680