Ultra-trail running: between the pursuit of performance and an inner quest

Source: The Conversation – in French – By Olivier Bessy, Professor Emeritus, researcher at the TREE-UMR-CNRS 6031 laboratory, Université de Pau et des pays de l’Adour (UPPA)

Ultra-trail running, which involves covering at least 80 kilometers, fascinates and attracts an ever-growing public. What are runners looking for in a race made of prolonged effort and long inner journeys?


Within trail running (a mix of walking and running on trails), ultra-trail has stood out as a singular practice since the 1990s and 2000s. In a quarter of a century, the discipline has revolutionized running in natural settings, offering an ever-growing number of races and meeting the expectations of more and more runners in search of the extreme and of somewhere else. But more than the relative number of ultra-trails organized and ultra-trailers counted, it is the imaginary associated with the practice that drives its current popularity, reaching beyond the circle of devotees and attracting the media.

Experiencing an ultra-trail has become the ne plus ultra for a certain segment of the population, because the discipline embodies and hybridizes the imaginaries of performance, adventure, the quest for self, solidarity and nature.

The three great pioneering events bear this out. The Ultra-Trail du Mont-Blanc (UTMB) breaks its own record every year for registration requests (75,000 in 2025) and for people present in the Mont-Blanc area during race week (100,000), not to mention the considerable number of online spectators on Live Trail. The same goes for the Grand Raid de La Réunion, which also reached record attendance this year (7,143 registered out of 60,000 requests), including 2,845 for the legendary Diagonale des Fous. Les Templiers, held in the Causses around Millau (Aveyron), completes the legendary trio: in 2025, 15,000 runners registered (out of 90,000 requests), including 2,800 for the Grand Trail des Templiers, the flagship race, and 1,500 for the Endurance ultra-trail.

But why do so many people choose to take on such extreme efforts? We sought to understand this by conducting around a hundred interviews and 300 short questionnaires with participants in the Diagonale des Fous and the UTMB between 2021 and 2024.

The paradigm of hypermodernity

At the turn of the 1980s and 1990s, hypermodern society came to value the pursuit of performance, the intensification of lifestyles and the staging of the self.

The cult of performance has become the dominant model for producing one’s own existence: everyone is invited to explore their limits, to prove their excellence and to invent themselves. Ultra-trail running answers the anxious cult of the “performing self,” to borrow Alain Courtine’s expression, which runs through Western societies in contrast with the “submissive self,” and echoes the spread of “mass heroism.”

The ultra-trailer likes to live through extreme sensations, confronting unusual situations at the outer edge of their abilities. They like managing the unpredictable and flirting symbolically with death in order to re-enchant their life. Ultra-trail makes up for an overly monotonous daily routine and serves as an antidote to a fractured identity. Living is no longer enough; one needs to feel that one exists.

The staging of the self completes the picture. The ultra-trailer generally seeks an unmatched stage on which to make their exploit visible. The harder it is, the more they expect to become a hero in the eyes of others and to reap benefits with high symbolic returns on social media and on the stage of social life. The individual becomes the primary theater of exploration, at once object and subject of the experience.

This paradigm is governed by a logic of acceleration specific to the globalized capitalist economy, which asks everyone to optimize their resources to gain efficiency: racing against the clock, turning to the latest technological innovations, but also increasingly adopting doping-like behaviors. The latter can be seen in the use of nonsteroidal anti-inflammatory drugs (NSAIDs), taken to maximize the chances of meeting the challenge at hand, even though these medications can be harmful to health during extreme exertion.

Eddy, 42, a Parisian and senior executive at a large company, testifies:

“Taking on the Diagonale des Fous, there is nothing better to prove to yourself that you are still capable of doing intense things. It’s a challenge of extremes and of courage that suits me perfectly, because you set yourself a time and you try to perform by every means to achieve it.”

The paradigm of transmodernity

Faced with these excesses of existence and with new societal challenges, another cultural form emerged during the 2010s: transmodernity, as the philosopher Rosa-Maria Rodriguez Magda calls it. Inspired by the work of the sociologists Edgar Morin and Hartmut Rosa, it makes two models coexist: the old but still active model of technocapitalism and the emerging model of ecohumanism. Transmodernity renews our vision of how we inhabit the world by hybridizing its reference models. The quest for meaning then becomes central to regulating this contemporary tension, as people try to build coherence into their ways of life. Ecohumanism shows through in the faint signs of a new art of living, visible in a new relationship to oneself and to time, a new relationship to others and to the environment.

A new relationship to oneself and to time takes hold insofar as the ultra-trail experience sits on a continuum running from the pursuit of performance to the inner quest, alternating moments of acceleration and deceleration. Running an ultra-trail unfolds over a long time frame that favors moments of deceleration conducive to “entering into resonance.” This approach is taken up by the sociologist Romain Rochedy, who analyzes ultra-trail “as a space of deceleration.”

My research shows that a new relationship to others also emerges. While ultra-trailers often retreat into their bubble to go as fast as possible or simply to keep moving forward, they are also increasingly keen to share moments of solidarity, emotion and collective communion. Taking part in an ultra-trail does indeed weave social bonds, helped by the fading of the individual persona in favor of the language of the body and the shared collective experience, which transcend differences.

Finally, a new relationship to nature appears. In their practice, ultra-trailers alternate between taming a nature viewed as an adversary and immersing themselves in an enveloping nature pictured as a partner. Ultra-trail can be understood as a dive into the depths of both “nature” and one’s own nature, because it offers everyone the chance to build an intimate relationship somewhere between “doing with” and “being with” nature.

Éric, 48, a physical therapist from Toulouse, says:

“Every ultra-trail I take part in, I experience as an introspection. I listen to my body. I take my time so I can savor it, in search of particular emotions. I build privileged relationships with the other runners and with nature. All these races foster individual trajectories that ultimately form a collective whole. It’s a moment of social cohesion that is not common.”

A societal laboratory

Ultra-trail reflects the paradoxes of an era torn between two paradigms: hypermodern acceleration and transmodern deceleration. Taking part in this kind of event strengthens the sense of identity for people in search of more solid bearings, as if exploring one’s physical limits were replacing the limits of meaning that the social order no longer provides. It becomes a matter of taking one’s destiny back in hand, of weaving a thread that ties the real self to an admirable possible self.

Ultra-trail thus becomes a metaphor for our era, because it symbolizes the ambivalence of the reference models it mobilizes. The practice allows each aspirant to step outside themselves, going beyond their usual bearings, and to come back more resilient in the face of life’s torments. It amounts, then, to a form of rebirth. That is why the quest for the ultra is so strong today.


Olivier Bessy is the author of “Courir sans limites. La révolution de l’ultra-trail (1990-2025)” (Outdoor éditions, 2025).

The Conversation

Olivier Bessy does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. L’ultra-trail, entre recherche de performance et quête intérieure – https://theconversation.com/lultra-trail-entre-recherche-de-performance-et-quete-interieure-268152

A, B, C or D – grades might not say all that much about what students are actually learning

Source: The Conversation – USA (2) – By Joshua Rowe Eyler, Assistant Professor of Teacher Education, University of Mississippi

Letter grades have long been part of the fabric of the American educational system. iStock/Getty Images Plus

Grades are a standard part of the American educational system that most students and teachers take for granted.

But what if students didn’t have just one shot at acing a midterm, or even could talk with their teachers about what grade they should receive?

Alternative grading has existed in the U.S. for decades, but a growing number of educators are now trying out forms of nontraditional grading, according to Joshua Eyler, a scholar of teacher education. Amy Lieberman, education editor at The Conversation U.S., spoke with Eyler to better understand what alternative grading looks like and why more educators are thinking creatively about assessing learning.

Why are some scholars and educators reconsidering grading practices?

For more than 80 years, U.S. students from roughly seventh grade through college have generally earned one grade for each assignment, and a student’s cumulative grades are then averaged at the end of the semester. The final grade gets placed on the student’s transcript.

In some ways, all of the attention is on the grade itself.

Some educators, including me, are trying to rethink the way we grade. Traditional grading is not always an accurate – or the best – way to demonstrate mastery and learning.

Many college faculty across the U.S., as well as some K-12 teachers and districts, are currently experimenting with different approaches and models of grading – typically doing this work on their own but sometimes also in coordination with their schools.

A group of young people are seen from behind walking in front of lockers and carrying backpacks.
High school students walk down the halls of Bonny Eagle High School in Standish, Maine, in 2020.
Shawn Patrick Ouellette/Portland Press Herald via Getty Images

Why is this idea now gaining steam?

Scholars have been researching grades for many decades – there are foundational papers from the early 20th century that scholars today still discuss.

More recently, alternative grading picked up steam in the past 15 to 20 years. Researchers like me have been focused on how grades affect learning.

Grades have been found to decrease students’ intrinsic motivation, and an overemphasis on grades has been shown to alter learning environments at all levels, leading to academic misconduct – meaning cheating.

Grades have also been shown to cultivate a fear of failure among students, at all ages, and inhibit them from taking intellectual risks and expressing creativity. We want students to be bold, creative thinkers and to try out new ideas.

Are there other challenges that alternative grading is trying to correct?

Grades mirror and magnify inequities that have always been a part of American educational systems.

Students who come from K-12 schools with fewer resources, for example, often do not have many textbooks. They often have few, if any, AP courses. These students can develop what researchers call “opportunity gaps.” They do not have the same educational opportunities that students at schools with more resources have.

When students from low-resourced high schools go to college, they can receive worse grades than students who come from better-resourced schools – typically because of these opportunity gaps.

Some people would say that this means these students with low grades are not ready for college. In reality, the grades reflect these students’ past educational experiences – not their potential in college. Once those less-than-stellar grades appear on these students’ transcripts in their first and second years of college, it becomes really hard for students to hit milestones that they need to reach for particular majors.

If we thought about learning a bit differently, those students might have a better shot at reaching their goals.

What do alternative grading models look like in practice?

There are a lot of different grading approaches people are trying, but I would say in the past 10 to 15 years, the movement has really exploded and there is a lot of discussion about it throughout higher education.

With standards-based grading, a biology teacher, for example, would set out a certain number of content- and skill-based standards that they want students to achieve – like understanding photosynthesis. The student’s grade is based on how many of those standards they show competency in by the end of the semester.

A student could show competency in a variety of ways, like a set of exam questions, homework problems or a group project. It is not limited to one type of assessment to demonstrate learning. This grading approach acknowledges that learning is a deeply complicated process that unfolds at different rates for different students.
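To make that concrete, here is a minimal Python sketch of how a standards-based grade might be computed. The standards, the recorded evidence and the letter-grade cutoffs are hypothetical placeholders; actual implementations vary from teacher to teacher.

    # Minimal sketch of standards-based grading (hypothetical standards and cutoffs).
    # The grade depends on how many course standards a student has demonstrated,
    # not on averaging points across individual assignments.

    standards = ["photosynthesis", "cell_respiration", "genetics", "evolution", "ecology"]

    # True means the student demonstrated competency in that standard at least once,
    # whether on an exam question, a homework problem or a group project.
    evidence = {
        "photosynthesis": True,
        "cell_respiration": True,
        "genetics": False,
        "evolution": True,
        "ecology": True,
    }

    def standards_based_grade(evidence, standards):
        met = sum(1 for s in standards if evidence.get(s, False))
        fraction = met / len(standards)
        if fraction >= 0.9:   # cutoffs are the teacher's choice, not a fixed rule
            return "A"
        if fraction >= 0.75:
            return "B"
        if fraction >= 0.6:
            return "C"
        return "Not yet"      # many models replace D/F with "not yet demonstrated"

    print(standards_based_grade(evidence, standards))  # prints "B" (4 of 5 standards met)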

Other models could look like offering unlimited retakes on tests. Students may have to qualify for the retake by correcting all of the questions they got wrong on a previous exam. Or, teachers set up new assignments that draw on older standards students have previously met, so students have a second shot.

Portfolio-based grading is common in the arts and in writing programs. A student has a lot of time to turn in an assignment and then get feedback on it from their teacher – but no grade. The student eventually puts together a portfolio with the best of their assignments, and the portfolio as an entirety receives a grade.

Another method is called collaborative grading, or ungrading, where students don’t get grades throughout the semester. Instead, they get feedback from their teachers and complete self-assessments. At the end of the semester, the student and teacher collaboratively determine a grade.

What is stopping alternative grading from becoming more widespread?

There have been bursts of activity with grading reform over the past 100 years. The 1960s are a great example of such a period of activity. This is when gradeless colleges like The Evergreen State College were founded.

Social media has helped this particular recent iteration gain traction, as educators can more easily communicate with other people who are grading in different ways.

We are seeing the beginnings of a movement where individuals are trying to do something on this issue. But the issue has not yet drawn together coalitions of people who agree they want change on grading.

Alternative forms of grading have caught on in some private schools but have not gained traction in others. The same is true of public schools. Some challenges include logistical support from administrations in K-12 and colleges, teacher buy-in and parental support – especially in K-12 settings.

There is nothing more baked into the fabric of education than the idea of grades. Talking about reforming grading shakes this foundation a little, and that is why it is important to discuss what the alternatives are.

The Conversation

Joshua Rowe Eyler does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A, B, C or D – grades might not say all that much about what students are actually learning – https://theconversation.com/a-b-c-or-d-grades-might-not-say-all-that-much-about-what-students-are-actually-learning-269066

My prescription costs what?! Pharmacists offer tips that could reduce your out-of-pocket drug costs

Source: The Conversation – USA (3) – By Sujith Ramachandran, Associate Professor of Pharmacy Administration, University of Mississippi

Out-of-pocket costs to fill prescriptions can vary widely. Malte Mueller/fStop via Getty Images

Even when Americans have health insurance, they can have a hard time affording the drugs they’ve been prescribed.

About 1 in 5 U.S. adults skip filling a prescription due to its cost at least once a year, according to KFF, a health research organization. And 1 in 3 take steps to cut their prescription drug costs, such as splitting pills when it’s not medically necessary or switching to an over-the-counter drug instead of the one that their medical provider prescribed.

As pharmacy professors who research prescription drug access, we think it’s important for Americans to know that it is possible to get prescriptions filled more affordably, as long as you know how before you go to the pharmacy.

Cost of copays ranges widely

When you have health insurance and have to pay for a prescription drug at the pharmacy, you’re usually covering the cost of your copay. This is the amount patients or their caregivers are expected to pay after insurance covers the rest of the tab.

If you get your health insurance through Medicaid, the government program that covers low-income Americans and people with disabilities, you should not have to pay anything at all to obtain prescription drugs. If there is a copay, it should be low – probably less than US$5.

And if you’re insured through Medicare, the government program that mainly covers people who are 65 and older, or get your coverage through a private health insurance company, it’s important to understand what to expect when you visit a pharmacy.

Most private insurance companies charge US$5 to $50 for prescription drug copays. The copays are tiered based on what the drug costs. Brand-name and specialty medications have higher copays; older generics have lower copays.

Some generic drugs and vaccines may even require no copay at all. While a copay is a flat fee, it can change over the course of the year based on whether or not you have met your deductible. The deductible is the amount of money you have to pay out of pocket before your insurance starts covering your prescriptions. Before your deductible is fully paid, you may be responsible for the full cost of your medications. After you’ve met your deductible for the year, you will only be required to pay the copay.

As newer, more expensive drugs enter the market, cost-sharing at the pharmacy has increasingly shifted from a copay to coinsurance.

In contrast with a flat copay, coinsurance means your insurance company will cover a certain percentage of the drug’s cost, and you’ll pay the rest. Since the patient’s share is based on a percentage of the medication’s price, coinsurance often results in higher out-of-pocket costs than copays do.
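The difference between these cost-sharing designs is easiest to see with numbers. The short Python sketch below walks through the arithmetic for a single prescription fill; the drug price, copay, coinsurance rate and deductible are made-up figures, and real plans add further wrinkles such as out-of-pocket maximums.

    # Simplified illustration of pharmacy cost-sharing. All dollar amounts,
    # the coinsurance rate and the deductible are hypothetical examples.

    def fill_cost(price, deductible_remaining, copay=None, coinsurance=None):
        """Estimate what the patient pays for one prescription fill."""
        if deductible_remaining > 0:
            # Deductible phase: the patient can be on the hook for the full price.
            return price, max(0, deductible_remaining - price)
        if copay is not None:
            # Copay phase: a flat fee, regardless of the drug's price.
            return min(copay, price), 0
        # Coinsurance phase: a fixed percentage of the drug's price.
        return round(price * coinsurance, 2), 0

    # The same $400 brand-name drug under three scenarios:
    print(fill_cost(400, deductible_remaining=150))                  # (400, 0): deductible not yet met
    print(fill_cost(400, deductible_remaining=0, copay=50))          # (50, 0): flat copay
    print(fill_cost(400, deductible_remaining=0, coinsurance=0.25))  # (100.0, 0): 25% coinsurance

In the first scenario the whole $400 counts toward the deductible; once the deductible is met, the patient’s share drops to the flat copay or the coinsurance percentage.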

New help for patients with Medicare coverage

Two new government programs could help make prescription drugs more affordable for millions of older Americans.

Starting in 2026, people who are insured through Medicare will pay no more than $2,100 out of pocket on prescription drugs over the year. That cap may be much lower than $2,100 due to a quirk in Medicare’s rules. Prescriptions filled after someone has paid the maximum allowable amount will cost them nothing at all.

In addition, the government launched the Medicare Prescription Payment Plan in 2025. This program, which is available to people over 65, helps spread what patients spend out of pocket on prescription drugs throughout the year, making that expense more predictable and easier to budget for.

Early data indicates that very few Americans are enrolled in the Medicare Prescription Payment Plan. Patients insured through private companies do not have similar opportunities.

Consumers should find out whether they qualify for state or federal programs that could lower the cost of their medications.

Coupons and discount cards

What if you can’t afford a copay for your prescription drug?

Before giving up on filling it, ask the pharmacist about your options.

It may be worth trying to use a free online tool, such as RxAssist, sponsored by the Robert Wood Johnson Foundation, or a discount card from GoodRx, which is a publicly traded company.

GoodRx cards are free. They help people compare local pharmacy prices and locate coupons that make prescriptions more affordable.

GoodRx works by searching for the lowest available price for the prescription at various pharmacies. Other copay coupons provided by the drug manufacturer may also work similarly by lowering the cost of the medication. On some occasions, the cash price at the pharmacy may actually be cheaper than the copay, and the pharmacist should be able to help you navigate these options.

Here’s what you should know before giving GoodRx a try:

  1. GoodRx collects individual data on patients, raising significant privacy concerns.

  2. Some pharmacies do not accept GoodRx. You may have to visit more than one pharmacy to be able to activate its discounts.

  3. These cards may make the most sense for uninsured or underinsured patients, but do not always help those who have insurance because you might not get a better price. What’s more, if you use a discount card, the amount you pay may not count toward your insurance deductible for the year.

You should weigh the caveats closely depending on your circumstance.

A male pharmacist scanning a pharmacy product for his customer.
Your pharmacist can help you navigate the various discount offerings.
CG Tan/E+ via Getty Images

Prescription assistance programs

Prescription assistance programs provide another cost-saving tool for Americans.

Drugmakers, nonprofits and government agencies sponsor those programs, which help patients who are uninsured or underinsured – even if they are on Medicare – fill prescriptions either at a discount or for free.

These programs include manufacturer-specific programs as well as charitable pharmacies like Dispensary of Hope, NOVA Scripts Central and the Patient Advocate Foundation. Qualifying criteria vary for these programs, but typically you must have a low income and be a citizen or a legal U.S. resident.

The Patient Access Network Foundation and RxAssist, two nonprofits that help Americans pay their medical bills, also offer helpful tools to identify programs that could work for you.

Assistance from these programs could cut your copay or even provide a prescription drug at no cost.

Separately, the Trump administration announced in November 2025 that a new White House prescription drug pricing program will soon begin to connect consumers to companies that have agreed to sell certain prescription drugs at a big discount.

Many experts don’t expect the program, known as TrumpRx, to help people who have health insurance. Instead, it could be most likely to help those with no insurance at all. The new government program is slated to begin to roll out in 2026.

Direct-to-consumer models

Beyond coupons and assistance programs, a more radical shift is in the works: direct-to-consumer platforms and cash-payment models.

In 2025, several manufacturers offered to sell medications directly to patients on websites and patient portals at cash prices. For example, the drug manufacturer Eli Lilly is offering its popular weight-loss medication, Zepbound, on its website.

These websites charge out-of-pocket prices that can run upward of $300 a month, which is too high for many, if not most, Americans to afford. And insurance companies have so far refused to cover them.

To be sure, the systems underlying these programs are still being built. We believe the Trump administration would need to do more to make filling prescriptions affordable for millions of Americans.

The Conversation

Sujith Ramachandran received funding from Robert Wood Johnson Foundation, and provides consulting services for the National Community Pharmacists Association for work related to this topic.

Adam Pate does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. My prescription costs what?! Pharmacists offer tips that could reduce your out-of-pocket drug costs – https://theconversation.com/my-prescription-costs-what-pharmacists-offer-tips-that-could-reduce-your-out-of-pocket-drug-costs-268067

Gazing into the mind’s eye with mice – how neuroscientists are seeing human vision more clearly

Source: The Conversation – USA – By Bilal Haider, Associate Professor of Biomedical Engineering, Georgia Institute of Technology

Mice have complex visual systems that can clarify how vision works in people. Westend61/Getty Images

Despite the nursery rhyme about three blind mice, mouse eyesight is surprisingly sensitive. Studying how mice see has helped researchers discover unprecedented details about how individual brain cells communicate and work together to create a mental picture of the visual world.

I am a neuroscientist who studies how brain cells drive visual perception and how these processes can fail in conditions such as autism. My lab “listens” to the electrical activity of neurons in the outermost part of the brain called the cerebral cortex, a large portion of which processes visual information. Injuries to the visual cortex can lead to blindness and other visual deficits, even when the eyes themselves are unhurt.

Understanding the activity of individual neurons – and how they work together while the brain is actively using and processing information – is a long-standing goal of neuroscience. Researchers have moved much closer to achieving this goal thanks to new technologies aimed at the mouse visual system. And these findings will help scientists better see how the visual systems of people work.

The mind in the blink of an eye

Researchers long thought that mouse vision was sluggish and low in clarity. But it turns out visual cortex neurons in mice – just like those in humans, monkeys, cats and ferrets – require specific visual features to trigger activity and are particularly selective in alert and awake conditions.

My colleagues and I and others have found that mice are especially sensitive to visual stimuli directly in front of them. This is surprising, because mouse eyes face outward rather than forward. Forward-facing eyes, like those of cats and primates, naturally have a larger area of focus straight ahead compared to outward-facing eyes.

Microscopy image of stacks of neurons
This image shows neurons in the mouse retina: cone photoreceptors (red), bipolar neurons (magenta), and a subtype of bipolar neuron (green).
Brian Liu and Melanie Samuel/Baylor College of Medicine/NIH via Flickr

This finding suggests that the specialization of the visual system to highlight the frontal visual field appears to be shared between mice and humans. For mice, a visual focus on what’s straight ahead may help them be more responsive to shadows or edges in front of them, helping them avoid looming predators or better hunt and capture insects for food.

Importantly, the center of view is most affected in aging and many visual diseases in people. Since mice also rely heavily on this part of the visual field, they may be particularly useful models to study and treat visual impairment.

A thousand voices drive complicated choices

Advances in technology have greatly accelerated scientific understanding of vision and the brain. Researchers can now routinely record the activity of thousands of neurons at the same time and pair this data with real-time video of a mouse’s face, pupil and body movements. This method can show how behavior interacts with brain activity.

It’s like spending years listening to a grainy recording of a symphony with one featured soloist, but now you have a pristine recording where you can hear every single musician with a note-by-note readout of every single finger movement.

Using these improved methods, researchers like me are studying how specific types of neurons work together during complex visual behaviors. This involves analyzing how factors such as movement, alertness and the environment influence visual activity in the brain.

For example, my lab and I found that the speed of visual signaling is highly sensitive to what actions are possible in the physical environment. If a mouse rests on a disc that permits running, visual signals travel to the cortex faster than if the mouse views the same images while resting in a stationary tube – even when the mouse is totally still in both conditions.

In order to connect electrical activity to visual perception, researchers also have to ask a mouse what it thinks it sees. How have we done this?

The last decade has seen researchers debunking long-standing myths about mouse learning and behavior. Like other rodents, mice are also surprisingly clever and can learn how to “tell” researchers about the visual events they perceive through their behavior.

For example, mice can learn to release a lever to indicate they have detected that a pattern has brightened or tilted. They can rotate a Lego wheel left or right to move a visual stimulus to the center of a screen like a video game, and they can stop running on a wheel and lick a water spout when they detect the visual scene has suddenly changed.

Mouse drinking from a metal water spout
Mice can be trained to drink water as a way to ‘tell’ researchers they see something.
felixmizioznikov/iStock via Getty Images Plus

Mice can also use visual cues to focus their visual processing to specific parts of the visual field. As a result, they can more quickly and accurately respond to visual stimuli that appear in those regions. For example, my team and I found that a faint visual image in the peripheral visual field is difficult for mice to detect. But once they do notice it – and tell us by licking a water spout – their subsequent responses are faster and more accurate.

These improvements come at a cost: If the image unexpectedly appears in a different location, the mice are slower and less likely to respond to it. These findings resemble those found in studies on spatial attention in people.

My lab has also found that particular types of inhibitory neurons – brain cells that prevent activity from spreading – strongly control the strength of visual signals. When we activated certain inhibitory neurons in the visual cortex of mice, we could effectively “erase” their perception of an image.

These kinds of experiments are also revealing that the boundaries between perception and action in the brain are much less separate than once thought. This means that visual neurons will respond differently to the same image in ways that depend on behavioral circumstances – for example, visual responses differ if the image will be successfully detected, if it appears while the mouse is moving, or if it appears when the mouse is thirsty or hydrated.

Understanding how different factors shape how cortical neurons rapidly respond to visual images will require advances in computational tools that can separate the contribution of these behavioral signals from the visual ones. Researchers also need technologies that can isolate how specific types of brain cells carry and communicate these signals.

Data clouds encircling the globe

This surge of research on the mouse visual system has led to a significant increase in the amount of data that scientists can not only gather in a single experiment but also share publicly with one another.

Major national and international research centers focused on unraveling the circuitry of the mouse visual system have been leading the charge in ushering in new optical, electrical and biological tools to measure large numbers of visual neurons in action. Moreover, they make all the data publicly available, inspiring similar efforts around the globe. This collaboration accelerates the ability of researchers to analyze data, replicate findings and make new discoveries.

Technological advances in data collection and sharing can make the culture of scientific discovery more efficient and transparent – a major data informatics goal of neuroscience in the years ahead.

If the past 10 years are anything to go by, I believe such discoveries are just the tip of the iceberg, and the mighty and not-so-blind mouse will play a leading role in the continuing quest to understand the mysteries of the human brain.

The Conversation

Bilal Haider receives funding from NIH and the Simons Foundation.

ref. Gazing into the mind’s eye with mice – how neuroscientists are seeing human vision more clearly – https://theconversation.com/gazing-into-the-minds-eye-with-mice-how-neuroscientists-are-seeing-human-vision-more-clearly-268334

The next frontier in space is closer than you think – welcome to the world of very low Earth orbit satellites

Source: The Conversation – USA – By Sven Bilén, Professor of Engineering Design, Electrical Engineering and Aerospace Engineering, Penn State

The closer a satellite − like this telecommunications one − orbits to Earth, the more atmospheric drag it faces. janiecbros/iStock via Getty Images Plus

There are about 15,000 satellites orbiting the Earth. Most of them, like the International Space Station and the Hubble Telescope, reside in low Earth orbit, or LEO, which tops out at about 1,200 miles (2,000 kilometers) above the Earth’s surface.

But as more and more satellites are launched into LEO – SpaceX’s Starlink internet constellation alone will eventually send many thousands more there – the region’s getting a bit crowded.

Which is why it’s fortunate there’s another orbit, even closer to Earth, that promises to help alleviate the crowding. It’s called VLEO, or very low Earth orbit, and is only 60 to 250 miles (100 to 400 kilometers) above the Earth’s surface.

As an engineer and professor who is developing technologies to extend the human presence beyond Earth, I can tell you that satellites in very low Earth orbit offer advantages over higher-altitude satellites. Among other benefits, VLEO satellites can provide higher-resolution images, faster communications and better atmospheric science. Full disclosure: I’m also a co-founder and co-owner of Victoria Defense, which seeks to commercialize VLEO and other space directed-energy technologies.

Advantages of VLEO

The images from very low Earth orbit satellites are sharper because they simply see Earth more clearly than satellites that are higher up, sort of like how getting closer to a painting helps you see it better. This translates to higher resolution pictures for agriculture, climate science, disaster response and military surveillance purposes.

End-to-end communication is faster, which is ideal for real-time communications, like phone and internet service. Although the signals still travel the same speed, they don’t have as far to go, so latency decreases and conversations happen more smoothly.

Much weather forecasting relies on images of clouds above the Earth, so taking those pictures closer means higher resolution and more data to forecast with.

Because of these benefits, government agencies and industry are working to develop very low Earth orbit satellites.

The holdup: Atmospheric drag

You may be wondering why this region of space, so far, has been avoided for sustained satellite operations. It’s for one major reason: atmospheric drag.

Space is often thought of as a vacuum. So where exactly does space actually start? Although about 62 miles up (100 kilometers) – known as the von Kármán line – is widely considered the starting point, there’s no hard transition where space suddenly begins. Instead, as you move away from Earth, the atmosphere thins out.

Where space begins is relatively arbitrary, but most consider it to be about 62 miles (100 kilometers) high.

In and below very low Earth orbit, the Earth’s atmosphere is still thick enough to slow down satellites, causing those at the lowest altitudes to deorbit in weeks or even days, essentially burning up as they fall back to Earth. To counteract this atmospheric drag and to stay in orbit, the satellite must constantly propel itself forward – like how riding a bike into the wind requires continuous pedaling.
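To get a rough sense of the forces involved, the drag a VLEO satellite must fight can be estimated with the standard drag equation, F = ½ρv²CdA. The Python sketch below uses ballpark numbers chosen only for illustration: the air density, drag coefficient and cross-sectional area are assumed values, and real densities at these altitudes swing widely with solar activity.

    import math

    # Back-of-the-envelope drag estimate for a small satellite near the top of VLEO.
    # Density, drag coefficient and cross-sectional area are assumed round numbers.

    MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6     # mean Earth radius, m

    altitude = 300e3      # meters (about 186 miles)
    rho = 2e-11           # kg/m^3, rough air density at 300 km; varies with solar activity
    cd = 2.2              # drag coefficient assumed for a small satellite
    area = 1.0            # m^2, assumed cross-sectional area

    r = R_EARTH + altitude
    v = math.sqrt(MU_EARTH / r)          # circular orbital speed, roughly 7.7 km/s
    drag = 0.5 * rho * v**2 * cd * area  # drag force, in newtons

    print(f"orbital speed: {v / 1000:.2f} km/s")
    print(f"drag force:    {drag * 1000:.2f} mN")

The result is tiny in absolute terms – on the order of a millinewton – but it must be countered continuously for the satellite to stay in orbit.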

For in-space propulsion, satellites use various types of thrusters, which provide the push needed to keep from slowing down. But in VLEO, thrusters need to be on all, or nearly all, of the time. As such, conventional thrusters would quickly run out of fuel.

Fortunately, the Earth’s atmosphere in VLEO is still thick enough that atmosphere itself can be used as a fuel.

Innovative thruster technologies

That’s where my research comes in. At Penn State, in collaboration with Georgia Tech and funded by the U.S. Department of Defense, our team is developing a new propulsion system designed to work at 43 to 55 miles up (70 to 90 kilometers). Technically, these altitudes are even below very low Earth orbit – making the challenge to overcome drag even more difficult.

Our approach collects the atmosphere using a scoop, like opening your mouth wide as you pedal a bike, then uses high-power microwaves to heat the collected atmosphere. The heated gas is then expelled through a nozzle, which pushes the satellite forward. Our team calls this concept the air-breathing microwave plasma thruster. We’ve been able to demonstrate a prototype thruster in the lab inside a vacuum chamber that simulates the atmospheric pressure found at 50 miles (80 km) high.

This approach is relatively simple, but it holds potential, especially at lower altitudes where the atmosphere is thicker. Higher up, where the atmosphere is thinner, spacecraft could use different types of VLEO thrusters that others are developing to cover large altitude ranges.

Our team isn’t the only one working on thruster technology. Just one example: The U.S. Department of Defense has partnered with defense contractor Redwire to develop Otter, a VLEO satellite with its version of atmosphere-breathing thruster technology.

Another option to keep a satellite in VLEO, which leverages a technology I’ve worked on throughout my career, is to tie a lower-orbiting satellite to a higher-orbiting satellite with a long tether. Although NASA has never flown such a system, a proposed follow-on to the tether satellite system missions flown in the 1990s would have dropped a satellite into a much lower orbit from the space shuttle, connected by a very long tether. We are currently revisiting that system to see whether it could work for VLEO in a modified form.

Other complications

Overcoming drag, though the most difficult, is not the only challenge. Very low Earth orbit satellites are exposed to very high levels of atomic oxygen, which is a highly reactive form of oxygen that quickly corrodes most substances, even plastics.

The satellite’s materials also must withstand extremely high temperatures, above 2,732 degrees Fahrenheit (1,500 degrees Celsius), because friction heats the satellite as it moves through the atmosphere – the same phenomenon all spacecraft experience when they reenter the atmosphere from orbit.

The potential of these satellites is driving research and investment, and proposed missions have become reality. Juniper Research estimates that $220 billion will be invested in just the next three years. Soon, your internet, weather forecasts and security could be even better, fed by VLEO satellites.

The Conversation

Sven Bilén is a co-founder and co-owner of Victoria Defense, which seeks to commercialize VLEO and other space technologies. He receives funding from DARPA and NASA related to VLEO technologies.

ref. The next frontier in space is closer than you think – welcome to the world of very low Earth orbit satellites – https://theconversation.com/the-next-frontier-in-space-is-closer-than-you-think-welcome-to-the-world-of-very-low-earth-orbit-satellites-258252

If tried by court-martial, senator accused of ‘seditious behavior’ would be deprived of several constitutional rights

Source: The Conversation – USA – By Joshua Kastenberg, Professor of Law, University of New Mexico

U.S. Sen. Mark Kelly, D-Ariz., speaks to reporters in Washington, D.C. on Dec. 4, 2025. AP Photo/Kevin Wolf

The Department of Defense in late November 2025 announced that it would investigate U.S. Sen. Mark Kelly, a retired Navy captain and NASA astronaut, for what Secretary of Defense Pete Hegseth has called seditious behavior. The threat of investigation came after Kelly and five other Democrats, all with military backgrounds, released a video reminding U.S. service members they can disobey illegal orders issued by the Trump administration.

“No one has to carry out orders that violate the law, or our Constitution,” the lawmakers said, without specifying the orders the U.S. service members may have received. “Know that we have your back … don’t give up the ship.”

In response to the video, President Donald Trump accused the lawmakers of “seditious behavior” that could be “punishable by death.”

Sedition is a federal crime, but as a military law scholar who served as a judge in the U.S. Air Force, I believe the Democratic lawmakers articulated a correct view of military law. That is, service members subject to the Uniform Code of Military Justice have a duty to not obey unlawful orders.

There are several unique features to military law that have no analog to civilian criminal law, and if Kelly were court-martialed he would be deprived of several fundamental constitutional rights.

Military justice

In a civilian criminal trial the government normally has the burden of proof on all matters. But in a court-martial, a service member who argues that an order is unlawful has the burden of proving its unlawfulness. And the Supreme Court, in its 1827 opinion in Martin v. Mott, gave this view some credence, arguing that the president, as commander in chief, should not be questioned during a national emergency.

Second, ordinary citizens are protected by a constitutional requirement that the prosecution must convince all jurors of the defendant’s guilt beyond a reasonable doubt. A court-martial has only a two-thirds threshold to establish guilt. And the jurors – called members – are not the accused service member’s peers.

Indeed, the court-martial members are military personnel who outrank the accused service member and are picked to serve by senior commanding officers. Military judges are also uniformed officers and, like the rest of the military, are subject to the chain of command.

At times, senior officers have inserted themselves into the military justice system and tried to direct a court-martial to convict an accused service member. This has created the problem of unlawful command influence, the improper use of superior authority to interfere with the court-martial process.

A man speaks to another man wearing a white cap.
Defense Secretary Pete Hegseth has asked the Navy secretary to review Kelly’s comments to troops for ‘potentially unlawful conduct.’
AP Photo/Daniel Kucin Jr.

Kelly is still theoretically subject to the Uniform Code of Military Justice and could be court-martialed because he is a military retiree. This concept of a lifetime military jurisdiction did not exist when the Constitution was instituted in 1789. It came into existence during an emergency session of Congress in 1861.

The Supreme Court has never held that lifetime jurisdiction is constitutional. But in 2022 the U.S. Court of Appeals for the District of Columbia did, in a 2-1 decision.

It reasoned that if the Constitution’s creators had thought such a jurisdiction were a threat to the republic, they would have prohibited it. The dissenting judge in that case pointed out the frightening possibility of a president using the Uniform Code of Military Justice to curb free speech.

Lines of defense

Kelly is different from an ordinary retiree, and this case is bigger than a single senator. That’s because it goes to the heart of what the Constitution’s framers intended by preserving liberty through a republican form of government.

In 1648, Oliver Cromwell, who had become a military dictator over England, used the army to curb the Magna Carta – a revolutionary basic rights document dating to 1215 – and the ability of Parliament to debate matters and pass laws. The Constitution is designed to prevent anything coming close to such an occurrence.

So, what would Kelly’s defense likely be, other than that he exercised free speech and gave a correct recitation of the law?

Kelly’s first defense might be that under the Constitution, the president, as commander in chief, has no power to court-martial or otherwise administratively penalize him. Doing so would diminish Congress’ authority.

In 1974, the Supreme Court determined in Schlesinger v. Reservists Committee that although the Constitution prohibits a member of Congress from holding a position in the executive branch, citizens had no standing to sue in the federal courts to prevent this from occurring. Taken literally, the clause means that no member of Congress could hold a military commission and be beholden to the commander in chief, since this would erode Congress’ independence and authority.

Kelly’s second defense could be that after the Constitution and statutory law, the military law is governed by tradition, or the military’s own past practices, which used to be referred to as “lex non scripta.”

American history is replete with retired officers criticizing presidents or even joining in hate groups that accused a president of being beholden to subversive interests. Past presidents have ignored these men.

They include George Van Horn Moseley, who sided with pro-Nazi groups and accused President Franklin Roosevelt of being a communist. Retired generals Albert Coady Wedemeyer and Bonner Fellers formed organizations that undermined Presidents Harry Truman and Dwight Eisenhower.

A black and white photo shows Chinese and American military leaders.
Maj. Gen. Albert C. Wedemeyer greets Chinese military leaders in southwest China, on Jan. 18, 1945.
AP Photo

None of these men were court-martialed or administratively penalized.

Finally, Kelly could argue in federal court that the military has no jurisdiction over him because of the issue of unlawful command influence. One only needs to look at Hegseth’s statements in the case to see the specter of this problem in regard to Kelly.

When Congress formulated the Uniform Code of Military Justice, it criminalized unlawful command influence. But as military law scholar Rachel VanLandingham has pointed out, no person has ever been prosecuted for violating the prohibition.

Kelly could argue that there are no safeguards in his case to ensure a fair hearing and that the case should move from military courts to federal courts. The federal judge assigned the case can then ponder whether siding with the administration’s claims is a step toward establishing a Cromwellian future and away from the Constitution’s protection of a republican form of government.

Of course, Congress could put a stop to any persecution of Kelly by informing the president that he is acting contrary to the Constitution and explaining that doing so is a high crime or misdemeanor.

During the Vietnam War, scholar Robert Sherrill said that “military justice is to justice what military music is to music.” In the past, military justice has been able to accomplish fair trials of military members, but it is dangerously open to influence by military leaders, all the way up to the commander in chief.

If there is to be an exercise in accountability for Kelly, it could more fairly be administered through a real constitutional analysis conducted by the independent federal judicial branch – or through a congressional intervention. Without either, we may as a nation find ourselves a step closer to a Cromwellian future.

The Conversation

Joshua Kastenberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. If tried by court-martial, senator accused of ‘seditious behavior’ would be deprived of several constitutional rights – https://theconversation.com/if-tried-by-court-martial-senator-accused-of-seditious-behavior-would-be-deprived-of-several-constitutional-rights-271990

The North Pole keeps moving – here’s how that affects Santa’s holiday travel and yours

Source: The Conversation – USA (2) – By Scott Brame, Research Assistant Professor of Earth Science, Clemson University

Could this be the next Blitzen? Feeding a reindeer in Lapland, Finland, north of the Arctic Circle. Roberto Moiola/Sysaworld/Moment via Getty Images

When Santa is done delivering presents on Christmas Eve, he must get back home to the North Pole, even if it’s snowing so hard that the reindeer can’t see the way.

He could use a compass, but then he has a challenge: He has to be able to find the right North Pole.

There are actually two North Poles – the geographic North Pole you see on maps and the magnetic North Pole that the compass relies on. They aren’t the same.

The two North Poles

The geographic North Pole, also called true north, is the point at one end of the Earth’s axis of rotation.

Try taking a tennis ball in your right hand, putting your thumb on the bottom and your middle finger on the top, and rotating the ball with the fingers of your left hand. The place where the thumb and middle finger of your right hand contact the tennis ball as it spins defines the axis of rotation. The axis extends from the south pole to the north pole as it passes through the center of the ball.

A compass with S, E, N, W and other markings
Compasses use a magnetized needle to align with Earth’s magnetic field. To find true north, a compass must be adjusted for the declination of its location, meaning the angle difference between true north and magnetic north for that spot.
Tim Reckmann/Wikimedia Commons, CC BY

Earth’s magnetic North Pole is different.

Over 1,000 years ago, explorers began using compasses, typically made with a floating cork or piece of wood with a magnetized needle in it, to find their way. The Earth has a magnetic field that acts like a giant magnet, and the compass needle aligns with it.

The magnetic North Pole is used by devices such as smartphones for navigation – and that pole moves around over time.

Why the magnetic north pole moves around

The movement of the magnetic North Pole is the result of the Earth having an active core. The inner core, starting about 3,200 miles below your feet, is solid and under such immense pressure that it cannot melt. But the outer core is molten, consisting of melted iron and nickel.

Heat from the inner core makes the molten iron and nickel in the outer core move around, much like soup in a pot on a hot stove. The movement of the iron-rich liquid induces a magnetic field that covers the entire Earth.

As the molten iron in the outer core moves around, the magnetic North Pole wanders.

Lines show how the magnetic pole has moved
The magnetic North Pole has wandered since the late 1500s, picking up speed in the past century. Some of the dated positions reflect observations from expeditions; the others are based on models, with data from NOAA. The map shows northern Canada’s islands. The edge of Greenland is visible at the far right.
Cavit/Wikimedia Commons, CC BY

For most of the past 600 years, the pole has been wandering around over northern Canada. It was moving relatively slowly, around 6 to 9 miles per year, until around 1990, when its speed increased dramatically, up to 34 miles per year.

It started moving in the general direction of the geographic North Pole about a century ago. Earth scientists cannot say exactly why other than that it reflects a change in flow within the outer core.

Getting Santa home

So, if Santa’s home is the geographic North Pole – which, incidentally, is in the ice-covered middle of the Arctic Ocean – how does he correct his compass bearing if the two North Poles are in different locations?

No matter what device he might be using – compass or smartphone – both rely on magnetic north as a reference to determine the direction he needs to move.

While modern GPS systems can tell you precisely where you are as you make your way to grandma’s house, they cannot accurately tell you which direction to go unless your device knows the direction of magnetic north.

Scientists work at a temporary research station near the Geographic North Pole in 1990.
Lorenz.King@geogr.uni-giessen.de/Wikimedia Commons, CC BY

If Santa is using an old-fashioned compass, he’ll need to adjust it for the difference between true north and magnetic north. To do that, he needs to know the declination at his location – the angle between true north and magnetic north – and make the correction to his compass. The National Oceanic and Atmospheric Administration has an online calculator that can help.
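The correction itself is simple arithmetic: a true bearing is the magnetic bearing plus the local declination, with east declination counted as positive and west as negative. Here is a minimal Python sketch; the declination values are made-up examples, not real lookups, which in practice would come from NOAA’s online calculator.

    # Convert a compass (magnetic) bearing to a true bearing using local declination.
    # Convention: east declination is positive, west declination is negative.
    # The declination values below are made-up examples, not real lookups.

    def true_bearing(magnetic_bearing_deg, declination_deg):
        return (magnetic_bearing_deg + declination_deg) % 360

    # Compass reads 350 degrees and local declination is 12 degrees east:
    print(true_bearing(350, 12))   # 2  (just east of true north)

    # Same compass reading with a 12-degree west declination instead:
    print(true_bearing(350, -12))  # 338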

If you are using a smartphone, your phone has a built-in magnetometer that does the work for you. It measures the Earth’s magnetic field at your location and then uses the World Magnetic Model to correct for precise navigation.

Whatever method Santa uses, he may be relying on magnetic north to find his way to your house and back home again. Or maybe the reindeer just know the way.

The Conversation

Scott Brame does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The North Pole keeps moving – here’s how that affects Santa’s holiday travel and yours – https://theconversation.com/the-north-pole-keeps-moving-heres-how-that-affects-santas-holiday-travel-and-yours-271488

How rogue nations are capitalizing on gaps in crypto regulation to finance weapons programs

Source: The Conversation – Global Perspectives – By Nolan Fahrenkopf, Research Fellow at Project on International Security, Commerce and Economic Statecraft, University at Albany, State University of New York

Two years after Hamas attacked Israel on Oct. 7, 2023, families of the victims filed suit against Binance, a major cryptocurrency platform that has been plagued by scandals.

In a Nov. 24, 2025, filing by representatives of more than 300 victims and family members, Binance and its former CEO – the recently pardoned Changpeng Zhao – were accused of willfully ignoring anti-money-laundering and so-called “know your customer” controls that require financial institutions to identify who is engaging in transactions.

The suit alleged that, in doing so, Binance and Zhao – who pleaded guilty to money laundering violations in 2023 – allowed U.S.-designated terrorist entities such as Hamas and Hezbollah to launder US$1 billion. Binance has declined to comment on the case but issued a statement saying it complies “fully with internationally recognized sanctions laws.”

The problem the Binance lawsuit touches upon goes beyond U.S.-designated terrorist groups.

As an expert in countering the proliferation of weapons technology, I believe the Binance-Hamas allegations could represent the tip of the iceberg in how cryptocurrency is being leveraged to undermine global security and, in some instances, U.S. national security.

Cryptocurrency is aiding countries such as North Korea, Iran and Russia, and various terror- and drug-related groups in funding and purchasing billions of dollars worth of technology for illicit weapons programs.

Though some enforcement actions continue, I believe the Trump administration’s embrace of cryptocurrency might compromise the U.S.’s ability to counter the illicit financing of military technology.

In fact, experts such as professors Yesha Yadav and Hilary J. Allen, Graham Steele, the anti-corruption advocacy group Transparency International and even the U.S. Treasury itself warn that this, along with other legislative loopholes, could put American national security at further risk.

A tool to evade sanctions

For the past 13 years, the Project on International Security, Commerce, and Economic Statecraft, where I serve as a research fellow, has conducted research and led industry and government outreach to help countries counter the proliferation of dangerous weapons technology, including the use of cryptocurrency in weapons fundraising and money laundering.

Over that time, we have seen an increase in cryptocurrency being used to launder and raise funds for weapons programs and as an innovative tool to evade sanctions.

Efforts by state actors in Iran, North Korea and Russia rely on enforcement gaps, loopholes and the nebulous nature of cryptocurrency to launder and raise money for purchasing weapons technology. For example, in 2024 it was estimated that around 50% of North Korea’s foreign currency came from crypto obtained through cyberattacks.

Two men in hoods sit in front of computer screens.
Modern-day bank robbers?
iStock/Getty Images Plus

A digital bank heist

In February 2025, North Korea stole over $1.5 billion worth of cryptocurrency from Bybit, a cryptocurrency exchange based in the United Arab Emirates. Such attacks can be thought of as a form of digital bank heist. Bybit was executing regular transfers of cryptocurrency from cold offline wallets – like a safe in your home – to “warm wallets” that are online but require human verification for transactions.

North Korean agents duped a developer working at a service used by Bybit into installing malware that granted them access to bypass the multifactor authentication. This allowed North Korea to reroute the crypto transfers to itself. The funds were moved to North Korean-controlled wallets and then washed repeatedly through mixers and multiple other cryptocurrencies and wallets, which serve to hide the origin and destination of the funds.

While some funds have been recovered, many have disappeared.

The FBI eventually linked the attack to the North Korean cyber group TraderTraitor, one of many intelligence and cyber units engaging in cyberattacks.

Lagging behind on security

Cryptocurrency is attractive because of the ease with which it can be acquired, transferred between accounts and converted among various digital and government-issued currencies, with few or no requirements to identify oneself.

And as countries such as Russia, Iran and North Korea have become constricted by international sanctions, they have turned to cryptocurrency to both raise funds and purchase materials for weapons programs.

Even stablecoins, promoted by the Trump administration as safer and backed by hard currency such as the U.S. dollar, suffer from extensive misuse linked to funding illicit weapons programs and other activities.

Traditional financial networks, while not immune from money laundering, have well-established safeguards to help prevent money being used to fund illicit weapons programs.

But recent analysis shows that despite enforcement efforts, the cryptocurrency industry continues to lag behind when it comes to enforcing anti-money-laundering safeguards. In at least some cases this is willful: some crypto firms attempt to circumvent controls out of profit motives, for ideological reasons, or because of policy disputes over whether platforms can be held accountable for the actions of individual users.

It isn’t only the raising of these funds by rogue nations and terrorist groups that poses a threat, though that is often what makes headlines. A more pressing concern is the ability to quietly launder funds between front companies. This helps actors avoid the scrutiny of traditional financial networks as they seek to move funds from other fundraising efforts or firms they use to purchase equipment and technology.

The sheer volume of crypto transactions, the large number of centralized and decentralized exchanges and brokers, and limited regulatory efforts have made crypto extremely useful for laundering funds for weapons programs.

This process benefits from a lack of safeguards and “know your customer” controls that banks are required to follow to prevent financial crimes. These controls should, I believe, and often do, apply to the entities large and small that help move, store or transfer cryptocurrency – known as virtual asset service providers, or VASPs. However, enforcement has proven difficult because there are an enormous number of VASPs spread across numerous jurisdictions, and those jurisdictions have fluctuating capacity or willingness to implement controls.

The cryptocurrency industry, though supposedly subject to many of these safeguards, often fails to implement the rules, or it evades detection due to its decentralized nature.

Digital funds, real risk

The rewards for rogue nations and organizations such as North Korea can be great.

Ever the savvy sanctions evader, North Korea has benefited the most from its early recognition of crypto’s promise. The reclusive country has established an extensive cyber program to evade sanctions that relies heavily on cryptocurrency. It is not known how much money North Korea has raised or laundered in total for its weapons program using crypto, but in the past 21 months it has stolen at least $2.8 billion in crypto.

Iran has also begun relying on cryptocurrency to aid in the sale of oil linked to weapons programs – both for itself and proxy forces such as the Houthis and Hezbollah. These efforts are fueled in part by Iran’s own crypto exchange, Nobitex.

Russia has been documented going beyond the use of crypto as a fundraising and laundering tool and has begun using its own crypto to purchase weapons material and technology that fuel its war against Ukraine.

A threat to national security

Despite these serious and escalating risks, the U.S. government is pulling back enforcement.

The controversial pardon of Binance founder Changpeng Zhao raised eyebrows for the signal it sent about the U.S. commitment to enforcing sanctions related to the cryptocurrency industry. Other actions, such as deregulating the banking industry’s use of crypto and shuttering the Department of Justice’s crypto fraud unit, have done serious damage to the U.S.’s ability to interdict and prevent efforts to use cryptocurrencies to fund weapons programs.

The U.S. has also committed to ending “regulation by prosecution” and has withdrawn numerous investigations related to failing to enforce regulations meant to prevent tactics used by entities such as North Korea. This includes abandoning an admittedly complicated legal case regarding sanctions against a “mixer” allegedly used by North Korea.

These actions, I believe, send the wrong message. At this very moment, cryptocurrency is being illicitly used to fund weapons programs that threaten American security. It’s a real problem that deserves to be taken seriously.

And while some enforcement actions do continue, failing to implement and enforce safeguards up front means that crypto will continue to be used to fund weapons programs. Cryptocurrency has legitimate uses, but ignoring the laundering and sanctions-evasion risks will damage American national interests and global security.

The Conversation

Nolan Fahrenkopf is a research fellow at the Center for Policy Research at the University at Albany, which receives grants related to nonproliferation from the U.S. Department of State and Department of Energy.

ref. How rogue nations are capitalizing on gaps in crypto regulation to finance weapons programs – https://theconversation.com/how-rogue-nations-are-capitalizing-on-gaps-in-crypto-regulation-to-finance-weapons-programs-269060

2 superpowers, 1 playbook: Why Chinese and US bureaucrats think and act alike

Source: The Conversation – Global Perspectives – By Daniel E. Esser, Associate Professor of International Studies, American University

An official walks past the U.S. and Chinese national flags on April 6, 2024. Pedro Pardo/AFP via Getty Images

The year 2025 has not been a great one for U.S.-Chinese relations. Tit-for-tat tariffs and the scramble over rare earth elements have dampened economic relations between the world’s two leading economies. Meanwhile, territorial disputes between China and American allies in the Indo-Pacific region have further deepened the intensifying military rivalry.

This rift has often been portrayed as a clash of opposing ideological systems: democracy versus autocracy; economic liberalism versus state-led growth; and individualism versus collectivism.

But such framing relies on a top-down look at the two countries premised on statements and claims of powerful leaders. What it obscures is that both superpowers are administered by the same kind of professionals: career bureaucrats.

We are an international team of researchers investigating bureaucratic preferences and behavior. Earlier this year, we hosted a two-day workshop with participants from China, the United States and other countries to compare bureaucratic agencies’ responses to global challenges.

Our research and that of others shows that, despite the ideological standoff at the leadership level, officials in China and the U.S. are shaped by comparable incentives and dynamics that lead them to act in surprisingly similar ways. In other words, when it comes to the women and men who carry out the actual work of government – from drafting regulation to enforcing compliance – China and the U.S. aren’t really that different.

Separated by politics, not practice

That’s not to suggest there aren’t differences in aspects of China’s and the U.S.’s bureaucratic base.

China’s system is more centralized, with a larger civil service of around 8 million employees as of 2024. The U.S. bureaucracy is more decentralized across federal, state and local levels and employs fewer bureaucrats, with around 3 million federal employees in 2024.

Still, comparative research on bureaucracies around the world shows that civil servants act similarly when confronted with complex problems, regardless of political system or policy field.

Whether they are municipal bureaucrats in Brazil, foreign aid officials in Germany, Norway and South Korea, or international civil servants at the United Nations, they all operate within the constraints of politically embedded organizations while pursuing their individual careers. In other words, they want to get ahead in their jobs while navigating constantly changing political winds.

Bureaucrats in the U.S. and China also navigate changing demands from their political leaders while seeking to gain expertise and progress in their careers.

Managing public expectations

Foreign aid, environmental management and pandemic governance in the U.S. and China provide telling examples of these parallels.

At first glance, the approaches of China and the U.S. to the use of foreign aid may appear to be complete opposites. China established the China International Development Cooperation Agency in 2018 and has since expanded and evolved its engagement abroad.

By contrast, the U.S. abolished USAID earlier in 2025, slashed its foreign aid budget, and moved remaining staff members into the State Department.

It would therefore seem that the U.S. and China are on opposing trajectories. Yet, the current moment obscures similarities between foreign aid bureaucrats in the two countries. Their tasks entail satisfying political objectives, overseeing taxpayer-funded projects abroad, and managing domestic public expectations.

The expertise required of these bureaucrats is to increase their country’s “soft power” while avoiding the appearance of wasting scarce funds abroad amid looming domestic needs.

With foreign aid denounced by the Trump administration as wasteful politics, officials in Washington are under unprecedented pressure to pursue financial diplomacy that recognizably serves U.S. interests while supporting foreign leaders whom the president considers allies. This agenda shift moves the U.S. closer to the Chinese foreign aid principle of seeking mutual benefits.

Meanwhile, Chinese aid officials are pivoting away from prioritizing large-scale infrastructure projects and toward a purported “small but beautiful projects” approach that centers on the well-being of beneficiaries. This pivot aligns their thinking with “softer” topics emblematic of U.S. foreign aid until 2024.

A sign saying USAID is seen behind glass.
Foreign aid practices in Washington and Beijing are converging.
Pete Kiehart for The Washington Post via Getty Images

The logic of blame avoidance

The case of bureaucratic responses to environmental pollution scandals is equally instructive. Again, one might expect bureaucrats in the U.S. and China, operating within different governance systems, to approach the problem differently.

In practice, however, bureaucrats in both countries are often motivated by an urge to avoid blame.

Rather than building on policy success stories, they tend to seek to deflect criticism for policy failures onto others. The underlying reason is so-called asymmetric payoffs: Success stories may lead to short-term public acclaim; policy failures jeopardize entire careers.

In China, the anti-air pollution measures introduced in Hebei province, which borders the capital Beijing, provide a prime example of the logic of blame avoidance. When the central government in 2017 urged provincial officials to reduce air pollution by banning coal heating, the officials’ overzealous implementation was motivated by a desire to shield themselves from potential blame from national leadership.

As a result, the needs of Hebei residents were ignored, with schoolchildren shivering in unheated classrooms. Rather than assuming the blame, both national and local officials shifted the focus onto middle-class Beijing residents, who were pilloried in the media for prioritizing clean air over the well-being of others.

Meanwhile in the U.S., the city of Flint, Michigan, had been reeling from decades of industrial decay and financial distress. The state government appointed an emergency manager who implemented cost-cutting measures, including switching the city’s water source from Lake Huron to the Flint River. This change resulted in lead contamination and widespread health impacts, escalating into a national scandal. As in Hebei, all parties – from state regulators to local officials and environmental agencies – blamed each other in an attempt to avoid responsibility.

Careerism as constraint

Parallel bureaucratic behaviors also became apparent during the COVID-19 pandemic. In China and the U.S. alike, public officials worked at the forefront of implementing public health guidelines. The Chinese response was said to benefit from an “authoritarian advantage,” allowing its authorities to impose drastic measures rapidly and comprehensively.

However, evidence-based policymaking was constrained by political preferences and bureaucratic careerism – the drive of officials to prioritize actions that help them get promoted.

It produced similar dynamics to those observed in the more decentralized U.S. setting. In both China and the U.S., bureaucrats were risk averse and anxious not to fall out with supervisors and political leaders.

A line of men in suits with masks on.
Chinese bureaucrats faced the same constraints as their U.S. counterparts during the COVID-19 pandemic.
Frayer/Getty Images

The Chinese approach resulted in a decrease in public trust, a phenomenon that has also been unfolding in the U.S.

And much like their American counterparts, Chinese bureaucrats initially scrambled to piece together information from a cacophony of political and expert voices. This indecision blunted their response to the viral outbreak in the decisive early days of the pandemic, even though it was eventually replaced by an official narrative emphasizing efficiency and success. In both systems, bureaucratic delays had detrimental consequences for public health.

An anchor of stability

Amid the heightened geopolitical tensions between Beijing and Washington, it is important to remember that all powers rely on capable administrations to implement political directives. Politics set the tone, but bureaucrats shape reality.

And the modus operandi of Chinese and American bureaucrats has remained strikingly stable over the years – driven primarily by incentives rather than ideology. This similarity is increasingly being reflected by converging leadership styles at the top of each political system.

U.S. President Donald Trump resembles Chinese President Xi Jinping in his campaign-style politics and the cult of personality that many political observers see developing around him.

There is a definite upside to similar bureaucratic behavior. It renders the two superpowers more predictable in periods of increasingly heated political rhetoric.

For national leaders’ proclamations to have any effect, large bureaucratic organizations need to translate political content into national and international action. Not only does this take time and resources, but erratic announcements are dissipated by bureaucratic routines.

And that provides an anchor of stability in volatile times.

The Conversation

While working for the German Institute of Development and Sustainability, Daniel E. Esser received funding from the Federal Ministry for Economic Cooperation and Development.

Heiner Janus works for the German Institute of Development and Sustainability (IDOS), which receives funding from the German Federal Ministry for Economic Cooperation and Development.

Mark Theisen works for the German Institute of Development and Sustainability (IDOS), which receives funding from the German Federal Ministry for Economic Cooperation and Development.

Tim Röthel works for the German Institute of Development and Sustainability (IDOS), which receives funding from the German Federal Ministry for Economic Cooperation and Development.

ref. 2 superpowers, 1 playbook: Why Chinese and US bureaucrats think and act alike – https://theconversation.com/2-superpowers-1-playbook-why-chinese-and-us-bureaucrats-think-and-act-alike-266305

The relevance of figures in question: the example of the cost of a day in hospital

Source: The Conversation – France (in French) – By Laurent Mériade, Professor of Management Science (professeur des universités, agrégé des facultés), IAE – CleRMa, Université Clermont Auvergne (UCA)

In newspapers, on television and in Parliament, figures are often wielded as irrefutable arguments. But what do figures actually tell us? Has the magical power once attributed to them, centuries ago, completely disappeared? Is a figure always beyond dispute? Such questions are all the more essential in the age of all-powerful AI.


Health spending, greenhouse gas emissions, public subsidies to companies, crime rates, the cost of debt, of tax fraud, of the pension system, of social benefits… Leaders use, and often abuse, figures to justify their decisions and their ways of acting – or, at times, of not acting.

Right now, as the 2026 budget bill is being debated in Parliament, we are witnessing daily battles of “figures” between parliamentarians. But can these figures really be trusted? And what meaning should we give them?

Figures with symbolic value

Mathematicians have identified three main meanings that numbers have carried since the Mesopotamian era (around 3000 BCE): economic, ideological and mystical.

Some 80% of the cuneiform clay tablets recovered from that era are administrative texts, primarily economic in nature, consisting mainly of numerical data: the dimensions of a field or a house, food rations, the number of soldiers in an army, or the volume of goods in stock.

Numbers also played an ideological role, notably according to their magnitude. The number 3,600, for instance, meant both “totality” and “countless”. Figures could also carry a mystical meaning: the ancient Mesopotamians associated certain numbers with deities. The number 15, for example, associated with Ishtar, goddess of love and war, was used or inserted to signal a number’s power.

One figure, three dimensions

In a way strikingly similar to the Mesopotamians, today’s managers also regard figures – their raw material – as at once the product of a calculation technique, of a philosophy and of a representation of reality. This three-dimensional meaning, which runs through the history of numbers, calls for a closer look at how these figures (or numbers) are produced, especially as we have few tools for assessing their scientific value.

In one of our recent articles, we show that the scientific value of these quantifications stems above all from their relevance, understood as a measure of the usefulness of an answer: an indication of how much that answer matters for an important objective. Relevant figures are those closely tied to a problem, such that ignoring them would change the problem itself.

Broadly speaking, research in management and economics distinguishes three forms of relevance – practical, theoretical and societal – often linked to Aristotle’s intellectual virtues:

  • practical relevance refers to the usefulness of a figure for a particular question or problem (Aristotle’s techne);

  • theoretical relevance refers to the intellectual knowledge produced by that figure (episteme);

  • societal or social relevance refers to the shared practical knowledge it produces (phronesis).

Studying the relevance of figures usually means asking the people who use them. It is difficult, however, to judge this relevance for the wider public, because users often communicate figures according to their own subjectivities and particular interests.

The invisible side of figures

Through an analysis of how hospital expenditure and cost figures are produced in France, we show that the practical, theoretical and societal relevance of figures is determined above all by their calculation methods. These methods are often the invisible part of the figures.

In France, for the year 2023, total annual hospital costs (public and private) were estimated at around €100 billion in the national health insurance spending target (Ondam) voted by the National Assembly, at €122 billion by the Direction de la recherche, des études, de l’évaluation et des statistiques (Drees), and at €248 billion by the Institut national de la statistique et des études économiques (Insee).

These hospital expenditures depend on the cost of a day in hospital. In France, since the activity-based pricing reform (T2A), hospital stays have been billed to patients or their health insurance funds per day of the patient’s presence in the hospital. According to the Agence technique de l’information sur l’hospitalisation (ATIH), the average cost of a day in a French public hospital is about €700, but it is €600 in medical wards, €950 in surgical wards and about €2,000 in intensive care.

Is there a fair price for a hospital day?

That same day in hospital is billed to patients or their health insurance funds at an average of €1,400 in a medical ward, €1,700 in surgery and €3,000 in intensive care. For the consulting firm EY, in 2025, the cost of a hospital day is different again: on average €873 in medicine and €365 in a follow-up care and rehabilitation unit (SSR).

As the member of parliament (and physician) Cyrille Isaac-Sibille put it at a recent meeting of the National Assembly’s social affairs committee: “No one knows how to evaluate health spending any more!”

The relevance of calculation methods before that of figures

In our study, we show that the reliability of the figures produced stems above all from the relevance of their calculation method. Working with a group of managers, heads of department, senior nurses and physicians at a French cancer centre who are responsible for analysing the figures produced in that institution, we identified four main criteria for assessing the relevance of the method used to calculate the cost of a hospital day: causality, traceability, exhaustiveness and representativeness (the CTER model).

  • Causality means understanding and explaining where the calculated figures come from, that is, the strength of the link between the calculated figure and the elements used to compute it. For the cost of a hospital day, this means checking the link between the figure and its main determinants: the number of health staff attending the patient over a day and the time they spend with them, the length of stay, the cost of a meal, the number and cost of medications administered or imaging procedures performed, and so on.

  • Traceability ensures that the information used in the calculation is reliable and can actually be collected. For the cost of a hospital day, this means being certain of obtaining reliable data on the exact length of a hospital stay, the cost of a meal taken during that stay, the exact cost of a medical imaging procedure, the number of medications taken and their costs, and so on.

  • Exhaustiveness is determined by the level of detail of the information used to calculate a figure. If the final figure is built from averages (for example, average lengths of stay for the cost of a hospital day), exhaustiveness is low. If specific elements are used (for example, the actual duration of each hospital stay), exhaustiveness is higher. The more specific and detailed the inputs, the greater the exhaustiveness; the more average or median values are used, the lower it is (see the sketch after this list).

Meeting these first three criteria ensures the precision of the figures produced. A figure can be considered imprecise if it fails to satisfy these three principles, or satisfies them only partially.

  • Finally, representativeness assesses the relationship between the calculated figure and the values embodied in its calculation inputs. It determines how many kinds of value (economic, social, societal, ethical, etc.) those inputs can capture and help manage. If the figure is calculated from purely economic elements (for example the cost of a meal, of an imaging procedure or of an hour of staff time), representativeness is low. If, on the other hand, the calculation also draws on technical elements (the exact length of stay, the kilograms of laundry used, the number of meals served, imaging procedures performed, staff mobilised or drugs prescribed) that tell us something about the social, societal or ethical values of a hospital day, then representativeness is higher and the calculation method is judged more relevant.
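
As a purely illustrative sketch, with invented numbers rather than data from the study, the short Python snippet below shows how the exhaustiveness criterion plays out in practice: the same “cost per hospital day” comes out differently depending on whether the method applies a generic average length of stay or uses the recorded duration of each individual stay.

    # Illustration only: invented figures, not data from the study.
    # Each stay: (actual length in days, total cost of the stay in euros).
    stays = [(2, 1190.0), (5, 3450.0), (1, 520.0)]

    total_cost = sum(cost for _, cost in stays)

    # Low exhaustiveness: ignore each stay's real duration and apply a generic
    # (assumed) average length of stay of 4 days to every stay.
    ASSUMED_AVERAGE_STAY_DAYS = 4
    cost_per_day_from_average = total_cost / (ASSUMED_AVERAGE_STAY_DAYS * len(stays))

    # Higher exhaustiveness: use the recorded duration of every individual stay.
    actual_total_days = sum(days for days, _ in stays)
    cost_per_day_from_detail = total_cost / actual_total_days

    print(round(cost_per_day_from_average))  # 430 euros per day
    print(round(cost_per_day_from_detail))   # 645 euros per day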

Developing verification tools

Faced with the flood of numerical data we receive every day, it seems essential to check the relevance of calculation methods before that of the figures they produce. The four criteria of calculation-method relevance identified by our work (CTER) can provide a basis for this justification. In most situations, it should be up to the user of the figures (the person communicating them) to justify one or more of these criteria (at a minimum, causality or traceability).

One could also imagine developing mechanisms or tools to verify these criteria of calculation-method relevance, along the lines of the fact-checking that specialised news services already carry out on published or quoted figures (AFP Factuel, France Info’s Le vrai ou faux, Arte’s Désintox). By integrating these criteria into their algorithms, artificial intelligence (AI) tools could also be a valuable aid in verifying calculation methods before the figures themselves.

The Conversation

Laurent Mériade has received funding from the Agence nationale de la recherche (ANR) and the European Union (ERDF) for his research, notably within the framework of the “Santé & Territoires” research chair at Université Clermont Auvergne, of which he is co-holder.

ref. La pertinence des chiffres en question : l’exemple du coût d’une journée d’hospitalisation – https://theconversation.com/la-pertinence-des-chiffres-en-question-lexemple-du-cout-dune-journee-dhospitalisation-266834