Source: The Conversation – Global Perspectives – By Sarah Hellewell, Senior Research Fellow, The Perron Institute for Neurological and Translational Science, and Research Fellow, Faculty of Health Sciences, Curtin University
A recent episode of The Kardashians shared some startling news about Kim Kardashian’s brain.
Discussing Kim’s recent brain scan, her doctor pointed out “holes” he said were related to “low activity”.
While this sounds incredibly sad and concerning, doctors and scientists have doubts about the technology used and its growing commercialisation.
I study brain health, including imaging the brain to look for early signs of disease.
Here’s what I think about this technology, whether it can really find holes in our brains, and whether we should be getting these scans to check our own.
The type and extent of Kim’s previously disclosed aneurysm is unclear. And there doesn’t seem to be a clear link between the aneurysm and this recent news.
But we do know the latest announcement came after a different type of imaging, known as single-photon emission computed tomography (SPECT).
This involves injecting radioactive chemicals into the blood and using a special camera which creates 3D images of organs, including the brain. This type of imaging was developed in 1976 and was first used in the brain in 1990.
SPECT scans can be used to track and measure blood flow in organs, and are used by doctors to diagnose and guide treatment for conditions affecting the brain, heart and bones.
While SPECT does have some clinical use under limited circumstances, there is not good evidence for SPECT scans outside these purposes.
Enter the world of celebrities and private clinics
The clinic featured in The Kardashians episode offers SPECT to its clients, including the Kardashian-Jenners.
SPECT images have mass appeal due to their aesthetically pleasing pastel colours, widespread promotion on social media, and claims these scans can be used to diagnose any number of conditions. These include stress (as in Kim’s case), Alzheimer’s, ADHD, brain injury, eating disorders, sleep problems, anger and even marital problems.
But the lack of scientific evidence to support the use of SPECT as a diagnostic tool for an individual – and for so many conditions – has led many doctors, scientists and former patients to criticise the work of such clinics as scientifically unfounded and “snake oil”.
Scans could potentially show changes in blood flow, though these may be common across conditions. Blood flow can also vary depending on the area of the brain examined, time of day, and even how well-rested a person is.
Areas in which blood flow is reduced have been described as “holes”, “dents” or “dings” on such SPECT scans.
In Kim’s case, this reduced blood flow was explained as “low activity” of the brain. Her doctor suggested the frontal lobes of her brain were not working as they should be, due to chronic stress.
But there is no scientific evidence to link these changes in blood flow to stress or functional outcomes. In fact, there is no single technique with scientific support to link changes in brain function to symptoms or outcomes for an individual.
These scans aren’t cheap
Doctors have several concerns about people without symptoms seeking SPECT as a diagnostic tool. First, people are injected with radioactive materials without a defined clinical reason.
Patients may also undergo treatment, or be recommended to take particular supplements, based on a diagnosis from SPECT that is scientifically unfounded.
And as SPECT scans are not recognised as a medical requirement, patients pay upwards of US$3,000 for a SPECT scan, with dietary supplements costing extra.
Do I need a scan like this?
While imaging tools such as SPECT and MRI are genuinely used to diagnose many conditions, there is no medical need for healthy people to have them.
Such scans for healthy people are often described as “opportunistic”, with a double meaning: they may possibly find something in a person with no symptoms, but at several thousand dollars a scan, they take advantage of people’s health anxieties and can lead to unnecessary use of the health-care system.
It can be tempting to follow in the footsteps of the stars and look for diagnoses via popularised and widely advertised scans. But it’s important to remember the best medical care is based on solid scientific evidence, provided by experts who use best-practice tools based on decades of research.
Sarah Hellewell receives funding from the Medical Research Future Fund for MRI-based research.
In public discourse, we spend a great deal of collective energy debating the accuracy of facts. We fact-check politicians, monitor social media for misinformation, and prioritise data-driven decision-making in our workplaces. This focus is vital; the distinction between truth and falsehood is the bedrock of a functioning society.
However, by focusing so intently on factual accuracy, we risk overlooking another fundamental distinction: the difference between a fact and an opinion.
A statement of fact is relatively easy to verify: it is either true or not. But a claim’s objectivity – is it a verifiable objective statement or a subjective expression of belief? – is far more complex. This is why our minds process and encode opinions in a fundamentally different way to facts.
The stakes of objectivity
Objectivity is not a mere linguistic nuance; it lies at the foundation of important policy and legal debates. For instance, in defamation lawsuits against US media figures like Tucker Carlson and Sidney Powell, legal defences have hinged on whether statements could “reasonably be interpreted as facts” or were merely “opinions.” Similarly, social media platforms have struggled with whether to fact-check posts labelled as opinions, a policy that has recently complicated efforts to combat climate change denialism.
The distinction matters because it frames how we disagree. When a claim is clearly an opinion – for instance, “the current administration is failing the working class” – one may agree or disagree, but we understand that there is room for disagreement and neither side is inherently right nor wrong.
However, a factual statement – “The official US poverty rate was 10.6% in 2024” – leaves little room for debate. It necessitates the existence of a source, and an objectively correct response.
As a result, beliefs about claim objectivity can stifle receptiveness to conflicting perspectives. This, in turn, fuels interpersonal conflict and drives political polarisation.
The information we value
Despite these high stakes, there has been limited research on the cognitive implications of claim objectivity. In a recent series of 13 pre-registered experiments involving 7,510 participants, conducted with UCLA Anderson’s Stephen Spiller and published in the Journal of Consumer Research, we investigated how claim objectivity affects a specific and crucial type of memory: source memory.
Our findings suggest that the human mind does not treat facts and opinions equally. When it comes to remembering who said what, objective facts are at a distinct disadvantage.
We can illustrate this with an example. A doctor makes the factual claim that “the measles vaccine prevented an estimated 56 million deaths between 2000 and 2021.” Another doctor might say something similar, but give an opinion instead of data: “I believe vaccination is an easy way to prevent unnecessary suffering.”
In our research, we tested this dynamic, using medical claims about a fictitious disease to control for prior knowledge. We found that people are significantly more likely to remember the original source of an opinion than that of a fact.
Crucially, this is not because opinions are simply “catchier” or easier to remember in general. Across all 13 of our experiments, we also measured “recognition memory” – the ability to remember that a statement was made at all. We found no consistent difference in recognition memory between facts and opinions. Participants remembered seeing factual claims and opinions equally well. However, they struggled to link the factual claims back to the correct source.
Why does this disconnect occur? Source memory is a form of associative memory. It relies on the brain’s ability to bind distinct components of an experience – what was said and who said it – into a coherent web of interconnected elements during the initial encoding of information.
We propose that the strength of this binding depends on one thing: what the claim tells us about its source.
Both facts and opinions provide information about the source, but they do so to different degrees. If a political candidate says “The United States Agency for International Development (USAID) was created by the Foreign Assistance Act of 1961,” we learn that they know about legislative history. But if that same candidate says, “I believe shuttering USAID has been a moral catastrophe for our nation and the world,” we learn far more about them. We learn about their values, their priorities, and their stance on America’s role in the world.
Because opinions generally provide more information about the speaker than facts do, our brains encode stronger links between sources and opinions than between sources and facts.
Studies in developmental psychology and neuroscience support this. Research has found that when encoding opinions compared to facts, there is greater activation in the brain regions involved in theory of mind – the ability to represent the thoughts and mental states of others.
When we hear an opinion, we are building a richer mental model of the speaker. This additional social information strengthens the associative links formed during encoding.
But what happens when opinions tell us nothing about a source? We tested this mechanism by presenting participants with book reviews. When participants believed the sources were the authors of the reviews, they remembered the sources of opinions far better than facts. However, when we told participants the sources were merely “re-tellers” reading randomly selected reviews, the source memory advantage for opinions disappeared, with opinions performing on par with facts.
We also tested source memory for facts that reveal something about a source, such as personal statements like “I was born in Virginia”. In these cases, source memory was just as accurate as it was for opinions like “chocolate ice cream tastes better than vanilla”. It was also more accurate than for general facts about the world, such as “Stockholm is the capital of Sweden”.
The visibility paradox
These findings present a major challenge for experts and leaders. Authorities are often advised to “stick to the facts” to maintain credibility, but our findings suggest that by presenting only facts, experts risk being forgotten as the sources of important information.
This may pose a problem for the credibility of information – in an age of rampant misinformation and growing polarisation, remembering who said what is increasingly important to avoid conflict and ensure accuracy.
For experts, the goal is often to anchor facts in reality. Our research suggests that sharing opinions can help people to accurately attribute relevant information to credible sources. By sharing what they believe about the data – rather than just the data itself – experts can provide the social cues that our brains need to more strongly bind the information to its source. While facts play an important role in the battle against misinformation, opinions may be just as critical – and they don’t go unnoticed.
This research was conducted in part thanks to the generous support of the UCLA Anderson Morrison Center for Marketing and Data Analytics.
Russian President Vladimir Putin attends a meeting with U.S. representatives Steve Witkoff and Jared Kushner (both not pictured) on Dec. 2, 2025. Alexander Kazakov/AFP via Getty Images
Start-and-stop negotiations for a deal to end the war in Ukraine have been injected with new intensity after U.S. President Donald Trump’s administration unveiled a 28-point peace proposal.
It is far from clear whether the latest flurry of diplomacy, which on Dec. 2, 2025, saw Trump’s envoys Steve Witkoff and Jared Kushner meet with Russian President Vladimir Putin, will force the warring parties any closer to a resolution in the grinding, nearly four-year-long conflict.
Yet even if negotiators can broker a welcome deal to stop the current fighting, they will immediately be faced with the challenges of sustaining and implementing it.
Our research as scholars focusing on peace monitoring and Ukraine suggests that one thing is key in managing mistrust between parties involved in any peace plan: multifaceted third-party monitoring.
The University of Notre Dame’s Peace Accords Matrix – the largest collection of implementation data on intrastate peace agreements – shows clear evidence that built-in safeguards, such as monitoring and verification by third parties, can increase success rates of peace agreements by more than 29% – meaning no resumption of fighting in the first five years of an accord.
Peace Accords Matrix team members regularly provide support to ongoing peace processes and in the design and implementation of agreements. We believe the program’s research could be applied to the challenges facing future peace in Ukraine.
Lessons from Colombia
The Peace Accords Matrix team’s work in Colombia is instructive on how an effective monitoring mechanism could be shaped in Ukraine.
Notre Dame’s Kroc Institute for International Peace Studies was tasked with carrying out on-the-ground and real-time monitoring of the 2016 peace deal between the Colombian government and the Revolutionary Armed Forces of Colombia, better known as FARC.
The Peace Accords Matrix’s 30-staffer team in Colombia has served as an independent body monitoring 578 peace accord commitments in areas such as rural reform, political participation and securing justice for victims. These staffers have, for example, traveled to reintegration camps to speak with former combatants, verifying United Nations data on the number of weapons surrendered and destroyed, among other accord targets.
Armed with quantitative and qualitative data, matrix members regularly meet with stakeholders – including victims, former guerrillas and politicians – to assess the status of implementation and to identify areas that need to be prioritized.
Over the past decade, the work has highlighted when and where there has been insufficient progress in boosting livelihoods and leadership opportunities for women and ethnic minorities.
This reporting has prompted new attention toward implementing these obligations laid out in the accord.
What does Ukraine need?
Our experience shows that when it comes to securing a lasting peace in Ukraine, it is imperative that a mandate for robust monitoring is spelled out clearly and realistically. To be effective, a monitoring body must have the independence to fully report and document violations.
That’s just the first step. Consider the failure of the Minsk agreements, signed in 2014 and 2015 to end fighting in the Donbas region of Ukraine between Ukrainian troops and Russian-backed separatists.
Those accords failed in part because the monitoring mission, led by the Organization for Security and Co-operation in Europe, lacked any defined mechanism to press for any action or change once violations – and there were many – had been established.
While the organization’s Special Monitoring Mission may have contributed to some temporary de-escalation in the Donbas conflict, ultimately Russia was able to exploit the weaknesses of the Minsk agreements and commit hostile acts, laying the groundwork for the current war.
Research suggests that monitoring works best when it extends beyond physical ceasefire lines to encompass the cyber domain, too. Moscow has carried out extensive cyberattacks on Ukrainian infrastructure throughout the conflict. Such aggression could continue invisibly despite a ceasefire, allowing one party to pre-position capabilities for future attacks or to conduct espionage without triggering traditional monitoring mechanisms.
Unlike conventional military activities, such cyber hostilities are inherently difficult to monitor and verify. A comprehensive monitoring arrangement will need to grapple with these threats, requiring carefully designed information-sharing protocols with the few international actors capable of monitoring the online activities of both sides.
A bigger tent
A key element of ensuring a durable peace is building trust between conflict parties over time. With the right mandate and authority, monitoring bodies can create space and structure for follow-on dialogue as implementation obstacles emerge. Durable peace processes require fine-tuning to adapt to changing political realities on the ground.
Involving public stakeholders in the implementation of a peace agreement is another key element, our research shows. Third-party monitoring can provide the framework for soliciting outside perspectives and participation.
Over the past decade, Ukrainian nongovernmental organizations have steadily developed expertise in monitoring and accountability in areas including elections, procurement, humanitarian operations and potential war crime activity.
Building on this experience by involving broader segments of civil society – including the country’s highly trusted faith-based communities – would strengthen the legitimacy of third-party monitoring in the eyes of the domestic public and assuage uneasy acceptance of any peace accord.
Ready on Day 1
While the United Nations and other multinational bodies are well placed to support some core monitoring tasks, those planning for peace now should, we believe, consider the benefits of involving a wider range of third-party actors. Indeed, many Ukrainians are skeptical that institutions of which Russia is a member can carry out their work with the needed independence.
As we have seen with the Peace Accords Matrix’s experience, the involvement of an independent research institution can open up new possibilities for monitoring.
And ideally, monitoring missions should be ready to go from Day 1, or as close to that as possible.
Comparative research has shown that the speed at which a monitoring mission starts its work can affect its relevance. Yet, many monitoring bodies are wracked by delays due to lack of planning, support and resources.
The current 28-point peace plan being mulled by Russia and Ukraine makes only a brief mention of monitoring, by a “Peace Council, headed by President Donald J. Trump.”
But our experience shows that prioritizing third-party monitoring and delving into the details of how it would be carried out – even as ceasefire negotiations are ongoing – can help ensure the success of a future deal.
It would serve as a vital signal to Ukrainians that, unlike the aftermath of the Minsk agreements, this time the international community will continue to engage and act to ensure their country’s peace.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
In 2024, the port of Le Havre (Seine-Maritime), managed by the public authority Haropa Port, was the seventh-largest European port by total annual traffic in tonnes. Alexandre Prevot/Shutterstock
A study conducted from 2018 to 2022 on the ports of Le Havre, Marseille and Dunkerque benchmarks their performance against the 49 main European ports. While their indicators lag behind those of their competitors – the share of vessels forced to wait at anchor, average call duration, average handling rate – their progress is very real.
Despite investment and undeniable geographic advantages, French ports underperform relative to their major European competitors.
That is what our study, conducted from 2018 to 2022, shows. Using Automatic Identification System (AIS) data and S&P Global’s Port Performance platform, we compared the performance of the three main French container ports – Haropa-Le Havre (Seine-Maritime), Marseille-Fos (Bouches-du-Rhône) and Dunkerque (Nord) – with that of the 49 European ports that handled more than 500,000 twenty-foot equivalent units (TEU), the standard measure of container shipping, in 2022.
Growth in Le Havre, Marseille and Dunkerque
The three French ports studied present contrasting profiles.
Haropa-Le Havre, the country’s leading container port, handled 3.01 million twenty-foot equivalent units (TEU) in 2022 – growth of 5% compared with 2018. Marseille-Fos, in second place, reached 1.5 million TEU, up 8%. Dunkerque, long focused on bulk cargo, achieved a genuine breakthrough with 745,000 TEU, a 76% increase over the period.
This progress is largely explained by stronger services operated by CMA CGM, the leading French-flag carrier. In Dunkerque, the company accounts for nearly three quarters of container-ship calls. The northern port now receives very large vessels of more than 18,000 TEU, illustrating its move up the ranks on the main shipping routes.
Little congestion for vessels’ maritime access
French ports stand out positively for their maritime accessibility.
In 2022, on average in Europe, a third of vessels had to wait at anchor before berthing. In Marseille, the figure was just 4%. In Le Havre, 24% of vessels waited in the roadstead – a figure that is rising but still close to the regional average – and in Dunkerque, 16%. Dunkerque also stands out for its average waiting time of just 8.2 hours, against a European average of 25 hours.
These figures reflect maritime fluidity and low congestion. That is a strength at a time when many European ports regularly suffer bottlenecks, and when waiting means lost money for operators of scheduled container services.
Share of vessels waiting at anchor (spending at least 15 minutes in an anchorage zone) in 2022. S&P Global Market Intelligence, provided by the author
A rate of 35 containers per hour
The flip side appears at the quay. Average call duration remains higher in France than in the rest of Europe.
In Le Havre, it exceeds the average of its Channel and North Sea competitors by six hours. In Marseille and Dunkerque, durations are closer to those observed in their respective regions, but do not beat them.
One of the main factors behind this lag is cargo-handling productivity. On average, French ports move fewer containers per hour and per crane. In Marseille-Fos, productivity is just 35 containers per hour, against a European average of more than 55. This weakness stems from the limited number of cranes deployed: 1.5 per call on average, compared with 2.2 in Europe.
Average number of containers handled per vessel per hour in 2022. S&P Global Market Intelligence, provided by the author
We can also note that French ports still play a limited role in transshipment, the routing of containers via an intermediate destination. In Le Havre (50.8 containers per hour) and Dunkerque (45.6), the situation is better but remains below that of Channel and North Sea ports, where the average stands at 61.8.
Low handling rates
Beyond speed, the ratio between the number of containers actually handled and vessels’ theoretical capacity reveals another structural weakness. The average handling rate stands at 44.4% in Europe, but only 28% in Le Havre, 30% in Dunkerque and 33% in Marseille. The three French ports have among the lowest average rates of all the ports studied.
In other words, for the same container ship with a theoretical capacity of 20,000 TEU, 5,600 TEU would be loaded and unloaded in Le Havre, against 11,000 TEU in Antwerp. This gap illustrates the intermediate place French ports occupy in the itineraries of the major shipping lines: their calls are shorter and lighter.
In the current context, and in the strategies of the major carriers, some calls matter less than others. When shipping networks are reorganized, carriers may choose to drop certain ports from their services and concentrate on a smaller number of calls – a practice known as blank sailing.
As maritime services refocus on fewer calls, ports with low handling rates are the most likely to see a call skipped. It should nevertheless be stressed that, for Marseille and Dunkerque, handling rates have risen steadily since 2018, by 7 and 13 percentage points respectively.
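The arithmetic behind this comparison is simple: containers handled per call is roughly the vessel's theoretical capacity times the port's average handling rate. A minimal sketch, using the rates reported in the study (the "Antwerp" rate of 55% is inferred from the 11,000 TEU figure, not stated directly):

```python
# Toy calculation: handled volume = theoretical capacity x average handling rate.
# Rates below are the 2022 averages cited in the study; Antwerp's is inferred.
def handled_teu(capacity_teu: int, handling_rate: float) -> int:
    """Containers loaded/unloaded during a call, given the port's average rate."""
    return round(capacity_teu * handling_rate)

VESSEL_CAPACITY = 20_000  # theoretical capacity in TEU

rates = {
    "Le Havre": 0.28,
    "Dunkerque": 0.30,
    "Marseille": 0.33,
    "European average": 0.444,
    "Antwerp (inferred)": 0.55,
}

for port, rate in rates.items():
    print(f"{port}: {handled_teu(VESSEL_CAPACITY, rate):,} TEU per call")
```

For the same 20,000 TEU vessel, the gap between Le Havre (5,600 TEU) and Antwerp (11,000 TEU) falls straight out of the rates.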
Haropa as a single authority
Port competitiveness is not just a matter of technical performance. It also depends on the quality of inland connections, terminal operations, labor relations and the ability to offer integrated logistics services.
French ports still suffer from institutional fragmentation and sometimes complex governance, where their northern competitors have streamlined and industrialized their processes. The transformation of Haropa into a single public authority in 2021 is a step in the right direction, but its effects will only be felt over the long term.
Toward a national strategy of reconquest?
Despite their connections to the world’s major shipping routes, French ports perform less well than their main European competitors. Faced with the dominance of Antwerp, Rotterdam and Hamburg, there is still room for maneuver. The challenge is not to compete on size, but on efficiency and reliability. Reducing call durations, improving terminal productivity, and streamlining rail and inland-waterway connections are the priorities.
French ports have real assets: large land reserves, a strategic geographic position between the Mediterranean and the Channel, and generally good maritime accessibility. At the same time, several European ports are experiencing episodes of congestion, while France benefits from the presence of a global carrier, CMA CGM, and from MSC’s major investment in Le Havre.
To turn these assets into a lasting competitive advantage, modernization and logistics-coordination efforts must continue. In a world where every hour of transit counts, port competitiveness is becoming a key indicator of economic sovereignty.
Ronan Kerbiriou has received funding from the SEFACIL foundation.
Arnaud Serry does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no relevant affiliations beyond his research institution.
The rapid expansion of artificial intelligence and cloud services has led to a massive demand for computing power. The surge has strained data infrastructure, which requires lots of electricity to operate. A single, medium-sized data center here on Earth can consume enough electricity to power about 16,500 homes, with even larger facilities using as much as a small city.
Over the past few years, tech leaders have increasingly advocated for space-based AI infrastructure as a way to address the power requirements of data centers.
In space, sunshine – which solar panels can convert into electricity – is abundant and reliable. On Nov. 4, 2025, Google unveiled Project Suncatcher, a bold proposal to launch an 81-satellite constellation into low Earth orbit. It plans to use the constellation to harvest sunlight to power the next generation of AI data centers in space. So, instead of beaming power back to Earth, the constellation would beam data back to Earth.
For example, if you asked a chatbot how to bake sourdough bread, instead of firing up a data center in Virginia to craft a response, your query would be beamed up to the constellation in space, processed by chips running purely on solar energy, and the recipe sent back down to your device. Doing so would mean leaving the substantial heat generated behind in the cold vacuum of space.
As a technology entrepreneur, I applaud Google’s ambitious plan. But as a space scientist, I predict that the company will soon have to reckon with a growing problem: space debris.
The mathematics of disaster
Space debris – the collection of defunct human-made objects in Earth’s orbit – is already affecting space agencies, companies and astronauts. This debris includes large pieces, such as spent rocket stages and dead satellites, as well as tiny flecks of paint and other fragments from broken-up satellites.
Space debris travels at hypersonic speeds of approximately 17,500 miles per hour (28,000 km/h) in low Earth orbit. At this speed, colliding with a piece of debris the size of a blueberry would feel like being hit by a falling anvil.
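The "falling anvil" comparison can be checked with the standard kinetic-energy formula. The masses below are my own rough assumptions (a blueberry-sized fragment of about 2 grams, an anvil of about 50 kilograms), not figures from the article:

```python
# Back-of-the-envelope check of the "blueberry vs. falling anvil" comparison.
# Assumed masses: ~2 g fragment, ~50 kg anvil (illustrative, not from the article).
def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy E = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s**2

speed = 28_000 * 1000 / 3600              # 28,000 km/h in m/s (~7,778 m/s)
fragment_ke = kinetic_energy_j(0.002, speed)  # energy of a ~2 g fragment

# Height from which a 50 kg anvil must fall to carry the same energy: E = m*g*h
g = 9.81
equivalent_drop_m = fragment_ke / (50 * g)

print(f"Fragment energy: {fragment_ke / 1000:.0f} kJ")
print(f"Equivalent to a 50 kg anvil dropped from ~{equivalent_drop_m:.0f} m")
```

A two-gram fleck at orbital speed carries roughly 60 kilojoules, on the order of an anvil dropped from a 40-story building.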
Satellite breakups and anti-satellite tests have created an alarming amount of debris, a crisis now exacerbated by the rapid expansion of commercial constellations such as SpaceX’s Starlink. The Starlink network has more than 7,500 satellites, which provide global high-speed internet.
The U.S. Space Force actively tracks over 40,000 objects larger than a softball using ground-based radar and optical telescopes. However, this number represents less than 1% of the lethal objects in orbit. The majority are too small for these telescopes to reliably identify and track.
In November 2025, three Chinese astronauts aboard the Tiangong space station were forced to delay their return to Earth because their capsule had been struck by a piece of space debris. Back in 2018, a similar incident on the International Space Station challenged relations between the United States and Russia, as Russian media speculated that a NASA astronaut may have deliberately sabotaged the station.
The orbital shell Google’s project targets – a Sun-synchronous orbit approximately 400 miles (650 kilometers) above Earth – is a prime location for uninterrupted solar energy. At this orbit, the spacecraft’s solar arrays will always be in direct sunshine, where they can generate electricity to power the onboard AI payload. But for this reason, Sun-synchronous orbit is also the single most congested highway in low Earth orbit, and objects in this orbit are the most likely to collide with other satellites or debris.
As new objects arrive and existing objects break apart, low Earth orbit could approach Kessler syndrome. In this theory, once the number of objects in low Earth orbit exceeds a critical threshold, collisions between objects generate a cascade of new debris. Eventually, this cascade of collisions could render certain orbits entirely unusable.
Implications for Project Suncatcher
Project Suncatcher proposes a cluster of satellites carrying large solar panels. They would fly in a formation with a radius of just one kilometer, with each node spaced less than 200 meters from its neighbors. To put that in perspective, imagine a racetrack roughly the size of the Daytona International Speedway, where 81 cars race at 17,500 miles per hour – while separated by gaps about the distance you need to safely brake on the highway.
This ultradense formation is necessary for the satellites to transmit data to each other. The constellation splits complex AI workloads across all its 81 units, enabling them to “think” and process data simultaneously as a single, massive, distributed brain. Google is partnering with a space company to launch two prototype satellites by early 2027 to validate the hardware.
But in the vacuum of space, flying in formation is a constant battle against physics. While the atmosphere in low Earth orbit is incredibly thin, it is not empty. Sparse air particles create orbital drag on satellites – this force pushes against the spacecraft, slowing it down and forcing it to drop in altitude. Satellites with large surface areas have more issues with drag, as they can act like a sail catching the wind.
To add to this complexity, streams of particles and magnetic fields from the Sun – known as space weather – can cause the density of air particles in low Earth orbit to fluctuate in unpredictable ways. These fluctuations directly affect orbital drag.
When satellites are spaced less than 200 meters apart, the margin for error evaporates. A single impact could not only destroy one satellite but send it blasting into its neighbors, triggering a cascade that could wipe out the entire cluster and randomly scatter millions of new pieces of debris into an orbit that is already a minefield.
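A back-of-the-envelope calculation shows just how thin that margin is. The sketch below computes the circular orbital speed at the 650-kilometer altitude the article cites, then the time a piece of debris on a worst-case head-on orbit would take to cross the 200-meter gap between satellites. The altitude and spacing come from the article; the head-on closing geometry is an illustrative worst-case assumption.

```python
import math

# Earth's standard gravitational parameter (km^3/s^2) and mean radius (km)
MU_EARTH = 398_600.4418
R_EARTH = 6_371.0

def circular_orbit_speed(altitude_km: float) -> float:
    """Speed (km/s) of a circular orbit at the given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_km))

v = circular_orbit_speed(650.0)        # roughly 7.5 km/s
mph = v * 3600 / 1.609344              # roughly 17,000 mph

# Worst case: debris in an opposing orbit closes at about twice orbital speed.
closing_speed = 2 * v                  # km/s
gap_km = 0.2                           # 200-meter spacing between satellites
crossing_time_ms = gap_km / closing_speed * 1000
```

In this worst case, the entire 200-meter buffer is crossed in on the order of a hundredth of a second – far too fast for any ground-based, human-in-the-loop response.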
The importance of active avoidance
To prevent crashes and cascades, satellite companies could adopt a “leave no trace” standard, which means designing satellites that do not fragment, release debris or endanger their neighbors, and that can be safely removed from orbit. For a constellation as dense and intricate as Suncatcher, meeting this standard might require equipping the satellites with “reflexes” that autonomously detect and dance through a debris field. Suncatcher’s current design doesn’t include these active avoidance capabilities.
In the first six months of 2025 alone, SpaceX’s Starlink constellation performed a staggering 144,404 collision-avoidance maneuvers to dodge debris and other spacecraft. Suncatcher, for its part, would likely encounter debris larger than a grain of sand every five seconds.
Today’s object-tracking infrastructure is generally limited to debris larger than a softball, leaving millions of smaller debris pieces effectively invisible to satellite operators. Future constellations will need an onboard detection system that can actively spot these smaller threats and maneuver the satellite autonomously in real time.
Equipping Suncatcher with active collision avoidance capabilities would be an engineering feat. Because of the tight spacing, the constellation would need to respond as a single entity. Satellites would need to reposition in concert, similar to a synchronized flock of birds. Each satellite would need to react to the slightest shift of its neighbor.
Detecting space debris in orbit can help prevent collisions.
Paying rent for the orbit
Technological solutions, however, can go only so far. In September 2022, the Federal Communications Commission created a rule requiring satellite operators to remove their spacecraft from orbit within five years of the mission’s completion. This typically involves a controlled de-orbit maneuver. Operators must now reserve enough fuel to fire the thrusters at the end of the mission to lower the satellite’s altitude, until atmospheric drag takes over and the spacecraft burns up in the atmosphere.
However, the rule does not address the debris already in space, nor any future debris, from accidents or mishaps. To tackle these issues, some policymakers have proposed a use-tax for space debris removal.
A use-tax or orbital-use fee would charge satellite operators a levy based on the orbital stress their constellation imposes, much like larger or heavier vehicles paying greater fees to use public roads. These funds would finance active debris removal missions, which capture and remove the most dangerous pieces of junk.
Avoiding collisions is a temporary technical fix, not a long-term solution to the space debris problem. As some companies look to space as a new home for data centers, and others continue to send satellite constellations into orbit, new policies and active debris removal programs can help keep low Earth orbit open for business.
Mojtaba Akhavan-Tafti receives funding from NASA and Intelligence Advanced Research Projects Activity (IARPA). He teaches space systems engineering and mission design and management at the University of Michigan’s College of Engineering.
If you use a mobile phone with location services turned on, it is likely that data about where you live and work, where you shop for groceries, where you go to church and see your doctor, and where you traveled to over the holidays is up for sale. And U.S. Immigration and Customs Enforcement is one of the customers.
The U.S. government doesn’t need to collect data about people’s locations itself, because your mobile phone is already doing it. While location data is sometimes collected as part of a mobile phone app’s intended use, like for navigation or to get a weather forecast, more often locations are collected invisibly in the background.
I am a privacy researcher who studies how people understand and make decisions about data that is collected about them, and I research new ways to help consumers get back some control over their privacy. Unfortunately, once you give an app or webpage permission to collect location data, you no longer have control over how the data is used and shared, including who the data is shared with or sold to.
Why mobile phones collect location data
Mobile phones collect location data for two reasons: as a by-product of their normal operation, and because they are required to by law.
Mobile phones are constantly scanning for nearby cell towers so that when someone wants to place a call or send a text, their phone is already connected to the closest tower, which makes the connection faster.
To maintain quality of service, mobile phones often connect with multiple cell towers at the same time. The range of the radio signal from a cell tower can be thought of as a big bubble with the cell tower in the center. The location of a mobile phone can be calculated via triangulation based on the intersection of the bubbles surrounding each of the cell towers the phone is connected to.
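The intersection-of-bubbles idea can be made concrete with a small sketch. Given three towers at known positions and the estimated distance from the phone to each, subtracting one circle equation from the other two leaves a simple linear system that pins down the phone’s location. The tower positions and ranges below are made-up illustrative numbers, not real network data.

```python
def trilaterate(towers, distances):
    """Locate a point from three known (x, y) tower positions and ranges.

    Subtracting one circle equation from the other two turns the
    quadratic system into two linear equations in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    # Linear system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Phone actually at (3, 4); each range is the distance to one tower.
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
distances = [5.0, (49 + 16) ** 0.5, (9 + 36) ** 0.5]
x, y = trilaterate(towers, distances)   # recovers approximately (3.0, 4.0)
```

Real networks refine this further with signal-strength and timing measurements, but the geometric principle is the same.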
In addition to cell tower triangulation, since 2001 mobile phone carriers have been required by law to provide latitude and longitude information for phones that have been used to call 911. This supports faster response times from emergency responders.
The ‘Today’ show gives an overview of how your phone reveals where you go and what you do.
How location data ends up being shared
When people allow webpages and apps to access location data generated by their mobile phones, the software maker can share that data widely without asking for further permission. Sometimes the apps themselves do this directly through partnerships between the maker and data brokers.
More often, apps and webpages that contain advertisements share location data via a process called “real-time bidding,” which determines which ads are shown. This process involves third parties hired by advertisers, which place automated bids on the ad space to ensure that ads are shown to people who match the profile of interests the advertisers are looking for.
To identify the target audience for the ads, software embedded in the app or webpage shares information collected about the user, including their location, with the third parties placing the bids. These third parties are middlemen that can keep the data and do whatever they want with it, including selling the data to location data brokers, whether or not their bid wins the auction for the ad space.
The invisible collection, sale and repackaging of location data is a problem because location data is extremely sensitive and cannot be made anonymous. The two most common locations a person visits are their home and where they work. From this information alone, it is trivially easy to determine a person’s identity and match it with the other location data about them that these companies have acquired.
Also, most people don’t realize that the location data they allowed apps and services to collect for one purpose, like navigation or weather, can reveal sensitive personal information about them that they may not want to be sold to a location data broker. For example, a research study I published about fitness tracker data found that even though people use location data to track their route while exercising, they didn’t think about how that data could be used to infer their home address.
This lack of awareness means that people can’t be expected to anticipate that data collected through the normal use of their mobile phones might be available to, for example, U.S. Immigration and Customs Enforcement.
More restrictions on how mobile phone carriers and apps are allowed to collect and share location data – and on how the government is allowed to obtain and use location information about people – could help protect your privacy. To date, Federal Trade Commission efforts to curb carriers’ data sales have had mixed results in federal court, and only a few states are attempting to pass legislation to tackle the problem.
Emilee Rader receives funding from the National Science Foundation.
Source: The Conversation – USA – By Melinda Haas, Assistant Professor of International Affairs, University of Pittsburgh
A new Trump administration policy threatens to undermine foundational American commitments to free speech and association.D-Keine, Getty Images
A largely overlooked directive issued by the Trump administration marks a major shift in U.S. counterterrorism policy, one that threatens bedrock free speech rights enshrined in the Bill of Rights.
National Security Presidential Memorandum/NSPM-7, issued on Sept. 25, 2025, is a presidential directive that for the first time appears to authorize preemptive law enforcement measures against Americans based not on whether they are planning to commit violence but for their political or ideological beliefs.
This structure allows the president to direct law enforcement and national security agencies, with little opportunity for congressional oversight.
This seventh national security memorandum from the Trump White House pushes the limits of presidential authority by targeting individuals and groups as potential domestic terrorists based on their beliefs rather than their actions.
The memorandum represents a profound shift in U.S. counterterrorism policy, one that risks undermining foundational American commitments to free speech and association.
The presidential memorandum signed by Donald Trump identifies ‘anti-Christian,’ ‘anti-capitalism’ or ‘anti-American’ views as potential indicators that a group or person will commit domestic terrorism. Andrew Harnik/Getty Images
Presidential national security powers
Executive memoranda instruct government officials and agencies by delegating tasks and directing agency actions.
They can, for example, order a department to prepare reports, implement new policies, coordinate interagency efforts or review existing programs to align with the administration’s priorities.
Unlike executive orders, they are not required to be published. When these memoranda, like NSPM-7, relate to national security and military and foreign policy, they are called national security directives, although the specific name of these directives changes with each administration.
Many of these directives are classified, and they may not be declassified until years or decades after the end of the administration that issued them, if at all.
The stated purpose of NSPM-7 is to counter domestic terrorism and organized political violence, focusing mainly on perceived threats from the political left. The memorandum identifies “anti-Christian,” “anti-capitalism” or “anti-American” views as potential indicators that a group or person will commit domestic terrorism.
The memorandum claims that political violence originates with “anti-fascist” groups that hold the following views: “support for the overthrow of the United States Government; extremism on migration, race, and gender; and hostility towards those who hold traditional American views on family, religion, and morality.”
The strategy laid out in NSPM-7 includes preemptive measures to disrupt groups before they engage in violent political acts. For example, multiagency task forces are empowered to investigate potential federal crimes related to radicalization, as well as the funders of those potential crimes.
‘Domestic terrorist organizations’
The memorandum directs the Department of Justice to focus the resources of the FBI’s approximately 200 Joint Terrorism Task Forces on investigating “acts of recruiting or radicalizing persons” for the purpose of “political violence, terrorism, or conspiracy against rights; and the violent deprivation of any citizen’s rights.”
NSPM-7 also allows the attorney general to propose groups for designation as “domestic terrorist organizations.” That includes groups that engage in the following behaviors: “organized doxing campaigns, swatting, rioting, looting, trespass, assault, destruction of property, threats of violence, and civil disorder.”
Existing laws allow the secretary of state to designate groups as “foreign terrorist organizations” that are then subject to financial sanctions.
Would protesters like these at a Washington, D.C., ‘No Kings’ demonstration be seen as potential domestic terrorists by the Trump administration? Jose Luis Magana/AP
Defining terrorism
NSPM-7 marks a major conceptual shift in U.S. counterterrorism policy. Its focus on domestic terrorism significantly departs from historical approaches that primarily targeted foreign threats.
Beginning with Ronald Reagan’s presidency, the U.S. government treated terrorism as a global menace to democratic institutions, emphasizing protection of citizens and allies abroad. By moving away from a traditional law enforcement framework and recasting terrorism as an act of war, the Reagan administration situated the issue within the broader realm of Cold War geopolitics and military advantage.
After the 9/11 attacks, the Bush administration fused counterterrorism with national defense. The Bush-initiated global war on terrorism expanded the concept of who constituted a threat to include countries that harbored or aided terrorist organizations.
The standard for targeting individual terrorists was not focused on ideology but rather on tactical considerations, such as the feasibility of capture and the continued threat to U.S. interests.
For example, the lethal drone strike on al-Qaida propagandist Anwar al-Awlaki in 2011 was justified on the basis that he was actively involved in plotting attacks and remained unreachable for capture.
During the first Trump presidency, executive orders were used to change counterterrorism policy, most notably through several iterations of a “travel ban” that attempted to restrict immigration from terror-prone countries such as Iraq, Iran, Somalia, Syria and Yemen.
The Biden administration redirected attention toward preventing catastrophic threats, especially from weapons of mass destruction in the hands of groups or individuals outside of governments, such as terrorist organizations.
First Amendment rights at risk
There is no single official definition of terrorism in U.S. law.
Instead, laws use different definitions based on their purpose, whether criminal law or laws relating to intelligence collection or civil liability.
Definitions in all those areas typically focus on identifying violent or dangerous acts done with the intent to intimidate or coerce civilians or influence government policy.
But more than redefining terrorism, NSPM-7 reorients the machinery of national security toward the policing of belief.
The First Amendment generally prevents the government from punishing people for unpopular opinions. It also protects people’s ability to associate with one another to advance public and private ideas in pursuit of political, economic, religious or cultural goals.
The directive’s emphasis on ideological orientations – “anti-Christianity,” “anti-capitalism” and “anti-American” views – as indicators of domestic terrorism potentially jeopardizes First Amendment rights.
Thirty-one members of Congress sent a letter to Trump expressing “serious concerns” about NSPM-7, warning that it poses “serious constitutional, statutory and civil liberties risks, especially if used to target political dissent, protest or ideological speech.”
As the ACLU warns, any definition of terrorism that includes ideological components risks criminalizing people or groups based on belief rather than based on violence or other criminal conduct.
Congress has declined to create a domestic complement to the foreign terrorist designation in large part because of the potential for impinging on First Amendment–protected association and speech.
But I fear that chilling speech may be the point.
Silencing dissent
NSPM-7 does not authorize new actions within the existing legal and institutional framework for counterterrorism, and it does not criminalize previously legal conduct. Its force lies instead in the chilling effect of threatened investigation and designation.
Law professor Steve Vladeck frames this chill as “obeying in advance,” in which organizations self-censor rather than risk investigation, prosecution or defending against the “domestic terrorist” label.
Although left-wing violence has risen in the past decade, empirical evidence shows that it remains at very low absolute levels, well below the historical toll of right-wing or jihadist violence.
In fact, most domestic terrorists in the U.S. are politically on the right, and right-wing attacks account for the vast majority of fatalities from domestic terrorism.
Yet NSPM-7 focuses disproportionately on left-wing ideologies. NSPM-7 departs from prior U.S. counterterrorism frameworks by prioritizing the suppression of ideologically motivated dissent, even in the absence of concrete evidence of violent intent.
Melinda Haas does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Leonel Lagos, Associate Professor of Construction Management; Director of Research, Applied Research Center, Florida International University
The discussion around powering AI data centers, in particular, has involved a type of nuclear power plant called a small modular reactor. According to the International Atomic Energy Agency, there are about 70 different designs being researched and developed around the world, including reactors that could one day serve small or remote communities, military applications and even ships at sea or spacecraft.
Small modular reactors, at the top, are in between the other two sizes of reactor, and serve different sizes of communities, at the bottom. A. Vargas/IAEA
The basics
There are three general sizes of nuclear reactors – only one of which, conventional nuclear plants, has been built commercially. Conventional plants are built in permanent locations on large plots of land around reactor cores as tall as 30 feet (10 meters). They usually generate more than 1,000 megawatts of power, enough to supply 700,000 to 1 million homes.
The other types are still being researched and are considerably smaller. Microreactors have cores that are small enough to fit into the trailer of a semitruck. They can be installed on land about as big as a football field and generate less than 20 megawatts.
Small modular reactors are in between. Their cores are roughly 9 feet (3 meters) across and 18 feet (6 meters) tall. The entire operation occupies an area of about 50 acres and can generate up to 300 megawatts of electricity.
Because of the reactors’ size, they can be built in factories from various components and then be shipped by truck, rail or water to the location where they are assembled.
All the different types of small modular reactors generate heat the same way: by splitting heavy atoms and transferring the heat to a circulating material – such as water, liquid metal or molten salt – which in turn heats water into steam that drives a turbine.
They are also designed with safety features to reduce the risk and severity of accidents that might release radiation or radioactive material into the surroundings. For instance, passive systems and those based on fundamental principles like gravity can terminate nuclear reactions before they reach levels where explosions or leaks might occur. These reactors also produce less heat and have far smaller amounts of nuclear material than traditional large reactors, which can reduce the radioactivity risk as well.
The green-wrapped core module of a small nuclear reactor is readied for transfer to a ship. Liu Xuan/VCG via Getty Images
Construction and deployment
Small modular reactors are well-suited to provide electricity in remote places or regions without a large power grid – places where large nuclear power plants are impractical.
Their compact design and flexible placements make them ideal for small geographical regions or industrial installations, like desalination plants, or in countries just starting to develop nuclear power.
There remains a range of technical challenges before small modular reactors can actually be built and put into use. These include relatively straightforward questions like how many people are needed to operate each reactor, and more complex decisions about refinements to safety regulations, both in the U.S. and internationally. It’s also not yet clear what the best way is to manage the transport of radioactive materials, especially for reactors that use coolants other than water, which could produce new forms of radioactive waste.
Understanding the fuel
Larger nuclear power plants use fuel that is about 5% uranium-235, the element that splits in a nuclear reaction, releasing heat. But many small modular reactor designs use a different fuel, with between 5% and 20% uranium-235.
This different fuel, called “high-assay low-enriched uranium,” lets the reactors generate more electricity from a smaller volume of fuel material. And though it contains significantly more uranium than standard nuclear fuel, it remains far below the concentration of 90% uranium-235 that is used in nuclear weapons.
The more concentrated fuel also allows reactors to run longer between refueling and reduces the amount of radioactive waste that remains after the fuel is spent.
An engineer at a French research center works on equipment as part of efforts to develop a small modular reactor. Nicolas Tucat/AFP via Getty Images
All nuclear plants require safe handling of the fuel and the resulting waste. There is no permanent place to store nuclear waste in the U.S. Most nuclear waste is stored on the land around the reactors where it was generated.
Small modular reactors can also supply heat as well as electricity. That combination is useful for desalination plants, which use both to convert seawater into fresh water for drinking and irrigation. Remote mining operations also often need both heat and power to operate equipment, keep living quarters habitable and process minerals.
Small modular reactors may also be useful on university campuses. A microreactor planned for the University of Illinois will provide power and steam to campus buildings, while also teaching students how to operate nuclear plants, and offer research and demonstration opportunities for more reactor improvements in the future.
Leonel Lagos does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
American colleges and universities are often nonprofits, but they often operate in many of the same ways that businesses do. tc397/iStock/Getty Images Plus
While there’s been a lot of public attention to the federal government’s financial pressure on universities, universities have been experiencing financial pressure from other sources.
Understanding that is key for applicants and parents to understand their bargaining position when choosing whether and where to pursue a college degree.
As scholars of public administration and economics and former university administrators, we think parents and college applicants need to understand this economic landscape to make smart choices about making such a major investment. Here are four key things to know.
1. Universities are an industry
Most American private colleges and universities are nonprofits, but they still care about revenue. These schools aren’t responsible to shareholders, but they may respond to pressure from alumni, students, employees, donors, boards, the federal government and, if the schools are public universities, state governments.
And like businesses, nonprofit colleges and universities need money. As a result, despite what you might think, most colleges are not particularly selective. Though they don’t advertise that fact, hundreds of schools will take any student who meets minimal academic requirements and can pay tuition.
The added cost of teaching additional students is minimal when there are empty seats, so admitting more students can lead to an increase in revenue for most schools.
This is important because colleges’ costs – largely staff salaries and building maintenance – are hard to cut and are mostly fixed. Those costs must be spread across fewer students when there are unfilled seats.
As the number of people who go to college declines, colleges need to respond to skepticism about the value of degrees – but change is difficult.
Becoming a smaller school is challenging. If students show less interest in foreign language study and more interest in data science classes, the school cannot have a German language professor suddenly teach data science.
As a result, colleges can become stuck with faculty who teach courses students don’t want to take.
Unlike business leaders, who may be rewarded for fixing a failing company by laying off workers, university leaders who eliminate faculty positions become unpopular among their peers. This can reduce their chance to advance their careers at their current universities or switch to a new school.
Colleges enrolled 8.4% fewer students in 2024 than when attendance peaked at 21 million in 2010. As a result, schools must compete harder to attract students.
One way is to offer a better price, meaning lower tuition. Like most elite schools, Harvard has a listed price of about $60,000 for tuition alone in one academic year – and nearly $87,000 when food, housing and other services are included. Few students actually pay that amount, though the exact percentage getting a discount is not public information.
The average net price a Harvard student paid in 2023-24 was $17,900, as colleges offered financial aid, straight-up discounts or scholarships.
Most schools engage in this sort of price discrimination, the term economists use to describe charging different prices to different customers based on their willingness to pay. In some ways, this is much like airlines selling seats on the same flights at different prices.
Another way to raise revenue is to enroll more international students, who typically pay full tuition. Most schools have not pursued this strategy of expanding foreign enrollment as aggressively as Columbia University, where international students approach 40% of the student body.
Rising competition from universities in Australia, Canada and the United Kingdom, combined with stricter U.S. visa policies and geopolitical tensions with China, have led to rapid declines in Chinese students enrolling at American schools.
The number of Chinese undergraduate and graduate students attending U.S. colleges and universities has dropped from 317,299 in 2019 to 265,919 in the 2024-25 school year.
This change has increased the financial strain on American colleges and universities, many of which have grown accustomed to having large numbers of international students who pay their own way.
Chinese graduates throw their hats into the sky at their graduation from Columbia University in May 2016. Xinhua/Li Muzi via Getty Images
Just 22% of Americans said in 2024 that a college degree is worth the cost, if a student has to borrow money to get it.
The University of Texas system – made up of nine universities and four medical schools – shares information on the average income of graduates for every degree program after graduation.
In the case of the University of Texas at Arlington, the average salary for a drama, theater arts and stagecraft major is $14,933 one year after graduation. This amount goes up to $39,608 10 years after graduation, resulting in a negative $324,210 return on the price of college over that first decade.
Of course, some degrees pay off. A University of Texas at Arlington graduate with a degree in civil engineering earns an average of $67,920 one year after college and $105,377 10 years after graduation, demonstrating a positive return on investment of $1.15 million.
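The earnings gap behind those return-on-investment figures can be roughed out from the salaries alone. The sketch below assumes linear salary growth between the reported year-1 and year-10 figures; the article’s published return numbers also fold in the cost of the degree and, presumably, forgone alternative earnings, which are not modeled here.

```python
def decade_earnings(year1_salary: float, year10_salary: float) -> float:
    """Total pay over ten years, assuming salary grows linearly
    from the year-1 figure to the year-10 figure."""
    step = (year10_salary - year1_salary) / 9
    return sum(year1_salary + step * year for year in range(10))

# Salary figures reported for University of Texas at Arlington graduates
drama = decade_earnings(14_933, 39_608)    # about $272,705 over the decade
civil = decade_earnings(67_920, 105_377)   # about $866,485 over the decade
earnings_gap = civil - drama               # roughly $594,000
```

Even before tuition and living costs enter the picture, the two majors diverge by more than half a million dollars over the first decade after graduation.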
We believe that universities and colleges should reform to address the next generation’s uncertainty about higher education.
College applicants should be asking hard questions. What is the data on graduates’ earnings compared to the cost of their program? Where are graduates employed?
If more people treated buying a college degree with the same care they use to buy their first home – an equivalent investment – colleges and universities would feel pressure to become more transparent for students and parents. They would also become more aligned with the rapidly evolving demands of the workplace.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Imagine going from having a book club with your co-workers to seeing them only on a Signal chat where every member has to be vetted – and the main conversation topic is when you might lose your job.
That’s what it was like for workers at one federal agency earlier this year.
“I’d never seen anything like the sort of organization that happened during the RIFs (layoffs, or reductions in force) in supporting each other with news, information and job resources,” said Anthony, a federal worker who’d been with the agency for almost a decade before his position was eliminated. He asked that his real name and other identifying details not be published, out of fear of retaliation.
But something else happened alongside the cuts: Federal workers began building support networks online – connecting with colleagues inside their agencies and with strangers outside them.
I’m an anthropologist, which means I study human nature and human diversity, and I’m an expert in how people cooperate to manage risk. Watching federal workers use social media to provide mutual support offered a rare real-time view of the process. To deepen my understanding, I interviewed several federal workers who work in different parts of government.
They told me that in the past, federal workers haven’t always interacted with their co-workers outside of work, much less connected across federal agencies. But thanks to online platforms, that’s changing.
In 2025, federal workers built social networks like the ones we study in my lab. When experiencing widespread shocks – things such as droughts or mass job loss – humans past and present have relied on relationships that stretch beyond the individuals affected. Often that means getting support from people at a distance, and it can also mean reaching out across groups.
When just a few people reach across groups, social scientists call these connectors “brokers.” They often move information across groups. As a user of LinkedIn and Bluesky, I have observed that the brokers are often federal workers in positions of power, or those who have been recently RIFed and thus have less to lose – because with visibility comes the risk of retribution. These brokers share information on where to find unemployment benefits or how to sign petitions calling for scientific independence.
There are even more connections spanning distance and agencies when workers can remain anonymous. Platforms such as Reddit and Bluesky are places where workers feel safer to speak freely. There, workers can share information and also frustration, little wins, and some laughs.
What’s more, as my lab has shown, these long-distance relationships can also bolster collective action – working toward a shared goal, often across space and across groups, such as federal agencies.
For example, Julia Simon – who agreed to let me use her name but asked that other identifying details be withheld – has a friend who works at the same federal agency as she does but lives in a different part of the country. This year, her friend suggested that Julia join the Federal Unionists Network. Members from across agencies provide mutual support and work together toward change in their union – the American Federation of Government Employees – and beyond.
“I’ve felt that within my own local and district I’ve been seen as too radical so my ideas tend to get shot down or ignored,” Simon told me in an interview. “But finding a group of other AFGE activists who have similar views and goals has been validating.”
Hunkering down among trusted others
That said, when people fear surveillance and possible retaliation, they may not reach out to long-distance connections. Instead, networks often shift toward tight-knit clusters, reducing risk of exposure and increasing trust.
Whenever workers faced RIFs, a visit from DOGE, or the government shutdown, Signal chat activity would spike, workers told me.
“The content was largely ‘I heard from our division director that the RIF notices will go out Friday’ or ‘If you’re comfortable with it, here’s a Zoom workshop on how to manage your emotions during layoffs,’” Anthony said. At their peak, he told me, these chats had hundreds of participants.
Mason, furloughed from a different agency during the shutdown, gave another example. “Today, there are about a dozen messages among federal employees who are trying to provide information and support to each other about applying for unemployment benefits,” he said in an interview.
Though these Signal groups are tight-knit, long-distance relationships are still a source of information – bringing news from spouses and friends at other agencies and content from Reddit, LinkedIn and Bluesky.
For some workers, the most important benefit of these Signal chats is the sense of community they provide.
“These group chats and communities sprung up because we were being terrorized and we only had each other for support,” Anthony said. “I remember seeing some wild statistic early on that said a lot of folks support DOGE’s mission – from our side, it was like, ‘Guess we’re on our own.’ I can’t tell you how many times I heard, ‘Nobody is coming to save us’ – so that’s why we needed these groups.”
These workers’ experiences also offer important lessons on how to build the resilient networks that sustain us as people.
First, even when broader trust is low, trust can emerge in highly connected clusters that pool information and take action. As Anthony highlighted, forming these clusters can also give individuals a sense of community.
Second, connections spanning groups and distance open doors for transmitting information and, as Julia experienced, for engaging in collective action. Long-distance relationships can also help you access things that can be hard to find, such as information about what’s next, support with food or loans, and even new job opportunities.
These resilient networks are a reminder that online platforms have a silver lining. Many news stories focus on how social media use can negatively affect people’s mental health or social relationships. What federal workers highlight, however, is that the effect of online platforms on your well-being can depend on how you use them.
LinkedIn, Reddit, Signal and other platforms can allow you to create and sustain networks that might be impossible to have in person, either because trust is low or simply because you’re busy. Online platforms let people build tight-knit clusters or maintain more long-distance relationships, across greater distances, than ever before.
So whether you’re looking for like-minded others, people who can help you face something you’ve never faced before, or a sense of community when you’ve lost so much, online platforms remain an important tool to help us find each other.
Anne Pisor receives funding from the National Science Foundation, National Institutes of Health, and Penn State Social Science Research Institute. She has a long-distance social relationship with a source for this article.