Sudan’s protesters built networks to fight a tyrant – today they save lives in a war

Source: The Conversation – Africa – By Lovise Aalen, Research Professor, Political Science, Chr. Michelsen Institute

Sudan has a long history of civilian-led resistance, with young people playing a key role. For example, informal neighbourhood networks established in 2013 to survive repression under three decades of authoritarian rule have since transformed into vibrant support systems.

These groups helped mobilise mass protests in 2018. They have provided a lifeline for communities in the ongoing civil war, which started in 2023.

During the mass protests, youth-led networks organised political sit-ins and demonstrations against the Islamist regime of Omar al-Bashir. They were ultimately successful in overthrowing a 30-year dictatorship.

We are researchers in the fields of anthropology and political science, studying youth mobilisation in authoritarian states. In a recent paper, we studied the emergence and role of Sudan’s neighbourhood committees and informal networks. These became the backbone of protests.

We found that young people built grassroots networks through engagement in different forms of voluntarism and charity. They built resistance structures under the repressive environment of the Islamist regime. In 2013, these developed into neighbourhood committees organising resistance underground.

And since the outbreak of war in April 2023, Emergency Response Rooms, which are community-led networks, have been providing crucial humanitarian relief.

African youth mobilisation is often seen as an outcome of tension between an urban underclass and a repressive state. We argue that in Sudan, a collaboration between different classes, including the middle class, has been key in the fight against autocratic governance.

We found that the committees enabled protests and played a vital role in organising emergency responses in times of crisis.

Building the resistance

Under the repressive policies of the al-Bashir regime, political activities were not allowed in public spaces. Opposition was heavily suppressed.

Despite this, young people found innovative ways to create political spaces. Neighbourhood committees became sites of resistance, emerging as a critical infrastructure for grassroots mobilisation.

The committees represent a unique blend of political and practical action. They serve a dual functionality – mobilising for change while addressing immediate community needs. This underscores the potential of informal, decentralised networks to drive both political and social transformation.




Read more:
Sudan’s people toppled a dictator – despite the war they’re still working to bring about democratic change


The committees were initially formed during the 2013 anti-austerity protests as underground neighbourhood cells – informal, hyper-local networks of politically engaged youth.

Over time, they evolved into organised structures that facilitated protests and provided essential services and emergency responses during crises. In the 2018 uprising, they coordinated logistics and provided real-time updates through social media.

The committees also supported a sit-in at the military headquarters in April 2019, which became a focal point of the uprising. The sit-in provided a vibrant community space where youth experienced a sense of political togetherness. It featured art exhibitions, public debates and cultural performances, creating a shared vision of a better Sudan.

The civil war

The war between the army and a paramilitary group, the Rapid Support Forces, has put more than 30 million people – about two-thirds of the population – in need of humanitarian aid. This has created one of the world’s worst humanitarian crises. Conflict and blockades have meant international efforts to send aid haven’t always been possible.

In the transitional period after al-Bashir’s exit, and then during the 2023 war, the committees transformed into emergency response rooms providing critical services, such as healthcare, food and water. These rooms were run by the same youth networks that had led the protests, drawing on their pre-war experiences of grassroots mobilisation and humanitarian aid.

Amid a devastating civil war, they carry on the idea of political togetherness. Bonds of trust, necessity and solidarity established years ago have transcended ethnic or class divisions. They have created civilian resilience against state repression.

Lessons in resilience

The committees’ ability to adapt to new challenges underscores the importance of grassroots networks in both political and humanitarian contexts.

The concept of political togetherness, as seen in Sudan, reveals how temporary alliances across class, gender and ethnic divides can create a cohesive force for change.




Read more:
How a Sudanese university kept learning alive during war


This has implications for understanding youth movements globally, particularly where formal political spaces are inaccessible or untrustworthy.

The adaptability of Sudan’s neighbourhood committees illustrates the resilience of grassroots networks. By stepping into the void left by state failure, these committees provide essential services and also reinforce their legitimacy within their communities.

This suggests that such networks can serve as a foundation for future governance models, especially in post-conflict reconstruction efforts.




Read more:
Sudan’s civilians urgently need protection: the options for international peacekeeping


However, our study also reveals risks associated with informal and flexible structures.

The lack of formal governance mechanisms within these committees leaves them vulnerable to co-optation, fragmentation and the erosion of trust over time.

Without proper institutional support, the cohesion and effectiveness of these networks may wane, especially when crises or transitions are prolonged.

What next?

In a post-war Sudan, both the Sudanese government and the international community should aim to preserve the emergency response rooms’ autonomy and grassroots nature, while providing resources and institutional support to enhance their capacity for community service and crisis response.

Activists within Sudan and similar contexts should continue to build on the model of political togetherness. This means fostering inclusive alliances that transcend traditional divides.

By prioritising both political mobilisation and community service, these grassroots networks can maintain the momentum for change while addressing immediate needs.




Read more:
Omar al-Bashir brutalised Sudan – how his 30-year legacy is playing out today


The humanitarian efforts the Sudanese people have devised build on previous experience of civic engagement. The current call for a civilian government – also a demand of the protesters during the 2018 uprisings – is rooted in political togetherness and in the long history of civilian governance practices at the grassroots level.

The Conversation

Lovise Aalen receives funding from the Research Council of Norway (grant no. ES620468) and the Sudan-Norway Academic Collaboration (SNAC). She is a member of the board of the Rafto Foundation for Human Rights.

Mai Azzam receives funding from the Bayreuth International Graduate School for African Studies (BIGSAS) and from the Gender and Diversity Office (GDO) of the Africa Multiple Cluster of Excellence, University of Bayreuth, Germany.

ref. Sudan’s protesters built networks to fight a tyrant – today they save lives in a war – https://theconversation.com/sudans-protesters-built-networks-to-fight-a-tyrant-today-they-save-lives-in-a-war-270176

Is anyone really misled by the term ‘veggie burger’? Our research suggests consumers are savvy

Source: The Conversation – UK – By Friederike Döbbe, Assistant Professor (Lecturer) in Business & Society, School of Management, University of Bath

Avelina/Shutterstock

The European parliament recently backed changes to the rules around the labelling and marketing of plant-based meat alternatives. New definitions specify that words like “burger”, “sausage” or “steak” refer exclusively to animal protein. To get to the meat of the matter, this may mean that Europeans’ favourite soy-based patty can no longer be called a burger.

The vote took place amid a long-running European debate over the designation of plant-based alternatives to animal protein and the associated “linguistic gymnastics”.

A previous proposal to prohibit comparisons between dairy and plant-based foods was rejected. But the EU did decide to reserve the term “dairy” for products derived from animal milk. As a result, companies must now refer to their products as “almond drink” or “plant-based slices”, for example.

In the case of meat, the labelling propositions are part of a broader set of amendments to EU agricultural and food market regulations. These are supposed to strengthen the position of farmers in the food supply chain. Farmers in Europe have long expressed concerns that plant-based substitutes could threaten traditional farming practices.

But what about the role of the consumer in debates over how meat and its plant-based substitutes should be labelled?

Before the vote, MEPs had discussed a perceived lack of transparency for consumers. It was suggested that terms such as “veggie burger” or “tofu steak” obscure the distinction between meat and plant-based or lab-grown alternatives. These ambiguities, it was argued, could confuse or mislead consumers.

While member states must still negotiate the amendments detailing the labelling changes, the consequences could be significant. Some retailers, like supermarket chain Lidl, are working to increase sales of plant-based foods. This aligns with what the science says about sustainable diets.

After initial growth in the market for plant-based alternatives, sales have plateaued. Many producers fear they may now also face additional costs associated with rebranding and relabelling their products.

In response, a coalition of food producers and retailers have argued that avoiding familiar terms like “steak” or “burger” could actually create more confusion among consumers.

But how misled are consumers really?

Despite concerns on both sides of the debate, our research shows a different reality – one in which many consumers are much more knowledgeable than they are made out to be.

We studied how people reacted to a marketing campaign by Swedish chicken producer Kronfågel. The campaign implied that climate action is the consumer’s responsibility, suggesting that shoppers should switch from beef to chicken to “do something simple for the climate”.

As part of the campaign, an emissions calculation underscored this shift, even leaving the impression it could offset air travel – based on just one meal. While the campaign drew from standardised carbon footprinting, the calculation left more questions than answers.

The ‘eat chicken, fly more’ message didn’t land well with consumers.

Through analysis of comments on social media and complaints to the Swedish consumer protection agency, we studied how people reacted to the campaign – and found that they rejected it vehemently. They took issue with it for a range of reasons, including the corporation’s use of climate science, and debated what constitutes sustainable food consumption and what does not.

The various sources of disagreement illustrate the polarisation over food consumption and production. Many people were critical of the suggestion to “offset” flying by eating chicken, while others questioned the appropriateness of a chicken producer, with suppliers in the agricultural sector, demonising beef production.

The company responded by saying that its intention was to “help consumers navigate” the difficulties of lowering their consumption-related carbon footprint. It also said that it took consumer criticisms about the campaign being misleading to heart and would learn from them. We know of no investigation into the campaign, but we sense a shift towards softer messaging more broadly as companies’ fears of greenwashing accusations increase.




Read more:
Quick climate dictionary: what actually is a carbon footprint?


Our research shows that many consumers are well informed about their choices, actively scrutinising food products for their health effects, climate impact and production processes. And in debating the advantages and disadvantages of meat and plant-based alternatives, we found that they would openly disagree with each other.

These discussions reveal that there are many relevant perspectives and values involved in choosing the “best” diet – and consumption choices are deeply tied to identity, emotion and culture. In light of this complexity, our research serves as a warning for businesses and other organisations, including political parties, to approach climate messaging with care and to make sure their claims are credible.

So what then to make of the labelling debate? It is of course important to safeguard consumers from harmful or deceptive marketing. However, research has illustrated how powerful people and organisations may stereotype citizens – for instance, as “responsible”, “misled” or “duped” consumers – often to serve their own commercial or political interests.

Politicians, food producers and retailers should be cautious about claims that consumers cannot differentiate meat from plant-based alternatives. Shoppers are often much more switched on than some in the EU debate suggest.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Is anyone really misled by the term ‘veggie burger’? Our research suggests consumers are savvy – https://theconversation.com/is-anyone-really-misled-by-the-term-veggie-burger-our-research-suggests-consumers-are-savvy-270635

What ancient Athens teaches us about debate – and dissent – in the social media age

Source: The Conversation – Global Perspectives – By Sara Kells, Director of Program Management at IE Digital Learning and Adjunct Professor of Humanities, IE University

Monument to Socrates and Confucius in Athens, Greece. Collection Maykova/Shutterstock

In ancient Athens, the agora was a public forum where citizens could gather to deliberate, disagree and decide together. It was governed by deep-rooted social principles that ensured lively, inclusive, healthy debate.

Today, our public squares have moved online to the digital feeds and forums of social media. These spaces mostly lack communal rules and codes – instead, algorithms decide which voices rise above the clamour, and which are buried beneath it.

The optimistic idea of the internet being a radically democratic space feels like a distant memory. Our conversations are now shaped by opaque systems designed to maximise engagement, not understanding. Algorithmic popularity, not accuracy or fairness, determines reach.

This has created a paradox. We enjoy unprecedented freedom to speak, yet our speech is constrained by forces beyond our control. Loud voices dominate. Nuanced voices fade. Outrage travels faster than reflection. In this landscape, equal participation is all but unattainable, and honest speech can carry a very genuine risk.

Somewhere between the stone steps of Athens and the screens of today, we have lost something essential to our democratic life and dialogue: the balance between equality of voice and the courage to speak the truth, even when it is dangerous. Two ancient Athenian ideals of free speech, isegoria and parrhesia, can help us find it again.

Ancient ideas that still guide us

In Athens, isegoria referred to the right to speak, but it did not stop at mere entitlement or access. It signalled a shared responsibility, a commitment to fairness, and the idea that public life should not be governed by the powerful alone.

The term parrhesia can be defined as boldness or freedom in speaking. Again, there is nuance; parrhesia is not reckless candour, but ethical courage. It referred to the duty to speak truthfully, even when that truth provoked discomfort or danger.

These ideals were not abstract principles. They were civic practices, learned and reinforced through participation. Athenians understood that democratic speech was both a right and a responsibility, and that the quality of public life depended on the character of its citizens.

The digital sphere has changed the context but not the importance of these virtues. Access alone is insufficient. Without norms that support equality of voice and encourage truth-telling, free speech becomes vulnerable to distortion, intimidation and manipulation.

The emergence of AI-generated content intensifies these pressures. Citizens must now navigate not only human voices, but also machine-produced ones that blur the boundaries of credibility and intent.




Read more:
The ancient Greeks invented democracy – and warned us how it could go horribly wrong


When being heard becomes a privilege

On contemporary platforms, visibility is distributed unequally and often unpredictably. Algorithms tend to amplify ideas that trigger strong emotions, regardless of their value. Communities that already face marginalisation can find themselves unheard, while those who thrive on provocation can dominate the conversation.

On the internet, isegoria is challenged in a new way. Few people are formally excluded from it, but many are structurally invisible. The right to speak remains, but the opportunity to be heard is uneven.

At the same time, parrhesia becomes more precarious. Speaking with honesty, especially about contested issues, may expose individuals to harassment, misrepresentation or reputational harm. The cost of courage has increased, while the incentives to remain silent, or to retreat into echo chambers, have grown.




Read more:
Social media can cause stress in real life – our ‘digital thermometer’ helps track it


Building citizens, not audiences

The Athenians understood that democratic virtues do not emerge on their own. Isegoria and parrhesia were sustained through habits learned over time: listening as a civic duty, speaking as a shared responsibility, and recognising that public life depended on the character of its participants. In our era, the closest equivalent is civic education, the space where citizens practise the dispositions that democratic speech requires.

By making classrooms into small-scale agoras, students can learn to inhabit the ethical tension between equality of voice and integrity in speech. Activities that invite shared dialogue, equitable turn-taking and attention to quieter voices help them experience isegoria, not as an abstract right but as a lived practice of fairness.

In practice, this means holding discussions and debates where students have to verify information, articulate and justify arguments, revise their views publicly, or engage respectfully with opposing arguments. These skills all cultivate the intellectual courage associated with parrhesia.

Importantly, these experiences do not prescribe what students should believe. Instead, they rehearse the habits that make belief accountable to others: the discipline of listening, the willingness to offer reasons, and the readiness to refine a position in light of new understanding. Such practices restore a sense that democratic participation is not merely expressive, but relational and built through shared effort.

What civic education ultimately offers is practice. It creates miniature agoras where students rehearse the skills they need as citizens: speaking clearly, listening generously, questioning assumptions and engaging with those who think differently.

These habits counter the pressures of the digital world. They slow down conversation in spaces designed for speed. They introduce reflection into environments engineered for reaction. They remind us that democratic discourse is not a performance, but a shared responsibility.




Read more:
‘Historical time’ helps students truly understand the complexity of the past – and how they fit into it


Returning to the spirit of the agora

The challenge of our era is not only technological but educational. No algorithm can teach responsibility, courage or fairness. These are qualities formed through experience, reflection and practice. Athenians understood this intuitively, because their democracy relied on ordinary citizens learning how to speak as equals and with integrity.

We face the same challenge today. If we want digital public squares that support democratic life, we must prepare citizens who know how to inhabit them wisely. Civic education is not optional enrichment – it is the training ground for the habits that sustain freedom.

The agora may have changed form, but its purpose endures. To speak and listen as equals, with honesty, courage and care, is still the heart of democracy. And this is something we can teach.




The Conversation

Sara Kells does not receive a salary from, consult for, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment cited.

ref. What ancient Athens teaches us about debate – and dissent – in the social media age – https://theconversation.com/what-ancient-athens-teaches-us-about-debate-and-dissent-in-the-social-media-age-270100

Like night and day: why Test cricket changes so much under lights

Source: The Conversation – Global Perspectives – By Vaughan Cruickshank, Senior Lecturer in Health and Physical Education, University of Tasmania

Cricket’s first Test match was played between Australia and England in 1877.

The next Ashes match, starting at the Gabba in Brisbane on Thursday, will be Test number 2,611.




Read more:
The ‘Bazball’ game style has revolutionised English cricket. Australia should be nervous


It will also be the 25th day-night Test.

Many people criticised the introduction of day-night Tests, citing challenges posed by the pink ball (not red, as used in day matches), visibility issues during twilight, and concerns that cricket is putting commercial interests ahead of the sport’s integrity.

But just how are day-night Tests different from traditional day matches?

History of day-night Tests

Australia and New Zealand played the first official day-night Test at the Adelaide Oval in 2015.

Day-night matches were introduced to increase the popularity of Test cricket and to play it at a time when it could attract larger crowds and a greater primetime audience on television.

From a commercial angle, the move has worked. Evening sessions draw larger crowds and television audiences.

Australia has embraced day-night Tests more than any other country, playing in 14 of the 24 completed day-night Tests. England is next with seven.

Australia has also hosted 13 of the day-night Tests, eight of them in Adelaide. India is next with three.

Cricket Australia and various state governments negotiate summer schedules and venues, with only Adelaide, Brisbane and Hobart hosting day-night Tests so far.

Australian dominance

The Australian team’s familiarity with day-night cricket may partly explain its outstanding record of 13 wins and one loss.

In contrast, England has only won two of its seven day-night Tests, losing all three against Australia.

Familiarity and more opportunities have contributed to Australian dominance of day-night Tests. The top four leading wicket-takers in day-night Tests are Australian.

Mitchell Starc leads (81 wickets in 14 Tests) while the best by an English player is the now-retired James Anderson with 24 wickets in seven Tests.

Australia also has the top five run scorers in day-night Tests.

Marnus Labuschagne (958 runs in nine Tests) is the current leader and has the chance to be the first player to score 1,000 runs in day-night encounters. Joe Root (501 runs in seven games) is the top Englishman at sixth on the list.

How things change under lights

Day-night games have several key differences from day Tests, such as the ball, the conditions and the tactics used.

To make day-night Tests work, manufacturers had to develop a ball that’s visible under floodlights, yet durable enough for Test conditions.

Traditional red balls are too difficult to see at night, whereas white balls (used in shorter cricket formats) become dirty and discoloured too quickly.

After years of experimentation with orange and yellow versions, the pink ball emerged as the best compromise. It was trialled in domestic competitions and one-day internationals before being used in Tests.

Batting and bowling under lights is very different from daytime play because the pink ball behaves differently.

Its thicker coating keeps it shiny for longer, which gives fast bowlers more swing and seam movement.

This is most obvious when the ball is new and also during the twilight session, when dew can add extra moisture to the pitch.

Additionally, more grass is often left on the pitch to help reduce damage to the ball.

This all makes life more difficult for batters.

Spinners, though, often struggle because the ball’s harder coating and extra dew reduce grip and turn.

Players have also spoken about the difficulty of adjusting their eyes as daylight fades and floodlights take over. Fielders can also lose sight of the ball against the dusky sky.

In day Tests, the average runs per wicket increases slightly from session one to session three, with scoring rates also increasing slightly across the day. This pattern suggests batting becomes easier as the ball softens and the pitch flattens, while bowlers tire and conditions remain stable across daylight hours.

In contrast, session two is the easiest to bat in during day-night Tests. Batting is much harder in session one (when the ball is often new) and in session three under lights.

Pink ball scoring rates are similar to daytime matches but bowlers strike more often.

What about tactics?

Teams have learned to plan around the evening session (session three), when the fading light and cooling air can make batting harder.

Captains often time their declarations or new-ball spells to coincide with the twilight period and choose to bat first.

Fast bowlers in particular relish the chance to attack under lights and many batters say adapting footwork and timing against the moving pink ball is more difficult.

Comparing results

In short, day-night Tests are harder for batters. Fewer runs are scored, wickets fall more quickly, and games generally finish earlier.

When comparing all Tests from the past ten years, teams in day-night matches score about 150 fewer runs per game and bowlers need ten fewer balls to take each wicket.

Day-night Tests also tend to end with a result sooner, with matches on average being around 50 overs shorter. Notably, none of the 24 day-night Tests played so far has ended in a draw, compared with 14% of day Tests.

Thursday’s second Ashes Test at the Gabba will be the fourth day-night Test at the Queensland ground.

The Australians lost the previous day-night Gabba Test, to the West Indies last summer, which will give England some hope after their disastrous loss in the opening Ashes clash in Perth.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Like night and day: why Test cricket changes so much under lights – https://theconversation.com/like-night-and-day-why-test-cricket-changes-so-much-under-lights-267320

Kim Kardashian’s brain scan shows ‘low activity’ and holes. I’m a brain expert and I have questions

Source: The Conversation – Global Perspectives – By Sarah Hellewell, Senior Research Fellow, The Perron Institute for Neurological and Translational Science, and Research Fellow, Faculty of Health Sciences, Curtin University

A recent episode of The Kardashians shared some startling news about Kim Kardashian’s brain.

Discussing Kim’s recent brain scan, her doctor pointed out “holes” on the scan that he said were related to “low activity”.

While this sounds incredibly sad and concerning, doctors and scientists have doubts about the technology used and its growing commercialisation.

I study brain health, including imaging the brain to look for early signs of disease.

Here’s what I think about this technology, whether it can really find holes in our brains, and whether we should be getting these scans to check our own.

What can imaging really tell you?

Earlier this year, Kim was diagnosed with a brain aneurysm, or widening of an artery, after an MRI.

The type and extent of this aneurysm is unclear. And there doesn’t seem to be a clear link between her aneurysm and this recent news.

But we do know the latest announcement came after a different type of imaging, known as single-photon emission computed tomography (SPECT).

This involves injecting radioactive chemicals into the blood and using a special camera which creates 3D images of organs, including the brain. This type of imaging was developed in 1976 and was first used in the brain in 1990.

SPECT scans can be used to track and measure blood flow in organs, and are used by doctors to diagnose and guide treatment for conditions affecting the brain, heart and bones.

While SPECT does have some clinical use in limited circumstances, there is no good evidence for SPECT scans outside these purposes.

Enter the world of celebrities and private clinics

The clinic featured in The Kardashians episode offers SPECT scans to its clients, including the Kardashian-Jenners.

SPECT images have mass appeal due to their aesthetically pleasing pastel colours, widespread promotion on social media, and claims these scans can be used to diagnose any number of conditions. These include stress (as in Kim’s case), Alzheimer’s, ADHD, brain injury, eating disorders, sleep problems, anger and even marital problems.

But the lack of scientific evidence to support the use of SPECT as a diagnostic tool for an individual, and for so many conditions, has led many doctors, scientists and former patients to criticise the work of such clinics as scientifically unfounded and “snake oil”.

Scans could potentially show changes in blood flow, though these may be common across conditions. Blood flow can also vary depending on the area of the brain examined, time of day, and even how well-rested a person is.

Areas in which blood flow is reduced have been described as “holes”, “dents” or “dings” on such SPECT scans.

In Kim’s case, this reduced blood flow was explained as “low activity” of the brain. Her doctor suggested the frontal lobes of her brain were not working as they should be, due to chronic stress.

But there is no scientific evidence to link these changes in blood flow to stress or functional outcomes. In fact, there is no single technique with scientific support to link changes in brain function to symptoms or outcomes for an individual.

These scans aren’t cheap

Doctors have several concerns about people without symptoms seeking SPECT as a diagnostic tool. First, people are injected with radioactive materials without a defined clinical reason.

Patients may also undergo treatment, or be recommended to take particular supplements, based on a diagnosis from SPECT that is scientifically unfounded.

And as SPECT scans of this kind are not recognised as medically necessary, patients pay upwards of US$3,000 for a SPECT scan, with dietary supplements costing extra.

Do I need a scan like this?

While imaging tools such as SPECT and MRI may be genuinely used to diagnose many conditions, there is no medical need for healthy people to have them.

Such scans for healthy people are often described as “opportunistic”, with a double meaning: they may possibly find something in a person with no symptoms, but at several thousand dollars a scan, they take advantage of people’s health anxieties and can lead to unnecessary use of the health-care system.

It can be tempting to follow in the footsteps of the stars and look for diagnoses via popularised and widely advertised scans. But it’s important to remember the best medical care is based on solid scientific evidence, provided by experts who use best-practice tools based on decades of research.

The Conversation

Sarah Hellewell receives funding from the Medical Research Future Fund for MRI-based research.

ref. Kim Kardashian’s brain scan shows ‘low activity’ and holes. I’m a brain expert and I have questions – https://theconversation.com/kim-kardashians-brain-scan-shows-low-activity-and-holes-im-a-brain-expert-and-i-have-questions-271083

Why we remember the source of an opinion better than the source of a fact – new research

Source: The Conversation – Global Perspectives – By Daniel Mirny, Assistant Professor of Marketing, IESE Business School (Universidad de Navarra)

Anton Vierietin/Shutterstock

In public discourse, we spend a great deal of collective energy debating the accuracy of facts. We fact-check politicians, monitor social media for misinformation, and prioritise data-driven decision-making in our workplaces. This focus is vital; the distinction between truth and falsehood is the bedrock of a functioning society.

However, by focusing so intently on factual accuracy, we risk overlooking another fundamental distinction: the difference between a fact and an opinion.

A statement of fact is relatively easy to verify: it is either true or not. But a claim’s objectivity – is it a verifiable objective statement or a subjective expression of belief? – is far more complex. This is why our minds process and encode opinions in a fundamentally different way to facts.

The stakes of objectivity

Objectivity is not a mere linguistic nuance; it lies at the foundation of important policy and legal debates. For instance, in defamation lawsuits against US media figures like Tucker Carlson and Sidney Powell, legal defences have hinged on whether statements could “reasonably be interpreted as facts” or were merely “opinions.” Similarly, social media platforms have struggled with whether to fact-check posts labelled as opinions, a policy that has recently complicated efforts to combat climate change denialism.

The distinction matters because it frames how we disagree. When a claim is clearly an opinion – for instance, “the current administration is failing the working class” – one may agree or disagree, but we understand that there is room for disagreement and neither side is inherently right nor wrong.

However, a factual statement – “The official US poverty rate was 10.6% in 2024” – leaves little room for debate. It necessitates the existence of a source, and an objectively correct response.

As a result, beliefs about claim objectivity can stifle receptiveness to conflicting perspectives. This, in turn, fuels interpersonal conflict and drives political polarisation.

The information we value

Despite these high stakes, there has been limited research on the cognitive implications of claim objectivity. In a recent series of 13 pre-registered experiments involving 7,510 participants, conducted with UCLA Anderson’s Stephen Spiller and published in the Journal of Consumer Research, we investigated how claim objectivity affects a specific and crucial type of memory: source memory.

Our findings suggest that the human mind does not treat facts and opinions equally. When it comes to remembering who said what, objective facts are at a distinct disadvantage.

We can illustrate this with an example. A doctor makes the factual claim that “the measles vaccine prevented an estimated 56 million deaths between 2000 and 2021.” Another doctor might say something similar, but give an opinion instead of data: “I believe vaccination is an easy way to prevent unnecessary suffering.”

In our research, we tested this dynamic, using medical claims about a fictitious disease to control for prior knowledge. We found that people are significantly more likely to remember the original source of an opinion than that of a fact.

Crucially, this is not because opinions are simply “catchier” or easier to remember in general. Across all 13 of our experiments, we also measured “recognition memory” – the ability to remember that a statement was made at all. We found no consistent difference in recognition memory between facts and opinions. Participants remembered seeing factual claims and opinions equally well. However, they struggled to link the factual claims back to the correct source.






Encoding the source

Why does this disconnect occur? Source memory is a form of associative memory. It relies on the brain’s ability to bind distinct components of an experience – what was said and who said it – into a coherent web of interconnected elements during the initial encoding of information.

We propose that the strength of this binding depends on one thing: what the claim tells us about its source.

Both facts and opinions provide information about the source, but they do so to different degrees. If a political candidate says “The United States Agency for International Development (USAID) was created by the Foreign Assistance Act of 1961,” we learn that they know about legislative history. But if that same candidate says, “I believe shuttering USAID has been a moral catastrophe for our nation and the world,” we learn far more about them. We learn about their values, their priorities, and their stance on America’s role in the world.

Because opinions generally provide more information about the speaker than facts do, our brains encode stronger links between sources and opinions than between sources and facts.

Studies in developmental psychology and neuroscience support this. Research has found that when encoding opinions compared to facts, there is greater activation in the brain regions involved in theory of mind – the ability to represent the thoughts and mental states of others.

When we hear an opinion, we are building a richer mental model of the speaker. This additional social information strengthens the associative links formed during encoding.






But what happens when opinions tell us nothing about a source? We tested this mechanism by presenting participants with book reviews. When participants believed the sources were the authors of the reviews, they remembered the sources of opinions far better than facts. However, when we told participants the sources were merely “re-tellers” reading randomly selected reviews, the source memory advantage for opinions disappeared, performing on par with facts.

We also tested source memory for facts that reveal something about a source, such as personal statements like “I was born in Virginia”. In these cases, source memory was just as accurate as it was for opinions like “chocolate ice cream tastes better than vanilla”. It was also more accurate than for general facts about the world, such as “Stockholm is the capital of Sweden”.

The visibility paradox

These findings present a major challenge for experts and leaders. Authorities are often advised to “stick to the facts” to maintain credibility, but our findings suggest that by presenting only facts, experts risk being forgotten as the sources of important information.

This may pose a problem for the credibility of information – in an age of rampant misinformation and growing polarisation, remembering who said what is increasingly important to avoid conflict and ensure accuracy.

For experts, the goal is often to anchor facts in reality. Our research suggests that sharing opinions can help people to accurately attribute relevant information to credible sources. By sharing what they believe about the data – rather than just the data itself – experts can provide the social cues that our brains need to more strongly bind the information to its source. While facts play an important role in the battle against misinformation, opinions may be just as critical – and they don’t go unnoticed.




The Conversation

This research was conducted in part thanks to the generous support of the UCLA Anderson Morrison Center for Marketing and Data Analytics.

ref. Why we remember the source of an opinion better than the source of a fact – new research – https://theconversation.com/why-we-remember-the-source-of-an-opinion-better-than-the-source-of-a-fact-new-research-270579

Lasting peace in Ukraine may hinge on independent monitors – yet Trump’s 28-point plan barely mentions them

Source: The Conversation – Global Perspectives – By Peter J. Quaranto, Visiting Professor of the Practice, University of Notre Dame

Russian President Vladimir Putin attends a meeting with U.S. representatives Steve Witkoff and Jared Kushner (both not pictured) on Dec. 2, 2025. Alexander Kazakov/ AFP via Getty Images

Start-and-stop negotiations for a deal to end the war in Ukraine have been injected with new intensity after U.S. President Donald Trump’s administration unveiled a 28-point peace proposal.

It is far from clear whether the latest flurry of diplomacy, which on Dec. 2, 2025, saw Trump’s envoys Steve Witkoff and Jared Kushner meet with Russian President Vladimir Putin, will force the warring parties any closer to a resolution in the grinding, nearly four-year-long conflict.

Yet even if negotiators can broker a welcome deal to stop the current fighting, they will immediately be faced with the challenges of sustaining and implementing it.

And many peace accords fall apart quickly and are followed by new waves of violence.

Our research as scholars focusing on peace monitoring and Ukraine suggests that one thing is key in managing mistrust between parties involved in any peace plan: multifaceted third-party monitoring.

The University of Notre Dame’s Peace Accords Matrix – the largest collection of implementation data on intrastate peace agreements – shows clear evidence that built-in safeguards, such as monitoring and verification by third parties, can increase success rates of peace agreements by more than 29% – meaning no resumption of fighting in the first five years of an accord.

Peace Accords Matrix team members regularly provide support to ongoing peace processes and in the design and implementation of agreements. We believe the program’s research could be applied to the challenges facing future peace in Ukraine.

Lessons from Colombia

The Peace Accords Matrix team’s work in Colombia is instructive on how an effective monitoring mechanism could be shaped in Ukraine.

Notre Dame’s Kroc Institute for International Peace Studies was tasked with carrying out on-the-ground and real-time monitoring of the 2016 peace deal between the Colombian government and the Revolutionary Armed Forces of Colombia, better known as FARC.

The Peace Accords Matrix’s 30-person team in Colombia has served as an independent body monitoring 578 peace accord commitments in areas such as rural reform, political participation and securing justice for victims. These staffers have, for example, traveled to reintegration camps to speak with former combatants, verifying United Nations data on the number of weapons surrendered and destroyed, among other accord targets.

Armed with quantitative and qualitative data, matrix members regularly meet with stakeholders – including victims, former guerrillas and politicians – to assess the status of implementation and to identify areas that need to be prioritized.

Over the past decade, the work has highlighted when and where there has been insufficient progress in boosting livelihoods and leadership opportunities for women and ethnic minorities.

This reporting has prompted new attention toward implementing these obligations laid out in the accord.

What does Ukraine need?

Our experience shows that when it comes to securing a lasting peace in Ukraine, it is imperative that a mandate for robust monitoring is spelled out clearly and realistically. To be effective, a monitoring body must have the independence to fully report and document violations.

That’s just the first step. Consider the failure of the Minsk agreements, signed in 2014 and 2015 to end fighting in the Donbas region of Ukraine between Ukrainian troops and Russian-backed separatists.

Those accords failed in part because the monitoring mission, led by the Organization for Security and Co-operation in Europe, lacked any defined mechanism to press for any action or change once violations – and there were many – had been established.

While the organization’s Special Monitoring Mission may have contributed to some temporary de-escalation in the Donbas conflict, ultimately Russia was able to exploit the weaknesses of the Minsk agreements and commit hostile acts, laying the groundwork for the current war.

Research suggests that monitoring works best when it extends beyond physical ceasefire lines to encompass the cyber domain, too. Moscow has carried out extensive cyberattacks on Ukrainian infrastructure throughout the conflict. Such aggression could continue invisibly despite a ceasefire, allowing one party to pre-position capabilities for future attacks or to conduct espionage without triggering traditional monitoring mechanisms.

Unlike conventional military activities, such cyber hostilities are inherently difficult to monitor and verify. A comprehensive monitoring arrangement will need to grapple with these threats, requiring carefully designed information-sharing protocols with the few international actors capable of monitoring the online activities of both sides.

A bigger tent

A key element of ensuring a durable peace is building trust between conflict parties over time. With the right mandate and authority, monitoring bodies can create space and structure for follow-on dialogue as implementation obstacles emerge. Durable peace processes require fine-tuning to adapt to changing political realities on the ground.

The war in Ukraine has dragged on for nearly four years.
Russian Defense Ministry/Anadolu via Getty Images

Involving public stakeholders in the implementation of a peace agreement is another key element, our research shows. Third-party monitoring can provide the framework for soliciting outside perspectives and participation.

Over the past decade, Ukrainian nongovernmental organizations have steadily developed expertise in monitoring and accountability in areas including elections, procurement, humanitarian operations and potential war crime activity.

Building on this experience by involving broader segments of civil society – including the country’s highly trusted faith-based communities – would strengthen the legitimacy of third-party monitoring in the eyes of the domestic public and assuage uneasy acceptance of any peace accord.

Ready on Day 1

While the United Nations and other multinational bodies are well placed to support some core monitoring tasks, those planning for peace now should, we believe, consider the benefits of involving a wider range of third-party actors. Indeed, many Ukrainians are skeptical that institutions of which Russia is a member can carry out their work with the needed independence.

As we have seen with the Peace Accords Matrix’s experience, the involvement of an independent research institution can open up new possibilities for monitoring.

And ideally, monitoring missions should be ready to go from Day 1, or as close to that as possible.

Comparative research has shown that the speed at which a monitoring mission starts its work can affect its relevance. Yet, many monitoring bodies are wracked by delays due to lack of planning, support and resources.

The current 28-point peace plan being mulled by Russia and Ukraine makes only a brief mention of monitoring, by a “Peace Council, headed by President Donald J. Trump.”

But our experience shows that prioritizing third-party monitoring and delving into the details of how it would be carried out – even as ceasefire negotiations are ongoing – can help ensure the success of a future deal.

It would serve as a vital signal to Ukrainians that, unlike the aftermath of the Minsk agreements, this time the international community will continue to engage and act to ensure their country’s peace.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Lasting peace in Ukraine may hinge on independent monitors – yet Trump’s 28-point plan barely mentions them – https://theconversation.com/lasting-peace-in-ukraine-may-hinge-on-independent-monitors-yet-trumps-28-point-plan-barely-mentions-them-268469

The ports of Le Havre, Marseille and Dunkerque face European competition

Source: The Conversation – in French – By Arnaud Serry, Senior Lecturer (HDR) in Geography, Université Le Havre Normandie

In 2024, the port of Le Havre (Seine-Maritime), managed by the public authority Haropa Port, was Europe’s seventh-largest port by total annual traffic in tonnes. Alexandre Prevot/Shutterstock

A study covering 2018 to 2022 benchmarks the ports of Le Havre, Marseille and Dunkerque against the 49 main European ports. While their indicators lag behind their competitors – the share of ships forced to wait at anchorage, average port-call duration and average handling rate – their progress is real.


Port performance is central to a country’s economic and logistical competitiveness. With supply chains globalised, the fluidity of port transit is a decisive factor in attracting shipping lines, a handful of which control almost all transport capacity and containerised traffic.

Despite investment and undeniable geographic advantages, French ports underperform their major European competitors.

That is what our study, covering 2018 to 2022, shows. Using Automatic Identification System (AIS) data and S&P Global’s Port Performance platform, we compared the performance of France’s three main container ports – Haropa-Le Havre (Seine-Maritime), Marseille-Fos (Bouches-du-Rhône) and Dunkerque (Nord) – with that of the 49 European ports that each handled more than 500,000 twenty-foot equivalent units (TEUs), the standard unit of container shipping, in 2022.

Growth at Le Havre, Marseille and Dunkerque

The three French ports studied have contrasting profiles.

Haropa-Le Havre, the country’s leading container port, handled 3.01 million TEUs in 2022 – a 5% increase on 2018. Marseille-Fos, in second place, reached 1.5 million TEUs, up 8%. Dunkerque, long focused on bulk cargo, made a genuine breakthrough with 745,000 TEUs, a 76% rise over the period.






This growth reflects stronger services operated by CMA CGM, the leading French-flag carrier. At Dunkerque, the company accounts for nearly three-quarters of container-ship calls. The northern port now receives very large vessels of more than 18,000 TEUs, a sign of its move up to mainline services.

Little congestion on the maritime approach

French ports stand out positively for their maritime accessibility.

In 2022, a third of ships calling at European ports, on average, had to wait at anchorage before berthing. In Marseille, the figure was just 4%. At Le Havre, 24% of ships waited in the roadstead – a rising figure, but still close to the regional average – and at Dunkerque it was 16%. Dunkerque also stands out for its average waiting time of just 8.2 hours, against a European average of 25 hours.

These figures reflect smooth maritime traffic and low congestion. That is a strength at a time when many European ports regularly suffer bottlenecks, and when waiting means lost money for operators of scheduled container services.

Share of ships going to anchorage (spending at least 15 minutes in an anchorage zone) in 2022.
S&P Global Market Intelligence, provided by the author

A rate of 35 containers per hour

The flip side appears at the quay. Average port-call duration remains higher in France than elsewhere in Europe.

At Le Havre, it exceeds the average of its Channel and North Sea competitors by six hours. At Marseille and Dunkerque, durations are closer to those observed in their respective regions, but do not beat them.

One of the main factors behind this lag is the productivity of cargo handling. On average, French ports move fewer containers per hour and per crane. At Marseille-Fos, productivity reaches only 35 containers per hour, against a European average of more than 55. This weakness stems from the limited number of cranes deployed: 1.5 per call on average, against 2.2 in Europe.

Average number of containers handled per ship per hour in 2022.
S&P Global Market Intelligence, provided by the author

We can also note that French ports still play a limited role in transshipment, the routing of containers through an intermediate destination. At Le Havre (50.8 containers per hour) and Dunkerque (45.6), the situation is better but remains below the Channel and North Sea ports, where the average stands at 61.8.

Low handling rates

Beyond speed, the ratio of containers actually handled to the theoretical capacity of the ships reveals another structural weakness. The average handling rate is 44.4% in Europe, but only 28% at Le Havre, 30% at Dunkerque and 33% at Marseille. The three French ports have among the lowest average rates of the ports studied.

In other words, for the same container ship with a theoretical capacity of 20,000 TEUs, 5,600 TEUs would be loaded and unloaded at Le Havre, against 11,000 TEUs at Antwerp. This gap illustrates the intermediate position French ports occupy in the itineraries of the major shipping lines: their calls are shorter and lighter.
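The arithmetic behind these figures is easy to check. The sketch below recomputes TEUs handled per call from the study’s average rates; the 20,000-TEU ship and Antwerp’s 11,000 TEUs come from the example above, while the helper function itself is purely illustrative:

```python
# Containers moved per call = theoretical ship capacity x average handling rate.
def teu_handled(capacity_teu: int, handling_rate: float) -> int:
    """TEUs loaded and unloaded during one port call."""
    return round(capacity_teu * handling_rate)

CAPACITY = 20_000  # theoretical capacity of the example ship, in TEUs

# Average handling rates for 2022 reported in the study.
rates = {"Le Havre": 0.28, "Dunkerque": 0.30, "Marseille": 0.33, "Europe": 0.444}

for port, rate in rates.items():
    print(f"{port}: {teu_handled(CAPACITY, rate):,} TEUs per call")

# Antwerp's 11,000 TEUs per call implies a handling rate of 55%.
print(f"Antwerp implied rate: {11_000 / CAPACITY:.0%}")
```

The same 20,000-TEU ship thus unloads roughly twice as much cargo in Antwerp as in Le Havre, which is the gap the study highlights.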

In the current context and in the strategies of the major carriers, some calls matter less than others. When shipping lines are reorganised, carriers may drop certain ports from their services and concentrate on a smaller number of calls – the practice known as blank sailing.

As maritime services focus on fewer calls, ports with low handling rates are the most likely to be skipped. It should nevertheless be stressed that the handling rates of Marseille and Dunkerque have risen steadily since 2018, by 7 and 13 percentage points respectively.

Haropa as a single authority

Port competitiveness is not just a matter of technical performance. It also depends on the quality of inland connections, the functioning of terminals, labour relations and the ability to offer integrated logistics services.

French ports still suffer from institutional fragmentation and sometimes complex governance, where their northern competitors have streamlined and industrialised their processes. The merger of Haropa into a single public body in 2021 is a step in the right direction, but its effects will only be felt over the long term.

Towards a national strategy of reconquest?

Despite their connections to the world’s major shipping routes, French ports perform less well than their main European competitors. Faced with the dominance of Antwerp, Rotterdam and Hamburg, room for manoeuvre exists. The challenge is not to compete on size, but on efficiency and reliability. Reducing port-call durations, improving terminal productivity and smoothing rail and river connections are priorities.

French ports have real assets: large land reserves, a strategic geographic position between the Mediterranean and the Channel, and generally good maritime accessibility. At the same time, several European ports are experiencing episodes of congestion, while France benefits from the presence of a global carrier, CMA CGM, and major investment by MSC at Le Havre.

Turning these assets into a lasting competitive advantage will require sustained modernisation and logistical coordination. In a world where every hour of transit counts, port competitiveness has become a key indicator of economic sovereignty.

The Conversation

Ronan Kerbiriou has received funding from the SEFACIL foundation.

Arnaud Serry does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research institution.

ref. Les ports du Havre, de Marseille et de Dunkerque face à la concurrence européenne – https://theconversation.com/les-ports-du-havre-de-marseille-et-de-dunkerque-face-a-la-concurrence-europeenne-269646

Google’s proposed data center in orbit will face issues with space debris in an already crowded orbit

Source: The Conversation – USA – By Mojtaba Akhavan-Tafti, Associate Research Scientist, University of Michigan

This rendering shows satellites orbiting Earth. yucelyilmaz/iStock via Getty Images

The rapid expansion of artificial intelligence and cloud services has led to a massive demand for computing power. The surge has strained data infrastructure, which requires lots of electricity to operate. A single, medium-sized data center here on Earth can consume enough electricity to power about 16,500 homes, with even larger facilities using as much as a small city.

Over the past few years, tech leaders have increasingly advocated for space-based AI infrastructure as a way to address the power requirements of data centers.

In space, sunshine – which solar panels can convert into electricity – is abundant and reliable. On Nov. 4, 2025, Google unveiled Project Suncatcher, a bold proposal to launch an 81-satellite constellation into low Earth orbit. It plans to use the constellation to harvest sunlight to power the next generation of AI data centers in space. So, instead of beaming power back to Earth, the constellation would beam data back to Earth.

For example, if you asked a chatbot how to bake sourdough bread, instead of firing up a data center in Virginia to craft a response, your query would be beamed up to the constellation in space, processed by chips running purely on solar energy, and the recipe sent back down to your device. Doing so would mean leaving the substantial heat generated behind in the cold vacuum of space.

As a technology entrepreneur, I applaud Google’s ambitious plan. But as a space scientist, I predict that the company will soon have to reckon with a growing problem: space debris.

The mathematics of disaster

Space debris – the collection of defunct human-made objects in Earth’s orbit – is already affecting space agencies, companies and astronauts. This debris includes large pieces, such as spent rocket stages and dead satellites, as well as tiny flecks of paint and other fragments from discontinued satellites.

Space debris travels at hypersonic speeds of approximately 17,500 miles per hour (28,000 km/h) in low Earth orbit. At this speed, colliding with a piece of debris the size of a blueberry would feel like being hit by a falling anvil.
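To see roughly why a blueberry-sized fragment hits like a falling anvil, compare kinetic energies. The 5 g fragment and 50 kg anvil below are illustrative assumptions, not figures from the article:

```python
V_DEBRIS = 28_000 / 3.6  # orbital debris speed: 28,000 km/h in m/s (~7,800 m/s)
G = 9.81                 # gravitational acceleration, m/s^2

# Kinetic energy of a blueberry-sized fragment (assumed mass: 5 g).
m_fragment = 0.005  # kg
ke_fragment = 0.5 * m_fragment * V_DEBRIS**2
print(f"Fragment energy: {ke_fragment / 1000:.0f} kJ")  # ~151 kJ

# Drop height at which a 50 kg anvil carries the same energy (m * g * h = KE).
m_anvil = 50.0  # kg
drop_height = ke_fragment / (m_anvil * G)
print(f"Equivalent anvil drop: {drop_height:.0f} m")
```

Even a few grams at orbital speed carry the energy of a heavy object dropped from hundreds of meters, which is why sub-softball debris is still lethal.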

Satellite breakups and anti-satellite tests have created an alarming amount of debris, a crisis now exacerbated by the rapid expansion of commercial constellations such as SpaceX’s Starlink. The Starlink network has more than 7,500 satellites, which provide global high-speed internet.

The U.S. Space Force actively tracks over 40,000 objects larger than a softball using ground-based radar and optical telescopes. However, this number represents less than 1% of the lethal objects in orbit. The majority are too small for these telescopes to reliably identify and track.

In November 2025, three Chinese astronauts aboard the Tiangong space station were forced to delay their return to Earth because their capsule had been struck by a piece of space debris. Back in 2018, a similar incident on the International Space Station challenged relations between the United States and Russia, as Russian media speculated that a NASA astronaut may have deliberately sabotaged the station.

The orbital shell Google’s project targets – a Sun-synchronous orbit approximately 400 miles (650 kilometers) above Earth – is a prime location for uninterrupted solar energy. At this orbit, the spacecraft’s solar arrays will always be in direct sunshine, where they can generate electricity to power the onboard AI payload. But for this reason, Sun-synchronous orbit is also the single most congested highway in low Earth orbit, and objects in this orbit are the most likely to collide with other satellites or debris.

As new objects arrive and existing objects break apart, low Earth orbit could approach Kessler syndrome. In this theory, once the number of objects in low Earth orbit exceeds a critical threshold, collisions between objects generate a cascade of new debris. Eventually, this cascade of collisions could render certain orbits entirely unusable.

Implications for Project Suncatcher

Project Suncatcher proposes a cluster of satellites carrying large solar panels. They would fly in a formation with a radius of just one kilometer, with neighboring satellites spaced less than 200 meters apart. To put that in perspective, imagine a racetrack roughly the size of the Daytona International Speedway, where 81 cars race at 17,500 miles per hour – while separated by gaps about the distance you need to safely brake on the highway.

This ultradense formation is necessary for the satellites to transmit data to each other. The constellation splits complex AI workloads across all its 81 units, enabling them to “think” and process data simultaneously as a single, massive, distributed brain. Google is partnering with a space company to launch two prototype satellites by early 2027 to validate the hardware.

But in the vacuum of space, flying in formation is a constant battle against physics. While the atmosphere in low Earth orbit is incredibly thin, it is not empty. Sparse air particles create orbital drag on satellites – this force pushes against the spacecraft, slowing it down and forcing it to drop in altitude. Satellites with large surface areas have more issues with drag, as they can act like a sail catching the wind.

To add to this complexity, streams of particles and magnetic fields from the Sun – known as space weather – can cause the density of air particles in low Earth orbit to fluctuate in unpredictable ways. These fluctuations directly affect orbital drag.
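The drag described above follows the standard aerodynamic drag relation, a = 0.5 · ρ · v² · C_d · A / m. The sketch below plugs in assumed, order-of-magnitude values; the density figures, drag coefficient, panel area and mass are all illustrative, chosen only to show how drag scales with surface area and how strongly it swings with solar activity.

```python
def drag_acceleration(rho, v, cd, area, mass):
    """Standard drag relation a = 0.5 * rho * v^2 * Cd * A / m, in m/s^2."""
    return 0.5 * rho * v**2 * cd * area / mass

V = 7_800.0      # orbital speed in m/s
CD = 2.2         # typical satellite drag coefficient (assumed)
AREA = 100.0     # large solar-panel cross-section in m^2 (assumed)
MASS = 1_000.0   # spacecraft mass in kg (assumed)

# Density at these altitudes swings by an order of magnitude or more
# with solar activity; both values here are illustrative assumptions.
for label, rho in [("quiet Sun", 1e-14), ("active Sun", 1e-13)]:
    a = drag_acceleration(rho, V, CD, AREA, MASS)
    print(f"{label}: {a:.2e} m/s^2")
```

Doubling the panel area doubles the drag, and a solar storm that raises local density tenfold raises drag tenfold, which is why large-panel satellites in a tight formation must continually fight to hold their positions.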

When satellites are spaced less than 200 meters apart, the margin for error evaporates. A single impact could not only destroy one satellite but also send its fragments blasting into its neighbors, triggering a cascade that could wipe out the entire cluster and scatter millions of new pieces of debris into an orbit that is already a minefield.

The importance of active avoidance

To prevent crashes and cascades, satellite companies could adopt a “leave no trace” standard: designing satellites that do not fragment, release debris or endanger their neighbors, and that can be safely removed from orbit. For a constellation as dense and intricate as Suncatcher, meeting this standard might require equipping the satellites with “reflexes” that autonomously detect and dance through a debris field. Suncatcher’s current design doesn’t include these active avoidance capabilities.

In the first six months of 2025 alone, SpaceX’s Starlink constellation performed a staggering 144,404 collision-avoidance maneuvers to dodge debris and other spacecraft. Similarly, Suncatcher would likely encounter debris larger than a grain of sand every five seconds.
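To get a feel for that maneuver count, it helps to divide it out per satellite. The fleet size used below is an assumption for illustration; Starlink’s exact satellite count changes constantly.

```python
MANEUVERS = 144_404    # reported collision-avoidance maneuvers
PERIOD_DAYS = 182      # first six months of 2025
FLEET_SIZE = 7_000     # assumed approximate Starlink fleet size

per_day_fleet = MANEUVERS / PERIOD_DAYS      # fleet-wide dodges per day
per_sat_per_day = per_day_fleet / FLEET_SIZE
days_between = 1 / per_sat_per_day           # days between dodges, per satellite

print(f"fleet-wide: about {per_day_fleet:.0f} maneuvers per day")
print(f"per satellite: one maneuver roughly every {days_between:.0f} days")
```

Under this assumed fleet size, that works out to nearly 800 dodges a day across the fleet, or roughly one maneuver per satellite every nine days.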

Today’s object-tracking infrastructure is generally limited to debris larger than a softball, leaving millions of smaller debris pieces effectively invisible to satellite operators. Future constellations will need an onboard detection system that can actively spot these smaller threats and maneuver the satellite autonomously in real time.

Equipping Suncatcher with active collision avoidance capabilities would be an engineering feat. Because of the tight spacing, the constellation would need to respond as a single entity. Satellites would need to reposition in concert, similar to a synchronized flock of birds. Each satellite would need to react to the slightest shift of its neighbor.

Detecting space debris in orbit can help prevent collisions.

Paying rent for the orbit

Technological solutions, however, can go only so far. In September 2022, the Federal Communications Commission created a rule requiring satellite operators to remove their spacecraft from orbit within five years of the mission’s completion. This typically involves a controlled de-orbit maneuver. Operators must now reserve enough fuel to fire the thrusters at the end of the mission to lower the satellite’s altitude, until atmospheric drag takes over and the spacecraft burns up in the atmosphere.
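The size of the fuel reserve such a rule implies can be estimated with the vis-viva equation. The sketch below computes a single idealized burn that drops a satellite’s perigee from a circular 650 km orbit to 100 km, where drag finishes the job; the 100 km target perigee and the impulsive-burn idealization are simplifying assumptions.

```python
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def deorbit_delta_v(alt_start_m, alt_perigee_m):
    """Impulsive retrograde burn at the start altitude that lowers the
    perigee to the target altitude. Uses vis-viva: v^2 = mu*(2/r - 1/a)."""
    r1 = R_EARTH + alt_start_m       # burn point, which becomes apogee
    r2 = R_EARTH + alt_perigee_m     # new perigee after the burn
    v_circular = math.sqrt(MU / r1)
    a_transfer = (r1 + r2) / 2       # semi-major axis of the transfer ellipse
    v_apogee = math.sqrt(MU * (2 / r1 - 1 / a_transfer))
    return v_circular - v_apogee

dv = deorbit_delta_v(650e3, 100e3)
print(f"de-orbit burn: about {dv:.0f} m/s")
```

Under these assumptions the burn comes out to roughly 150 to 160 m/s, a meaningful slice of a small satellite’s total propellant budget that must be held in reserve for the end of the mission.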

However, the rule does not address the debris already in space, nor any future debris, from accidents or mishaps. To tackle these issues, some policymakers have proposed a use-tax for space debris removal.

A use-tax or orbital-use fee would charge satellite operators a levy based on the orbital stress their constellation imposes, much like larger or heavier vehicles paying greater fees to use public roads. These funds would finance active debris removal missions, which capture and remove the most dangerous pieces of junk.

Avoiding collisions is a temporary technical fix, not a long-term solution to the space debris problem. As some companies look to space as a new home for data centers, and others continue to send satellite constellations into orbit, new policies and active debris removal programs can help keep low Earth orbit open for business.

The Conversation

Mojtaba Akhavan-Tafti receives funding from NASA and Intelligence Advanced Research Projects Activity (IARPA). He teaches space systems engineering and mission design and management at the University of Michigan’s College of Engineering.

ref. Google’s proposed data center in orbit will face issues with space debris in an already crowded orbit – https://theconversation.com/googles-proposed-data-center-in-orbit-will-face-issues-with-space-debris-in-an-already-crowded-orbit-270410

Labeling dissent as terrorism: New US domestic terrorism priorities raise constitutional alarms

Source: The Conversation – USA – By Melinda Haas, Assistant Professor of International Affairs, University of Pittsburgh

A new Trump administration policy threatens to undermine foundational American commitments to free speech and association. D-Keine, Getty Images

A largely overlooked directive issued by the Trump administration marks a major shift in U.S. counterterrorism policy, one that threatens bedrock free speech rights enshrined in the Bill of Rights.

National Security Presidential Memorandum/NSPM-7, issued on Sept. 25, 2025, is a presidential directive that for the first time appears to authorize preemptive law enforcement measures against Americans based not on whether they are planning to commit violence but on their political or ideological beliefs.

You’ve probably heard a lot about President Donald Trump’s many executive orders. But as an international relations scholar who has studied U.S. foreign policy decision-making and national security legislation, I recognize that presidents can take several types of executive actions without legislative involvement: executive orders, memoranda and proclamations.

This structure allows the president to direct law enforcement and national security agencies, with little opportunity for congressional oversight.

This seventh national security memorandum from the Trump White House pushes the limits of presidential authority by targeting individuals and groups as potential domestic terrorists based on their beliefs rather than their actions.

The memorandum represents a profound shift in U.S. counterterrorism policy, one that risks undermining foundational American commitments to free speech and association.

The presidential memorandum signed by Donald Trump identifies ‘anti-Christian,’ ‘anti-capitalism’ or ‘anti-American’ views as potential indicators that a group or person will commit domestic terrorism.
Andrew Harnik/Getty Images

Presidential national security powers

Executive memoranda instruct government officials and agencies by delegating tasks and directing agency actions.

They can, for example, order a department to prepare reports, implement new policies, coordinate interagency efforts or review existing programs to align with the administration’s priorities.

Unlike executive orders, they are not required to be published. When these memoranda, like NSPM-7, relate to national security, military affairs or foreign policy, they are called national security directives, although the specific name of these directives changes with each administration.

Many of these directives are classified. They may not be declassified until years or decades after the end of the administration that issued them, if at all.

The stated purpose of NSPM-7 is to counter domestic terrorism and organized political violence, focusing mainly on perceived threats from the political left. The memorandum identifies “anti-Christian,” “anti-capitalism” or “anti-American” views as potential indicators that a group or person will commit domestic terrorism.

The memorandum claims that political violence originates with “anti-fascist” groups that hold the following views: “support for the overthrow of the United States Government; extremism on migration, race, and gender; and hostility towards those who hold traditional American views on family, religion, and morality.”

The strategy laid out in NSPM-7 includes preemptive measures to disrupt groups before they engage in violent political acts. For example, multiagency task forces are empowered to investigate potential federal crimes related to radicalization, as well as the funders of those potential crimes.

‘Domestic terrorist organizations’

The memorandum directs the Department of Justice to focus the resources of the FBI’s approximately 200 Joint Terrorism Task Forces on investigating “acts of recruiting or radicalizing persons” for the purpose of “political violence, terrorism, or conspiracy against rights; and the violent deprivation of any citizen’s rights.”

NSPM-7 also allows the attorney general to propose groups for designation as “domestic terrorist organizations.” That includes groups that engage in the following behaviors: “organized doxing campaigns, swatting, rioting, looting, trespass, assault, destruction of property, threats of violence, and civil disorder.”

Existing laws allow the secretary of state to designate groups as “foreign terrorist organizations” that are then subject to financial sanctions.

But these laws do not permit the president to label domestic groups this way.

Would protesters like these at a Washington, D.C., ‘No Kings’ demonstration be seen as potential domestic terrorists by the Trump administration?
Jose Luis Magana/AP

Defining terrorism

NSPM-7 marks a major conceptual shift in U.S. counterterrorism policy. Its focus on domestic terrorism significantly departs from historical approaches that primarily targeted foreign threats.

Earlier presidential directives largely defined terrorism as a foreign threat to be countered through military power, diplomacy and international cooperation.

Beginning with Ronald Reagan’s presidency, the U.S. government treated terrorism as a global menace to democratic institutions, emphasizing protection of citizens and allies abroad. By moving away from a traditional law enforcement framework and recasting terrorism as an act of war, the Reagan administration situated the issue within the broader realm of Cold War geopolitics and military advantage.

In the 1990s, the Clinton administration reframed terrorism as both a foreign policy and domestic security challenge, particularly after high-profile attacks such as the 1993 World Trade Center bombing and the 1995 Oklahoma City bombing. Clinton’s policy highlighted the dangers of transnational networks and the need to defend critical infrastructure.

After the 9/11 attacks, the Bush administration fused counterterrorism with national defense. The Bush-initiated global war on terrorism expanded the concept of who constituted a threat to include countries that harbored or aided terrorist organizations.

The Obama administration tried to narrow and regulate those powers by embedding counterterrorism within a system of legal rules and procedures. The key question, according to the declassified guidance, was whether the targeted individuals “pose a continuing, imminent threat to U.S. persons.”

This standard was not focused on ideology but rather on tactical considerations, such as the feasibility of capture and continued threat to U.S. interests.

For example, the lethal drone strike on al-Qaida propagandist Anwar al-Awlaki in 2011 was justified on the basis that he was actively involved in plotting attacks and remained unreachable for capture.

During the first Trump presidency, executive orders were used to change counterterrorism policy, most notably through several iterations of a “travel ban” that attempted to restrict immigration from terror-prone countries such as Iraq, Iran, Somalia, Syria and Yemen.

The Biden administration redirected attention toward preventing catastrophic threats, especially from weapons of mass destruction in the hands of groups or individuals outside of governments, such as terrorist organizations.

First Amendment rights at risk

There is no single official definition of terrorism in U.S. law.

Instead, laws use different definitions based on their purpose, whether criminal law or laws relating to intelligence collection or civil liability.

Definitions in all those areas typically focus on identifying violent or dangerous acts done with the intent to intimidate or coerce civilians or influence government policy.

But more than redefining terrorism, NSPM-7 reorients the machinery of national security toward the policing of belief.

The First Amendment generally prevents the government from punishing people for unpopular opinions. It also protects the ability for people to associate to advance public and private ideas in pursuit of political, economic, religious or cultural goals.

The directive’s emphasis on ideological orientations – “anti-Christianity,” “anti-capitalism” and “anti-American” views – as indicators of domestic terrorism potentially jeopardizes First Amendment rights.

Thirty-one members of Congress sent a letter to Trump expressing “serious concerns” about NSPM-7, warning that it poses “serious constitutional, statutory and civil liberties risks, especially if used to target political dissent, protest or ideological speech.”

As the ACLU warns, any definition of terrorism that includes ideological components risks criminalizing people or groups based on belief rather than based on violence or other criminal conduct.

Congress has declined to create a domestic complement to the foreign terrorist designation in large part because of the potential for impinging on First Amendment–protected association and speech.

But I fear that chilling speech may be the point.

Silencing dissent

NSPM-7 does not authorize new actions in the legal and institutional framework for counterterrorism. It does not criminalize previously legal conduct.

Rather, it states that the Trump administration’s investigative focus will center on the identity and ideology of supposed perpetrators. Prioritizing investigations into this broad swath of ideologies serves to instill fear, silencing anti-fascist and other messages in opposition to the Trump administration.

Law professor Steve Vladeck frames this chill as “obeying in advance,” in which organizations self-censor rather than risk investigation, prosecution or defending against the “domestic terrorist” label.

Although left-wing violence has risen in the past decade, empirical evidence shows that it remains at very low absolute levels, well below historical levels of right-wing or jihadist violence.

In fact, most domestic terrorists in the U.S. are politically on the right, and right-wing attacks account for the vast majority of fatalities from domestic terrorism.

Yet NSPM-7 focuses disproportionately on left-wing ideologies. It departs from prior U.S. counterterrorism frameworks by prioritizing the suppression of ideologically motivated dissent, even in the absence of concrete evidence of violent intent.

The Conversation

Melinda Haas does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Labeling dissent as terrorism: New US domestic terrorism priorities raise constitutional alarms – https://theconversation.com/labeling-dissent-as-terrorism-new-us-domestic-terrorism-priorities-raise-constitutional-alarms-269161