Every morning for the past 32 years, I have been counting earwigs. Here at Marshalls Heath, a small nature reserve in Hertfordshire, the only site where these nocturnal insects have been so systematically monitored for so long, the number of common earwigs has declined dramatically.
Using a light trap (equipment that entices nocturnal flying insects towards an artificial light and into a box until they can be counted and released in the morning), I found 282 earwigs in a single night in 1996. In 2024, only 31 adults were trapped in that entire year.
My new study, published in the Entomologist’s Record and Journal of Variation, indicates that this catastrophic decline appears to be due to very late frosts, with much higher numbers in the 1990s linked to runs of sunny, dry summers. I have received anecdotal reports of declines in other parts of the UK, so this could be a more widespread phenomenon.
Most gardeners regard earwigs as a pest because they eat some leaves and flower petals. But their favourite foods also include woolly apple aphids, codling moth caterpillars and pear psyllid – these are all tiny insects that feed on apples and pears. In commercial orchards, earwigs are introduced to eat other insects and control apple and pear pests.
Like many insects, earwigs break down waste and decompose dead matter – this improves soil structure and helps create a healthy landscape. They also provide a source of food for some birds and small mammals.
A frosty decline
My research shows that the decline at Marshalls Heath is due to a number of factors. Years of exceptionally high numbers of earwigs were those when summer sunshine had been greatest and rainfall lowest in the two previous years, and when autumns had been cool and springs dry – weather in the current year had no effect on numbers. But this does not explain the sudden drop in numbers in 2020.
John Barrett Murray has used a light trap to catch, count and then release earwigs. John Barrett Murray, CC BY-NC-ND
A key part of the puzzle is how adult female earwigs behave towards their young. Unusually for an insect, studies dating back to 1941 have shown that the mother defends and incubates her eggs and cares for her newly hatched young in an underground nest.
As the young earwigs (called nymphs) grow, they may accompany their mother when she forages at night on the ground outside the nest, always returning to the underground nest during daylight hours. But as soon as the nymphs moult for the first time, when they are about 5mm long, the mother abandons them to fend for themselves.
This is a critical stage in the young insect’s life, and my new study shows that the earlier the nymphs reach this free-roaming phase, the higher are their chances of survival. A delay of one month can be fatal, since they are vulnerable to disease, starvation and predation by birds and small mammals.
It appears that the fewer the late spring frosts, the earlier the nymphs reach this stage. In 2011, when nymphs moulted early, there were only eight ground frosts in April and May. In 2021, when adult numbers crashed, there were 32 late frosts – according to the Met Office, it was the frostiest April since records began.
So what next? Trends over the past four years at Marshalls Heath show adult numbers of between 31 and 47 earwigs per year. In 2025 so far, I have only caught seven adults and nine nymphs.
So there is no sign of any revival in these numbers. However, the hot dry summer this year is just what might favour the survival of these insects in the years to come. If we have a cool autumn and a dry spring, followed by similar conditions next year, I’m hoping to record larger numbers in 2027.
By the time the next US election takes place in 2028, millennial and gen Z voters – who already watch over six hours of media content a day – will make up the majority of the electorate. As gen alpha (people born between 2010 and 2024) also comes of voting age, social media platforms such as TikTok and Instagram or their future equivalents can play a role in political success – if political actors can capitalise on it.
Also, viral content spreads quickly, sometimes unpredictably, and across platforms that all behave differently. The algorithms behind viral spread are specific to each platform – and not transparent. This makes the impact of viral activity difficult to measure and hard to track. This presents a challenge to politicians and campaigns looking to capitalise on it.
My recently published research investigated this. I mapped and visualised the “Kamala IS brat” phenomenon as it moved across X, Instagram and TikTok in the run-up to the 2024 US election. The aim of the research was to investigate the anatomy of a viral movement: what made it spread on each platform, how long it lasted, and who was driving it.
I found that viral political content that emerges on X spreads through a mix of strategic communication and letting the audience do the rest. It often spreads to TikTok through catchy adaptations, and moves slightly slower on Instagram, where “explainer” content with images – often from a mix of everyday users and mainstream media outlets – keeps it visible.
Viral content moves between platforms, adapting to the environment of each as it is transformed into audio and visual forms. My research found that using audio was particularly powerful: turning quotes into soundbites and superimposing dance trends onto political backgrounds made for hugely shareable combinations, and the more surreal, the better.
Most people think that going viral is short-lived, but this study – and other research – has found that digital content has a “long tail”: it pops up, resurges and re-emerges, days, weeks, or even months later, offering new chances to reconnect with audiences.
This was particularly apparent on X, where content was re-used and re-contextualised in satirical and humorous ways. This wasn’t always positive. In the data I analysed, Republican supporters used the phrase “Kamala IS brat” to try to flip the narrative into something negative, but this likely increased visibility, as views are driven by influential public figures and content shared by meme accounts.
Kamala Harris used social media in her 2024 campaign, but she didn’t win.
For politicians, this potential for re-emergence means that successful social media engagement is not just about strategic planning, it’s more about understanding how audiences remix and repost content in ways that can be hard to predict.
It’s not about rigidly tailoring content to each platform either, but about adapting to their styles. Effective digital strategists work with, not for, their audience, and make the most of moments that can’t always be planned in advance. Canada’s prime ministerial candidate, Mark Carney, for instance, embraced the hashtag #elbowsupCanada during his successful 2025 campaign.
The research also found that posting the right type of content is important – and short-form content works best. Social media platforms use a mix of recommender and social algorithms that are politically intuitive. A high number of followers can still help to increase visibility, but getting the content right can extend viral reach, regardless of how many followers an account has.
Donald Trump regularly posts his decisions on the social media network Truth Social.
TikTok’s algorithm in particular is set up for exploration, and Instagram’s Threads already pushes political content to users, not necessarily from accounts that they follow. Research suggests that users of any platform expect to see political content, whether they’re looking for it or not.
Given the potential for viral activity to reach a huge – and increasingly politically significant – audience, the challenge remains for political actors to turn social media engagement into electoral gain.
Many are trying, with varying levels of success. Harris’s digital-first strategy took an innovative approach – giving creative licence to a rapid response team of 25-year-olds. The digital campaign itself was considered a blueprint for PR success, but it ultimately failed to translate into votes. This was probably because it wasn’t accompanied by clear, concise messaging.
Other political hopefuls, such as Arizonan activist Deja Foxx and Democratic mayoral candidate Zohran Mamdani, are also capitalising on social media engagement. While Foxx recently lost in her bid to become the first gen Z woman to be elected to Congress, her approach, based on catchy content and influencer tactics, turned a long-shot candidacy into a very competitive campaign.
Mamdani has had more tangible success. His effective use of social media visuals, and multilingual engagement expanded his reach, and were credited with helping him win New York City’s Democratic mayoral primary in June.
So, if politicians can get it right, there is growing evidence that capitalising on going viral can influence political success.
Social media won’t win an election on its own, but looking ahead to 2028, it’s increasingly likely to be a part of a winning campaign. Young voters are far from a monolith, but what they do have in common is where they spend their time: on social media. TikTok remains the fastest-growing platform among this age group. Far from just providing entertainment, many use it to get their news, and engage in politics. Campaigns can’t afford to ignore it.
Emma Connolly does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Nicolas Forsans, Professor of Management and Co-director of the Centre for Latin American & Caribbean Studies, University of Essex
A protester holds a sign reading ‘it’s not community if you displace us’ during a demonstration against gentrification in Mexico City.Octavio Hoyos / Shutterstock
When thousands of residents took to the streets of Mexico City in July chanting “Gringo, go home”, news headlines were quick to blame digital nomads and expats. The story seemed simple: tech-savvy remote workers move in, rents go up and locals get priced out.
But that’s not the whole tale. While digital migration has undeniably accelerated housing pressures in Latin America, the forces driving resentment towards gentrification there run far deeper. The recent protests are symptoms of several structural issues that have long shaped inequality in the region’s cities.
Long before digital nomad visas became policy buzzwords after the pandemic, Latin America’s cities were changing at speed. In 1950, around 40% of the region’s population was urban. This figure had increased to 70% by 1990.
Nowadays, about 80% of people live in bustling cities, making Latin America the world’s most urbanised region. And by 2050, cities are expected to host 90% of the region’s population. Such rapid urbanisation has proved a magnet for international investors, tourists and, more recently, digital nomads.
In Latin America, gentrification has often involved large-scale redevelopment and high-rise construction, driven by state policies that prioritise economic growth and city branding over social inclusion.
Governments have re-branded entire working-class or marginalised areas as “innovation corridors” or “creative districts”, as in the La Boca neighbourhood of Buenos Aires, to attract investment. Neighbourhood re-branding has fostered resentment among locals and, in Buenos Aires, policies supporting self-managed social housing.
The introduction of integrated urban public transport systems has, while improving city access for marginalised communities, also triggered property speculation in once-isolated communities. In the Colombian city of Medellín, for instance, this has driven up prices and displaced long-time residents from hillside neighbourhoods like Comuna 13.
This is not an isolated case. A study from 2024 found that transport projects in Latin America are frequently leveraged by governments to attract private investment, effectively using mobility as a tool for urban restructuring rather than social equity.
The expansion of the public transportation system in Medellín has been associated with increased property values in once-isolated communities. Alexander Canas Arango / Shutterstock
Researchers call the urban development seen in Latin America “touristification”. This is a form of extractivism where – just like raw materials are removed from the earth for export – urban heritage, culture and everyday life are “mined” for economic value.
In the Barranco district of the Peruvian capital, Lima, heritage is marketed for tourism. But while the district’s bohemian and artistic identity has become a distinctive tourism asset, Barranco now faces challenges that threaten its sociocultural diversity and authenticity. The price of land there increased by 22% between 2014 and 2017, compared with just 4% in San Isidro, which is considered the wealthiest district of metropolitan Lima.
In Chile, the designation of Valparaíso’s historic quarter as a Unesco world heritage site in 2003 led to an increase in heritage-led tourism. Persistent outward migration of long-term residents from the city centre has led to a severe decline in the residential function of the world heritage area – and with it, the loss of vibrant local life.
Symptoms of deeper issues
Protests against high rents and displacement in Latin America are often framed as direct responses to gentrification. However, academic research and policy analysis suggest these protests are symptoms of much deeper structural issues.
Latin America is one of the most unequal regions in the world. Limited access to things like quality education and formal employment means many urban residents are already vulnerable before gentrification pressures begin. More than half of the current generation’s inequality in Latin America is inherited from the past, with estimates ranging from 44% to 63% depending on the country and measure used.
Cities in the region also have long histories of spatial and social segregation, with marginalised groups concentrated in under-resourced neighbourhoods. Gentrification often exacerbates this by pushing these populations further to the periphery.
Cartagena, a port city with roots in colonial-era divisions, is perhaps the starkest example of urban segregation in Colombia’s history. Spaniards and other Europeans lived in the fortified centre, while enslaved people were confined to poorer neighbourhoods like Getsemaní outside the walls.
Recently, urban planning decisions have been taken there that favour certain groups over others. Only the colonial legacy linked to Europeans has been protected. Getsemaní, once populated with slaves’ homes and workshops, is now home to luxury hotels, restaurants and housing.
The colourful streets of Getsemaní, a neighbourhood in Cartagena, Colombia. Nowaczyk / Shutterstock
Finally, a large share of Latin America’s urban workforce is employed informally. Informality, where workers lack job security and social protections, is not just an unfortunate effect of economic development. It is an integral part of how global capitalism and urbanisation unfold in Latin America.
It reflects the failure of state and market systems to meet the needs of the majority. Rising rents and living costs driven by gentrification disproportionately hurt those who have few resources to absorb such shocks.
Protesters in Mexico aren’t just angry at an influx of remote workers sipping flat whites. They are responding to decades of urban inequality, neglect and exclusion. What is emerging is a continent-wide battle over who gets to live in, profit from and shape the future of Latin American cities.
The region’s urban future doesn’t have to mirror its past. But getting there means moving beyond simplistic narratives about foreign renters or digital workers, and tackling the structural issues that have long shaped inequality in Latin America’s cities.
Nicolas Forsans does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – in French – By Bernard-Simon Leclerc, Academic coordinator of the professional doctorate in public health and lecturer in epidemiology, Université de Montréal
Epidemiology must evolve, but it is slow to reflect the current realities of public health. (Shutterstock)
Specialists in epidemiology and biostatistics are gathering in Montreal from August 11 to 13, 2025, for the biennial conference of the Canadian Society for Epidemiology and Biostatistics, to reflect collectively on the evolution of their disciplines and their role in improving the health and wellbeing of populations. It seems more necessary than ever to re-examine the dominant orientations of these sciences, which are often centred on biomedical and statistical models at the expense of the social dynamics that shape the determinants of health.
Epidemiology plays a key role in public health and medicine. Drawing on figures and calculations, it helps us understand and track the evolution of diseases and other health events. In this respect it maintains a close relationship with the medical sciences, which is of course necessary. One can nevertheless regret that it does not maintain an equally rich relationship with the humanities.
Contrary to popular belief, epidemiology is not limited to disease. It is also concerned with social problems, considered as things to be explained, and with social realities, seen as factors influencing health. A broader perspective would make it possible to better grasp the social and ethical dimensions of public health questions. Connecting epidemiology to the humanities would offer a more complete view of these issues.
In French, humanités corresponds to what English-speaking circles call the liberal arts. This includes philosophy, history, literature and the fine arts. These disciplines, sometimes called the human sciences, explore cultural and intellectual influences. This article examines their link with epidemiology and their contribution to a better understanding of health challenges.
As an epidemiologist, university lecturer and adviser at the Centre de recherche de l’Institut universitaire de gériatrie de Montréal, I regret the discipline’s drift away from its primary objective: human health. Epidemiology must evolve, but it is slow to reflect the current realities of public health. Even though the preponderance of social determinants in the health and wellbeing of individuals and communities is acknowledged, attention remains focused on individual causes. This limits a global, systemic view of health problems.
The biomedical and social approaches in epidemiology
Often perceived as a medical and statistical discipline, epidemiology also mobilises critical thinking and contextual analysis, like the humanities and social sciences. Two main approaches compete.
The biomedical model, centred on physiological and behavioural causes, focuses on genetic predispositions and individual risks. It tends, however, to neglect living conditions and social disparities.
Conversely, social epidemiology sees health as the result of collective determinants. It incorporates economic factors, housing, education and public policy. These elements directly influence life trajectories and health inequalities.
Although these factors are widely acknowledged as essential, epidemiology still struggles to move beyond the classic biomedical framework. The French sociologist Patrick Peretti-Watel illustrates this limitation. He notes that, statistically speaking, being African American is sometimes presented as a risk factor for suicide – which, obviously, makes no sense in itself.
Such an approach isolates the ethnic variable without taking systemic disadvantages into account. It thus leads to mistaken interpretations. These numerical correlations, often disconnected from social structures, can reinforce prejudice and mask real inequalities.
Although statistics are important, they do not capture the full picture. (Shutterstock)
This tendency to favour individual and biomedical explanations often goes hand in hand with an excessive reliance on quantitative methods. Epidemiologists and statisticians work closely together. But this closeness can sometimes create disciplinary confusion and a methodological obsession. By concentrating on statistical rigour, epidemiology risks losing sight of its concrete purposes. Some researchers prioritise optimising models over reflecting on populations’ real needs.
Epidemiology as a humanities discipline
In 1987, the American physician and epidemiologist David W. Fraser proposed considering epidemiology a “liberal art”. He highlighted its potential to move beyond the technicality of quantitative methods and open up to interdisciplinary perspectives.
This little-explored idea was later developed by three authors: Robin D. Gorsky, an American expert in health policy and applied epidemiology; Douglas L. Weed, an American physician and epidemiologist engaged with ethical questions in public health; and, more recently, Michael B. Bracken, an American epidemiologist, emeritus professor and former president of two major learned societies in epidemiology, who has argued for a more nuanced approach to the discipline.
This line of thinking aims to better understand the complexity of health phenomena. Beyond statistics and models, epidemiology also mobilises critical thinking and contextual analysis. Yet certain simplifications persist.
For example, as Peretti-Watel also points out, not using condoms is often seen as an individual choice. Yet it can be shaped by social and economic constraints. In some disadvantaged settings, women face pressure or violence that prevents them from insisting on their use. A strictly biomedical reading omits these realities. It risks neglecting the impact of social inequalities on health.
For a pedagogy of epidemiology rooted in the humanities
Introducing epidemiology earlier in college and university curricula would help citizens better interpret medical and scientific information.
Rethinking epidemiology’s place in the university, and its relationship with the humanities, is essential. (Shutterstock)
Epidemiology rests on rigorous reasoning, but its basic concepts remain accessible to non-specialists. Better taught, it would strengthen critical thinking. It would help people spot popular beliefs and pseudoscientific claims based on contested principles. The controversies around vaccines and supposed “miracle cures” such as homeopathy, for example, illustrate the dangers of scientific illiteracy.
A wider integration of epidemiology into educational programmes would encourage a more informed vision of public health. It would make it possible to analyse health policies rationally. It would help people understand the social determinants of health, beyond purely biomedical interpretations.
The humanities in the service of epidemiology
To meet the contemporary challenges of public health, epidemiology must broaden its foundations. It benefits from being conceived as a discipline that incorporates the human sciences. A more interdisciplinary approach would allow a better understanding of the social and structural determinants of health, beyond classic biomedical analyses. Academic institutions have a key role to play in this transformation. By integrating the humanities into epidemiology training, they would encourage a more global and critical approach to health questions.
This approach would open the way to a public health better adapted to the social and political realities of our societies. Teaching epidemiology enriched by philosophy, history and sociology would foster more informed and equitable decision-making. By placing critical reflection at the heart of public health, this orientation would contribute to a more humane and inclusive vision of care and health policy.
Epidemiology is not limited to figures and statistical models. It sheds light on issues that directly affect populations. By rethinking its teaching and its foundations, it could play an even more essential role in improving global health.
Some initiatives
With this in mind, the teaching of epidemiology can draw on a movement under way in several western institutions.
The Université de Montréal is one example, reflecting on the place of the arts and humanities in medical training. Several credible organisations support this approach.
These recommendations confirm the added value of the humanities in developing key skills for a more humane and effective medical practice. To make this vision a reality, training programmes must be rethought.
An epidemiology programme embedded in a school of public health should reflect a genuine public health culture. This would be founded on an interdisciplinary and intersectoral framework, a global vision of health and consideration of the social determinants. It would also mean rebalancing the emphasis on clinical and biomedical epidemiology in favour of a more social and population-based perspective.
Bernard-Simon Leclerc does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.
Australia will recognise a Palestinian state at the UN General Assembly meeting in September, joining the United Kingdom, Canada and France in taking the historic step.
Recognising a Palestinian state is at one level symbolic – it signals a growing global consensus behind the rights of Palestinians to have their own state. In the short term, it won’t impact the situation on the ground in Gaza.
Practically speaking, the formation of a future Palestinian state consisting of the West Bank, Gaza Strip and East Jerusalem is far more difficult to achieve.
The Israeli government has ruled out a two-state solution and reacted with fury to the moves by the four G20 members to recognise Palestine. Israeli Prime Minister Benjamin Netanyahu called the decision “shameful”.
So, what are the political issues that need to be resolved before a Palestinian state becomes a reality? And what is the point of recognition if it doesn’t overcome these seemingly intractable obstacles?
Settlements have exploded
The first problem is what to do about Israeli settlements in the West Bank and East Jerusalem, which the International Court of Justice has declared are illegal.
Palestinians see East Jerusalem as an indispensable part of any future state. They will never countenance a state without it as their capital.
In May, the Israeli government announced it would also build 22 new settlements in the West Bank and East Jerusalem – the largest settler expansion in decades. Defence Minister Israel Katz described this as a “strategic move that prevents the establishment of a Palestinian state that would endanger Israel”.
Second is the issue of a future border between a Palestinian state and Israel.
The demarcations of the Gaza Strip, West Bank and East Jerusalem are not internationally recognised borders. Rather, they are the ceasefire lines, known as the “Green Line”, from the 1948 War that saw the creation of Israel.
However, in the Six-Day War of 1967, Israel captured and occupied the West Bank, Gaza, East Jerusalem, Egypt’s Sinai Peninsula (since returned), and Syria’s Golan Heights. And successive Israeli governments have used the construction of settlements in the occupied territories, alongside expansive infrastructure, to create new “facts on the ground”.
Israel solidifies its hold on this territory by designating it as “state land”, meaning it no longer recognises Palestinian ownership, further inhibiting the possibility of a future Palestinian state.
For example, according to research by Israeli professor Neve Gordon, Jerusalem’s municipal boundaries covered approximately seven square kilometres before 1967. Since then, Israeli settlement construction has expanded its eastern boundaries, so it now covers about 70 square km.
Israel also uses its Separation Wall or Barrier, which runs for around 700km through the West Bank and East Jerusalem, to further expropriate Palestinian territory.
According to a 2013 book by researchers Ariella Azoulay and Adi Ophir, the wall is part of the Israeli government’s policy of cleansing Israeli space of any Palestinian presence. It breaks up contiguous Palestinian urban and rural spaces, cutting off some 150 Palestinian communities from their farmland and pastureland.
The barrier is reinforced by other methods of separation, such as checkpoints, earth mounds, roadblocks, trenches, road gates and barriers, and earth walls.
Then there is the complex geography of Israel’s occupation in the West Bank.
Under the Oslo Accords of the 1990s, the West Bank was divided into three areas, labelled Area A, Area B and Area C.
In Area A, which consists of 18% of the West Bank, the Palestinian Authority exercises majority control. Area B is under joint Israeli-Palestinian authority. Area C, which comprises 60% of the West Bank, is under full Israeli control.
Administrative control was meant to be gradually transferred to Palestinian control under the Oslo Accords, but this never happened.
Areas A and B are today separated into many small divisions that remain isolated from one another due to Israeli control over Area C. This deliberate ghettoisation creates separate rules, laws and norms in the West Bank that are intended to prevent freedom of movement between the Palestinian zones and inhibit the realisation of a Palestinian state.
Who will govern a future state?
Finally, there are the conditions that Western governments have placed on recognition of a Palestinian state, which rob Palestinians of their agency.
Chief among these is the stipulation that Hamas will not play a role in the governance of a future Palestinian state. This has been backed by the Arab League, which has also called for Hamas to disarm and relinquish power in Gaza.
Fatah and Hamas are currently the only two movements in Palestinian politics capable of forming a government. In a May poll, 32% of respondents in both Gaza and the West Bank said they preferred Hamas, compared with 21% support for Fatah. One-third did not support either or had no opinion.
Mahmoud Abbas, leader of the Palestinian Authority, is deeply unpopular, with 80% of Palestinians wanting him to resign.
A “reformed” Palestinian Authority is the West’s preferred option to govern a future Palestinian state. But if Western powers deny Palestinians the opportunity to elect a government of their choosing by dictating who can participate, the new government would likely be seen as illegitimate.
This risks repeating the mistakes of Western attempts to install governments of their choosing in Iraq and Afghanistan. It also plays into the hands of Hamas hardliners, who mistrust democracy and see it as a tool to impose puppet governments in Palestine, as well as Israel’s narrative that Palestinians are incapable of governing themselves.
Redressing these issues and the myriad others will take time, money and considerable effort. The question is, how much political capital are the leaders of France, the UK, Canada and Australia (and others) willing to expend to ensure their recognition of Palestine results in an actual state?
What if Israel refuses to dismantle its settlements and Separation Wall, and moves ahead with annexing the West Bank? What are these Western leaders willing or able to do? In the past, they have been unwilling to do more than issue strongly worded statements in the face of Israeli refusals to advance the two-state solution.
Given these doubts about the political will and actual power of Western states to compel Israel to agree to the two-state solution, the question arises: what, and whom, is recognition for?
Martin Kear does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – (in Spanish) – By Robert J. Gordon, Emeritus Professor, University of Vermont and Research Associate, University of the Free State
The genocide of Namibia’s Ovaherero and Nama peoples at the hands of German colonial forces (1904-1907) is widely documented. Far less is said about what came afterwards: the genocide of the country’s Bushmen, also known as the San.
In 1992, anthropologist Robert J. Gordon published a book, recently reissued, on the myth of the Bushman and the making of a marginalised class in Namibia.
The word “Bushman” is used as an umbrella term covering more than 200 ethnic groups. There is no typical Bushman; rather, they form a miscellany of fluid groups. Many local communities prefer the term Bushman to the official categorisations of “San” and “marginalised”. In fact, the term “San”, from the Khoekhoegowab language, means the same as Bushman.
Broadly speaking, they are sociable people with a strong ethos of sharing. Before colonisation, Bushmen lived as hunter-gatherers, ranging across the landscape. They had a different concept of property and wanted neither money nor cattle; they were uncontrollable, and for that reason they were treated like animals and subjected to annihilation.
Panic over the ‘Bushman plague’
Present-day Namibia became a German colony in 1884, named German South West Africa. In the wake of the genocidal Herero-Nama war of 1904-1907, Germany was able to press ahead with settlement.
The north-eastern arc of the territory, stretching from Otavi to Gobabis with Grootfontein at its epicentre, acted as a magnet for settlers: a newly completed railway line, mines, vast agricultural potential and accessible land. In Grootfontein alone, the number of settler farms grew from 15 in 1903 to 175 in 1913. Almost all of these cattle farms stood on land occupied by Bushmen.
The settlers soon found themselves in difficulty. In 1911, headlines in the Namibian press spoke of a “Bushman plague”. Two things fuelled the panic. First, the murder of a policeman and several white farmers. Second, Bushman activities were supposedly obstructing the flow of much-needed contracted migrant workers from the Owambo and Kavango regions to the newly discovered diamond fields of Luderitzbucht. The Chamber of Mines wanted the area “cleaned up”.
As a result, the German governor ordered that Bushmen be shot [if they were believed to be resisting arrest by officials or settlers]. Between 1911 and 1913, more than 400 anti-Bushman patrols were deployed, covering some 60,000 km².
But settlers and authorities judged these measures insufficient and went on terrorising the Bushmen without receiving so much as a reprimand. The “Bushman hunts” continued until South Africa took over the territory in 1915, when the country was renamed South West Africa.
We do not know how many Bushmen died, but, as I explain in my book, official estimates put the Bushman population in 1913 at between 8,000 and 12,000. By 1923 there were 3,600. That gives a sense of the scale of the killing.
What fuelled the genocide was the settler ethos. The dominant mindset was one of siege, of feeling threatened by unpredictable outside forces. The farmers, drawn by generous government aid and subsidies, were mostly discharged soldiers, poorly trained in agriculture, lacking essential local knowledge and schooled in racist arrogance. The situation bred insecurity, fear and hypermasculinity.
The Bushmen, with their famed skill at camouflage and tracking, hunting with poisoned arrows for which no antidote was known, personified the settlers’ worst nightmare as they tried to assert dominance on their isolated farms. Regarded as a kind of predatory quarry, the Bushmen were to be exterminated as a group: in other words, a genocide.
What happened after the genocide
Repression continued under South African rule from 1915 until independence in 1990, though in less extreme form. Possession of Bushman bows and arrows was outlawed. The Bushmen were progressively stripped of their territory to make way for game reserves and settler farms.
As late as the 1970s, the administration was still contemplating relocating 30,000 Bushmen to an artificially created “Bushmanland” amounting to 2% of the territory they had previously occupied.
The great majority remained in their traditional areas, now under the control of settler farmers, where they sank into serfdom. With Namibian independence, the situation worsened. New labour laws established a minimum wage, which made it unprofitable to keep Bushman workers. Many farmers switched to big-game hunting or sold their land to black farmers who preferred to hire their own relatives.
As a result, the Bushmen were forced to move to communal areas or to informal settlements on the outskirts of towns, where they scrape by in precarious conditions.
Where is this population today?
Today, Bushmen live in varying conditions of servitude, doing mostly low-skilled work in the northern and north-eastern regions where they were once the ancestral inhabitants. The government is trying to help them, chiefly through social grants and a few overcrowded resettlement farms.
Search for “Namibian Bushmen” online and countless idealised images appear of Bushmen in traditional dress, hunting and tracking. These narratives, largely the product of tourism promotion, reinforce the myth of the “pure” Bushman. The history of genocide and servitude is entirely erased.
Robert J. Gordon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Pollinators play a vital role in fertilising flowers, which grow into seeds and fruits and underpin our agriculture. But climate change can cause a mismatch between plants and their pollinators, affecting where they live and what time of year they’re active. This has happened before.
When Earth went through rapid global warming 56 million years ago, plants from dry tropical areas expanded to new areas – and so did their animal pollinators. Our new study, published in Paleobiology today, shows this major change happened in a remarkably short timespan of just thousands of years.
Can we turn to the past to learn more about how interactions between plants and pollinators changed during climate change? That’s what we set out to learn.
A major warming event 56 million years ago
In the last 150 years, humans have raised atmospheric carbon dioxide concentrations by more than 40%. This increase in carbon dioxide has already warmed the planet by more than 1.3°C.
Current greenhouse gas concentrations and global temperature are not only unprecedented in human history but exceed anything known in the last 2.5 million years.
To understand how giant carbon emission events like ours could affect climate and life on Earth, we’ve had to go deeper into our planet’s history.
Fifty-six million years ago there was a major, sudden warming event caused by the release of a gigantic amount of carbon into the atmosphere and ocean. This event is known as the Paleocene-Eocene Thermal Maximum.
For about 5,000 years, huge amounts of carbon entered the atmosphere, likely from a combination of volcanic activity and methane release from ocean sediments. This caused Earth’s global temperature to rise by about 6°C and it stayed elevated for more than 100,000 years.
Although the initial carbon release and climate change were perhaps ten times slower than what’s happening today, they had enormous effects on Earth.
Earlier studies have shown plants and animals changed a lot during this time, especially through major shifts in where they lived. We wanted to know if pollination might also have changed during this rapid climate change.
Paleobotanist Scott Wing, palynologist Vera Korasidis and colleagues searching for new pollen samples in Wyoming from 56 million-year-old rocks. Richard Barclay
Hunting for pollen fossils in the badlands
We looked at fossil pollen from the Bighorn Basin, Wyoming – a deep and wide valley in the northern Rocky Mountains in the United States, full of sedimentary rocks deposited 50 to 60 million years ago.
The widespread badlands of the modern Bighorn Basin expose remarkably fossil-rich sediments. These were laid down by ancient rivers eroding the surrounding mountains.
We studied fossil pollen because we wanted to understand changes in pollination. Pollen is invaluable for this because it is abundant, widely dispersed in air and water, and resistant to decay – easily preserved in ancient rocks.
We used three lines of evidence to investigate pollination in the fossil record:
fossil pollen preserved in clumps
how living plants related to the fossils are pollinated today, and
the total variety of pollen shapes.
56 million-year-old fossil pollen clumps collected from Wyoming and photographed on the National Museum of Natural History’s scanning electron microscope. Vera Korasidis
What did we discover?
Our findings show pollination by animals became more common during this interval of elevated temperature and carbon dioxide. Meanwhile, pollination by wind decreased.
The wind-pollinated plants included many related to deciduous broad-leaved trees still common in moist northern hemisphere temperate regions today.
By contrast, the plants pollinated by animals were related to subtropical palms, silk-cotton trees and other plants that typically grow in dry tropical climates.
The decline in wind pollination was likely due to the local extinction of populations of wind-pollinated plants that grew in the Bighorn Basin.
The increase in animal-pollinated plants shows that plants from regions with warmer, drier climates spread poleward and moved into the Bighorn Basin.
Earlier studies have shown these changes in the plants of the Bighorn Basin were related to the climate being hotter and more seasonally dry than before – or after – this interval of rapid climate change.
Pollinating insects and other animals likely moved 56 million years ago along with the plants they pollinated. Their presence in the landscape helped new plant communities establish in the hot, dry climate. It may have provided invaluable resources to animals such as the earliest primates, small marsupials, and other small mammals.
A lesson for our future
What lessons does this ancient climate change event have to offer when we think about our own future?
The large carbon release at the beginning of the Paleocene-Eocene Thermal Maximum clearly resulted in major global warming. It dramatically altered ecosystems on land and in the sea.
In spite of these dramatic changes, most land species and ecological interactions seem to have survived. This is likely because the event occurred at about one-tenth the rate of current anthropogenic climate change.
The forests that returned to the region after more than 100,000 years of hot, dry climate were very similar to those that existed before. This suggests that in the absence of major extinction, forest ecosystems and their pollinators could reestablish into very similar communities even after a very long period of altered climate.
The key for the future may be keeping rates of environmental change slow enough to avoid extinctions.
Vera Korasidis received funding from the University of Melbourne Elizabeth and Vernon Puzey Fellowship Award.
Scott Wing’s fieldwork was supported by the Roland W. Brown fund of the Department of Paleobiology, and by the MacMillan Fund of the National Museum of Natural History.
Last month, the American non-profit organisation behind Wikipedia issued draft guidelines for researchers studying how neutral Wikipedia really is. But instead of supporting open inquiry, the guidelines reveal just how unaware the Wikimedia Foundation is of its own influence.
These new rules tell researchers – some based in universities, some at non-profit organisations or elsewhere – not just how to study Wikipedia’s neutrality, but what they should study and how to interpret their results. That’s a worrying move.
As someone who has researched Wikipedia for more than 15 years – and served on the Wikimedia Foundation’s own Advisory Board before that – I’m concerned these guidelines could discourage truly independent research into one of the world’s most powerful repositories of knowledge.
Telling researchers what to do
The new guidelines come at a time when Wikipedia is under pressure.
Tech billionaire Elon Musk, who was until recently also a senior adviser to US President Donald Trump, has repeatedly accused Wikipedia of being biased against American conservatives. On X (formerly Twitter), he told users to “stop donating to Wokepedia”.
In another case, a conservative think tank in the United States was caught planning to “target” Wikipedia volunteers it claimed were pushing antisemitic content.
Until now, the Wikimedia Foundation has mostly avoided interfering in how people research or write about the platform. It has limited its guidance to issues such as privacy and ethics, and has stayed out of the editorial decisions made by Wikipedia’s global community of volunteers.
But that’s changing.
In March this year, the foundation established a working group to standardise Wikipedia’s famous “neutral point of view” policies across all 342 versions in different languages. And now the foundation has chosen to involve itself directly in research.
Its “guidance” directly instructs researchers on both how to carry out neutrality research and how to interpret it. It also defines what it believes are open and closed research questions for people studying Wikipedia.
In universities, researchers are already guided by rules set by their institutions and fields. So why do the new guidelines matter?
Because the Wikimedia Foundation has lots of control over research on Wikipedia. It decides who it will work with, who gets funding, whose work to promote, and who gets access to internal data. That means it can quietly influence which research gets done – and which doesn’t.
Now the foundation is setting the terms for how neutrality should be studied.
What’s not neutral about the new guidelines
The guidelines fall short in at least three ways.
1. They assume Wikipedia’s definition of neutrality is the only valid one. The rules of English Wikipedia say neutrality can be achieved when an article fairly and proportionally represents all significant viewpoints published by reliable sources.
But researchers such as Nathaniel Tkacz have shown this idea isn’t perfect or universal. There are always different ways to represent a topic. What constitutes a “reliable source”, for example, is often up for debate. So too is what constitutes consensus in those sources.
2. They treat ongoing debates about neutrality as settled. The guidelines say some factors – such as which language Wikipedia is written in, or the type of article – are the main things shaping neutrality. They even claim Wikipedia gets more neutral over time.
But this view of steady improvement doesn’t hold up. Articles can become less neutral, especially when they become the focus of political fights or coordinated attacks. For example, the Gamergate controversy and nationalist editing have both created serious problems with neutrality.
The guidelines also leave out important factors such as politics, culture, and state influence.
3. They restrict where researchers should direct their research. The guidelines say researchers must share results with the Wikipedia community and “communicate in ways that strengthen Wikipedia”. Any criticism should come with suggestions for improvement.
That’s a narrow view of what research should be. In our wikihistories project, for example, we focus on educating the public about bias in the Australian context. We support editors who want to improve the site, but we believe researchers should be free to share their findings with the public, even if they are uncomfortable.
Neutrality is in the spotlight
Most of Wikipedia’s critics aren’t pushing for better neutrality. They just don’t like what Wikipedia says.
The reason Wikipedia has become a target is because it is so powerful. Its content shapes search engines, AI chatbot answers, and educational materials.
The Wikimedia Foundation may see independent and critical research as a threat. But in fact, this research is an important part of keeping Wikipedia honest and effective.
Critical research can show where Wikipedians strive to be neutral but don’t quite succeed. It doesn’t require de-funding Wikipedia or hunting down its editors. It doesn’t mean there aren’t better and worse ways of representing reality.
Nor does it mean we should discard objectivity or neutrality as ideals. Instead, it means understanding that neutrality isn’t automatic or perfect.
Neutrality is something to be worked towards. That work should involve more transparency and self-awareness, not less – and it must leave space for independent voices.
Heather Ford receives funding from the Australian Research Council. She was previously a member of the Wikimedia Foundation Advisory Board.
However, a common obstacle to addressing bullying is that parents and schools often disagree about whether a particular situation constitutes bullying.
A study in Norwegian schools found that when parents think their child is being bullied, around two-thirds of the time, the school does not agree. There are also cases in which the school says a child is bullying others, but the child’s parents don’t agree.
Why is it so complicated? How can parents approach this situation?
What does ‘bullying’ mean?
When we look at the definition of bullying, it is not surprising disagreements occur. Identifying bullying is not clear-cut.
After a report of bullying, what does the school do?
When a student or parent reports bullying, usually the first thing a school does is talk with students, teachers and parents, and observe interactions between students.
However, there are many challenges in working out whether behaviour is bullying.
First, bullying often occurs when adults are not around and students often don’t tell teachers, so direct observation is not always possible.
Second, even if a teacher is present, social forms of bullying can be very subtle, such as turning away to exclude someone, or using a mocking facial expression, so it can be easily overlooked.
Third, determining whether there is “intent to harm” can be difficult as students accused of bullying may claim (rightly or wrongly) they were “only joking” or not intending to hurt or upset.
Fourth, the issue of power is not easy to determine. If the student is older or physically bigger, or if multiple students are involved in bullying, a power difference may seem apparent. But when power is based on popularity, a power difference may not be clear. There are also cases in which students may deliberately accuse others of bullying to get them into trouble (which may in itself constitute bullying).
Finally, not all aggressive behaviour is bullying. For example, conflict that involves arguments or fights between equals is not bullying, as there is no power imbalance. However, this situation can still be upsetting.
A more difficult situation occurs when the victim of bullying reacts aggressively – such as when they lash out angrily to taunts. The aggressive response of the victim may be more visible to teachers than the bullying that provoked the outburst, and this can make the direction of bullying difficult for schools to ascertain.
What if the school and parents disagree?
A school may not prioritise limited resources to resolve cases they do not see as bullying. This can leave the student languishing and can be very distressing for families.
However, research shows parents’ reports that their child has been bullied predict an increased risk of later child anxiety and depression, regardless of whether school staff concur or were even asked if the child was bullied.
So whether or not the school initially agrees a child is being bullied, it is important to improve the situation.
What can be done?
Sometimes, by taking steps to address the situation, the school can find out if bullying is occurring.
For example, sometimes children are upset by behaviours that may seem innocuous – such as humming, tapping or standing close. If this behaviour is not intended to hurt, we would expect children to reduce this when made aware it is upsetting. However, if the behaviour increases or continues, even with reminders, there would be more reason to believe it is deliberately intended to provoke (and is bullying).
One helpful strategy for parents is to keep a careful record of the child’s experiences – exactly what the child experiences and how it impacts them. This can help establish a pattern of hurtful behaviours over time.
It’s important for parents to maintain a good relationship and ongoing communication with the school (however difficult). As bullying can be a complex and evolving issue, good communication can help ensure issues are promptly managed.
The parent can coach the child to manage the situation – for example, to ask in a friendly and confident way for other students to stop when they are doing things they don’t like. The parent can also help the child plan when they would ask a teacher for help.
By working together, and understanding the problem better over time, schools and families can address behaviour that is hurtful – whether or not there is initial agreement it is “bullying”.
If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14 or Kids Helpline on 1800 55 1800.
Karyn Healy has received funding from QIMR Berghofer Medical Research Institute, the Australian Research Council and Australian government Emerging Priorities Program and is an honorary Principal Research Fellow with The University of Queensland. Karyn is a co-author of the Resilience Triple P parenting program. Resilience Triple P and all Triple P programs are owned by the University of Queensland. The university has licensed Triple P International Pty Ltd to publish and disseminate Triple P programs worldwide. Royalties stemming from published Triple P resources are distributed to the Parenting and Family Support Centre, School of Psychology, Faculty of Health and Behavioural Sciences and contributory authors. No author has any share or ownership in Triple P International Pty Ltd.
Source: The Conversation – (in Spanish) – By Miguel Ángel Sánchez de la Nieta Hernández, Lecturer (Profesor Contratado Doctor) in Journalism, Universidad Villanueva
In his essay El fracaso de la república de Weimar (The Failure of the Weimar Republic), the journalist and historian Volker Ullrich poses a question that is unsettling when viewed from today: how could German democracy collapse so easily in the 1930s?
His answer points not to a tragic destiny but to a series of avoidable decisions. “Nothing was written,” he insists. Had certain judges and politicians – I would add journalists – acted with greater clarity, Adolf Hitler might have been just one more of the many radical agitators there have been.
That said, these lines are not meant to feed pessimism. If anything has improved since the days of Weimar, it is democracies’ capacity to scrutinise power. Journalism in particular, as a counter-power, has more resources and is more effective than ever in defending the rule of law.
We are living – even if we do not always notice it – in a kind of golden age of informational traceability.
A politician who conceals assets, rigs tenders or channels irregular donations through shell foundations now risks being exposed not in years but in a matter of hours. Not by chance, but because there are journalists trained to cross-reference registries, trace transactions, automate searches and detect patterns that once remained hidden.
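The registry cross-referencing described above can be sketched in a few lines. This is a toy illustration only: the names, records and data structures are invented, and real investigations work against messy public registries rather than tidy in-memory lists.

```python
# Toy sketch of data-journalism cross-referencing: compare politicians'
# (invented) declarations of interest against an (invented) company
# registry and flag directorships that were never declared.

declared = {
    "A. Pérez": {"Acme Holdings"},   # declared one directorship
    "B. García": set(),              # declared nothing
}

company_registry = [
    {"company": "Acme Holdings", "director": "A. Pérez"},
    {"company": "Shell Foundation X", "director": "B. García"},
]

def undeclared_directorships(declared, registry):
    """Return (politician, company) pairs that appear in the registry
    but are missing from that politician's own declaration."""
    flags = []
    for record in registry:
        person = record["director"]
        if person in declared and record["company"] not in declared[person]:
            flags.append((person, record["company"]))
    return flags

print(undeclared_directorships(declared, company_registry))
# → [('B. García', 'Shell Foundation X')]
```

In practice the same join is run across far larger datasets (land registries, procurement records, leaked documents), but the principle is the one shown: discrepancies between what is declared and what is on record surface automatically.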
Heirs of precision journalism
This methodological shift is not new. In the 1970s, the American journalist Philip Meyer anticipated the profession’s evolution by proposing what he called precision journalism: incorporating the methods of the social sciences – statistics, surveys, databases – into journalistic work.
Today, any controversial decision, leaked document or misleading statement can circulate and be analysed in real time by experts, media outlets and users with access to analytical tools and open sources.
Scandal fatigue
But it is not all good news, because the effectiveness of these mechanisms also depends on the state of the citizenry. And here a subtle but growing risk appears: scandal saturation.
If everything is a scandal, nothing is. When corruption is presented as ubiquitous, when headlines about irregularities follow one another without pause in many countries, a paradoxical effect can set in: far from producing a more demanding citizenry, it breeds indifference, cynicism and demobilisation. This is what some studies call scandal fatigue.
Repetition, the acceleration of the news cycle, the absence of context and the loss of editorial hierarchy end up draining journalism of its role as a generator of public intelligence.
Today the problem is no longer only that the powerful lie, but that citizens resign themselves to it. Abundant information does not, by itself, guarantee a clearer understanding of reality. On the contrary: without professional mediation, without solid editorial criteria, without a narrative that orders and ranks, information turns into noise. And noise is fertile ground for democratic apathy.
So even as we celebrate advances in transparency and data journalism, we must watch for another kind of erosion: one that comes not from secrecy but from overexposure, turning denunciation into spectacle and scandal into routine.
Journalism must keep uncovering corruption, yes. But it also has a responsibility to take care of its form and its substance: to provide context, explain what is at stake, distinguish the anecdotal from the essential and resist the easy headline. Because, as Ullrich warned, everything can change for the worse in very little time. And if society is anaesthetised to lies, the next “charismatic demagogue” could find the door open.
The response from public journalism
There is an approach to journalism well suited to confronting this drift. Born of the insights of Jay Rosen and Davis Merritt in the 1990s, public journalism is not content with lifting the carpet; it wants citizens to look underneath it and decide to sweep.
Against scandal fatigue – that weariness which anaesthetises – it proposes stories that not only denounce but also convene. It replaces the repeated shout with fertile dialogue. It gives the community a voice and puts flesh on the data: it brings them down to earth and humanises them. Where apathy threatens, it cultivates shared responsibility.
In times of information overload and noise, this measured journalism rescues what matters most: meaning. It does not reduce truth to clicks or urgency to scandal. It bets on context, conversation and commitment. It wants engaged neighbours, not irritated spectators. Because without alert citizens, not even the best scoop changes anything. And without soul, not even the most thoroughly verified truth moves anyone.
Miguel Ángel Sánchez de la Nieta Hernández does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.