Source: The Conversation – UK – By Urban Wiesing, Professor of Ethics and History of Medicine, University of Tübingen
The US Food and Drug Administration recently convened a panel of experts to examine a sensitive and increasingly urgent question: should antidepressants be prescribed to women suffering from depression during pregnancy?
To the surprise of many in the American medical community, the panel included not only US-based experts but also three international voices known for their critical views on psychiatric medication. Their inclusion sparked immediate controversy and foreshadowed the disagreements to come.
At the heart of the debate is a long-standing assumption in American medical practice: while antidepressants may carry some risk to the unborn child, the dangers of leaving maternal depression untreated are usually greater. Yet this mainstream position was strongly challenged. A majority of the panel appeared unconvinced that the benefits of antidepressant use in pregnancy clearly outweigh the potential risks.
As the discussion unfolded, fundamental questions remained unresolved. What exactly are the risks to the unborn child? The panel offered different answers.
How substantial are the benefits to a pregnant woman? Some experts questioned whether antidepressants deliver meaningful help in these circumstances at all. And without clarity on these points, how can the risk-benefit ratio be reliably assessed?
It’s a familiar scenario in science: experts looking at the same data but drawing different conclusions – not only about the facts, but about how to interpret them. In this case, the division seemed to reflect deeper cultural and philosophical differences in how various countries approach mental health care during pregnancy.
The outcome of the panel’s deliberations reflected that divide, with no consensus reached.
To some extent, the conflict was embedded in the very design of the panel. When those with sharply opposing views are brought together without agreement on the evidence base, gridlock is a likely result. Still, the impasse underlines the need for more independent, high-quality research on the effects of antidepressants during pregnancy – research that can inform not only regulators but also doctors and patients.
Complicating matters further is the political climate. The current US health secretary – Robert F. Kennedy Jr. – has, critics argue, an uneasy relationship with scientific consensus, which makes trust in the process all the more fragile.
A warning label is not a substitute for a conversation
Still, the panel produced one tangible suggestion: a proposal from around half of its members to place a so-called “black box” warning on antidepressant packaging, alerting pregnant women to potential risks to the unborn child. Such warnings are typically reserved for the most serious medical concerns. But is this really the right approach?
A comparison often made is to cigarette packaging. But this analogy quickly breaks down. Cigarettes are freely bought; antidepressants are prescribed following a medical consultation. To issue a blunt warning on a medicine that has already been deemed appropriate by a doctor risks undermining the doctor–patient relationship.
If stronger warnings are needed, the real problem may lie in the consultation process itself, not in the packaging.
Pregnancy presents a unique ethical dilemma. The unborn child cannot give consent, and damage sustained in the womb can result in lifelong consequences. At the same time, untreated depression in a pregnant woman carries serious risks of its own – for both mother and child. This is a classic medical conflict, with no easy solution.
And while US law gives pregnant women the right to make such decisions – albeit with variation across states – it doesn’t solve the underlying uncertainty. That must be navigated through informed, respectful dialogue between doctor and patient, not by resorting to fear-inducing labels.
Ultimately, every case is personal. Every decision must take into account the individual’s mental health, support system, risk tolerance and values. What’s needed is thoughtful communication, prudent prescribing and careful balancing of benefit and harm. In short: good medicine.
What’s not needed is to heap more guilt on women already grappling with depression. If scientists and policymakers cannot agree, pregnant women should not bear the burden of that confusion. They deserve support, not stigma.
Urban Wiesing does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
There was a time, just a couple of decades ago, when researchers in psychology and health always had to engage with people face-to-face or by telephone. The worst-case scenario was sending questionnaire packs to postal addresses and waiting for handwritten replies.
So we either literally met our participants, or we had multiple corroborating points of evidence that indicated we were dealing with a real person who was, therefore, likely to be telling us the truth about themselves.
Since then, technology has done what it always does: created opportunities to cut costs, save time and access wider pools of participants on the internet. But what many have failed to fully appreciate is that internet research brings with it risks of data corruption and impersonation – some of it deliberate – that can put research projects in jeopardy.
What enthused scientists most about internet research was the new capability to reach people we might not normally be able to involve in research. For example, as more people could afford to go online, poorer people became able to participate, as did those from rural communities who might be many hours and multiple forms of transport away from our laboratories.
Technology then leapt ahead, in a very short period of time. The democratisation of the internet opened it up to yet more and more people, and artificial intelligence grew in pervasiveness and technical capacity. So, where are we now?
As members of an international interest group looking at fraud in research (Fraud Analysis in Internet Research, or Fair), we’ve realised that it is now harder than ever to identify if someone is real. There are companies that scientists can pay to provide us with participants for internet research, and they in turn pay the participants.
While they do have checks and balances in place to reduce fraud, it’s probably impossible to eradicate it completely. Many people live in countries where the standard of living is low, but the internet is available. If they sign up to “work” for one of these companies, they can make a reasonable amount of money this way, possibly even more than they can in jobs involving hard labour and long hours in unsanitary or dangerous conditions.
In itself, this is not a problem. However, there will always be a temptation to maximise the number of studies they can participate in, and one way to do this is to pretend to be relevant to, and eligible for, a larger number of studies. Gaming the system is likely to be happening, and some of us have seen indirect evidence of this (people with extraordinarily high numbers of concurrent illnesses, for example).
It’s not feasible (or ethical) to insist on seeing medical records, so we rely on trust: that a person with heart disease in one study is also eligible for a cancer study because they also have cancer – in addition to anxiety, depression, blood disorders, migraines and so on. Or all of these. Short of requiring medical records, there is no easy way to exclude such people.
More insidiously, there will also be people who use other individuals to game the system, often against their will. We are only now starting to consider the possibility of this new form of slavery, the extent of which is largely unknown.
Enter the bots
Similarly, we are seeing the rise of bots that pretend to be participants, answering questions in increasingly sophisticated ways. A single coder can fabricate multiple identities, not only making a lot of money from studies but also seriously undermining the science we are trying to do (particularly concerning where studies are open to political influence).
It’s getting much more difficult to spot artificial intelligence. There was a time when written interview questions, for example, could not be completed by AI, but they now can.
It is only a matter of time before we find ourselves conducting and recording online interviews with a visual representation of a living, breathing individual who simply does not exist – created, for example, through deepfake technology.
We may be only a few years – if not months – away from such a profound deception. The British TV series The Capture might seem far-fetched to some, with its portrayal of real-time fake TV news, but anyone who has seen the current state of the art in AI can easily imagine us being just a short stretch away from its depictions of the “evils” of impersonation using perfect avatars scraped from real data. It is time to worry.
The only answer, for now, will be simply to conduct interviews face-to-face, in our offices or laboratories, with real people whom we can look in the eye and whose hands we can shake. We will have travelled right back in time to the point a few decades ago mentioned earlier.
With this comes a loss of one of the great things about the internet: it is a wonderful platform for democratising participation in research for people who might otherwise not have a voice, such as those who cannot travel because of a physical disability. It is dismaying to think that every fraudster is essentially stealing the voice of a real person whom we genuinely want in our studies. Indeed, previous research has found anywhere from 20% to 100% of survey responses to be fraudulent.
We must be suspicious going forward, when our natural propensity – as amenable people trying to serve humanity through our work – is to be trusting and open. This is the real tragedy of the situation we find ourselves in, over and above the corruption of the data that feed into our studies.
It also has ethical implications that we urgently need to consider. We do not, however, seem to have any choice but to “hope for the best but assume the worst”, building systems around our research whose sole purpose is to detect and remove false participation of one type or another.
The sad fact is that we are potentially going backwards by decades to rule out a relatively small proportion of false responses. Every “firewall” we erect around our studies is going to reduce fraud (although probably not entirely eliminate it), but at the cost of reducing the breadth of participation that we desperately want to see.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Every year, millions of students from all parts of the globe study for a degree through a language other than their first, usually English. In 2023, 25% of all higher education students in the UK were international students.
The understanding is that the incoming students will have, or develop, enough proficiency in English as a second language to study engineering, history, physics and other courses taught in English.
English-medium courses are also offered in countries where English is not the first language. In Sweden, where English has no official status, 66% of master’s programmes were taught through English in 2020. Universities in France primarily attract overseas students from Francophone Africa, to study in French, but they also offer courses taught in English.
If you’re planning on taking a degree taught in English and it’s not your first language, you already know that it will probably be more challenging than learning in your mother tongue would be.
Lecturers may also be uncomfortable helping students with English, and do not see themselves as language teachers, even though all students need to become familiar with the specific language used in the field they are studying.
Keep a list of key concepts and expressions related to the field you are studying as you come across them in your reading and lectures. Add translations into your strongest languages. Use a dictionary to get the exact meanings of words.
Do the assigned reading in good time. During your reading and lectures you can take well-structured notes in any or all of your languages. Use technology to support your reading, but be careful of mistakes made by automatic translation.
Research effective reading and note taking strategies. Use any study support your university offers. Practise writing in English regularly – free writing or copying out paragraphs from your set texts will develop your writing fluency.
Before lectures you may be able to access the lecturer’s slides. Make sure you understand them. Annotate them in your first language. Becoming familiar with course materials before a lecture or other activity can support learning by reducing the amount of new information you need to deal with in class.
If possible, arrange a study group with other students who share your first or another language. You each read the course literature and then discuss it together in the languages you choose, to make sure everyone is on board. If the lecturer has made summaries of the literature, or shares lecture slides, discuss them before or after lectures to make sure you have understood the main points.
Consider multilingual collaborative note taking with other students, so that you all can access and contribute to a shared document, possibly based on the lecture slides (but be aware that these notes cannot replace your independent classwork).
You may be reluctant to ask questions in class, but it is important that you are clear on what you are expected to do. Your question helps the lecturer see what is difficult, and others are probably wondering the same thing.
Plan and write a first draft of written work using any or all of your languages. This is called translanguaging – using all the language skills that you have at your disposal to think freely about your work. If you stick to what you can easily express in English you may limit your thinking.
You don’t need to do all your studying only in English. Use your linguistic resources to make the most of your opportunities.
Una Cunningham does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
It’s summer in the northern hemisphere and that means sun, sea – and wasps.
A lot of us have been taught to fear wasps as aggressive insects that exist only to make our lives a misery. But with unsustainable wildlife loss across the planet, we need to learn to live alongside all organisms – even wasps. They are important pollinators and predators of insects.
A little knowledge about their natural history can help you dine safely alongside wasps.
The wasps that visit your picnic are typically the common yellowjacket (Vespula vulgaris) or the German wasp (Vespula germanica). They seem to appear from nowhere. What should you do?
1. Stay still, or she’ll think you’re a predator
Her (all workers are female) smell receptors have got her to your picnic table, but she’s now using visual landmarks (you and your surroundings) to orientate her way to the food on your plate. Keep your mouth closed and avoid breathing heavily to minimise the release of carbon dioxide, which wasps use as a cue that a predator is attacking. Similarly, if you start flapping and shouting, you are behaving like a predator (mainly badgers in the UK), which might trigger the wasp’s attack mode.
2. Watch what she is eating
This is a worker wasp. She is looking for food to feed to her sibling larvae in her mother’s papery looking nest. Is she carving off a lump of ham, gathering a dollop of jam or slurping at your sugary drink? Watch what she is eating because this gives you a clue to what your wasp offering will be. She is so focused on her task that she won’t notice you watching.
3. Make a wasp-offering to keep her from bothering you
Before you know it, she’s off with jaws full of jam or a hunk of ham. She might zigzag away from your table – a sign that she is reorientating for a reliable return. Once landmarks are mapped, she will fly straight and fast. If you followed her, she would lead you to her nest. But you are better off using your time to prepare your wasp offering, because she’s going to come back soon. Your offering should be a portion of whatever she harvested from your plate. You can move it slightly away from the rest of your food. If you let her have her share, you too can dine in peace.
You can gradually move your wasp offering further away from you. Wasp offerings are well-tested techniques around the world, whether you’re looking to track down a wasp nest to eat, or keep customers unbothered by wasps at an outdoor restaurant.
Happily, your picnic friend is unlikely to bring a swarm of wasps to your table, because social wasps are poor recruiters. This makes sense because wasp food (insects, carrion) is usually a scattered, short-lived resource. One caterpillar doesn’t necessarily mean there’s a huge patch of them, for example.
This contrasts with honeybees, for which there has been strong natural selection for the evolution of a communication system (waggle dance) to recruit many foragers to a patch of flowers.
However, you might get a few wasps at your picnic, especially if the nest is close, just by chance. Wasps tend to be attracted to a forage source by the presence of other wasps. If she sees a few wasps gathered, then she will investigate. But if there are too many wasps, this puts her off.
Wasps’ changing feeding habits
You may already know that wasps go crazy for sugar at the end of the summer. But why do they prefer protein earlier in the season? It depends on what is going on inside the colony – and this changes with the season.
Wasp larvae are carnivorous. Together, the workers rear thousands of larvae. If your wasp wants ham (or some other protein source) at your picnic, you know her colony is full of hungry larvae. You might notice this in early-to-mid summer – and no later than mid-to-late August.
Enjoy the knowledge that you are helping feed armies of tiny pest controllers, who will soon set to work regulating populations of flies, caterpillars, aphids and spiders.
A defining feature of an adult wasp is the tiny petiole (wasp-waist). This constriction between her thorax and abdomen evolved so her ancestors could bend their abdomens, yoga-style, to parasitise or paralyse their prey.
The wasp-waist of an adult worker limits her to a largely liquid diet. She is like a waiter who must deliver feasts to customers without tasting it. The larvae tip her service with a nutritious liquid secretion, which she supplements with nectar from flowers. For much of the season, this is enough.
Towards the end of the summer, most wasp larvae have pupated – and a pupated larva doesn’t need feeding. So demand for protein foraging diminishes, as do the sweet secretions that have kept the workers nourished.
This means worker wasps must now visit flowers for nectar – although your jam scone or sweet lemonade may also be exceedingly tempting. If your wasp is fixated on sugar at your table, then you know her colony is likely to be in its twilight phase of life.
Although the time of year is a good indicator of the balance of ham-to-jam in a wasp’s foraging preferences, weather, prey availability, local competition and the rate of colony growth can influence them too. This means the switch from ham to jam may happen at a different time this year than next.
Blend science and a picnic
We’d like you to help us gather data on this, to improve predictions on whether to offer your wasps ham or jam. To take part, report here whether the wasp at your picnic wanted protein (such as chicken, hummus, beef or sausage), jam (or anything sugary, including sugary drinks), or both.
Seirian Sumner receives funding from the UK government’s Natural Environment Research Council (NERC) and the Biotechnology and Biological Sciences Research Council (BBSRC). She is a Trustee and Fellow of the Royal Entomological Society, and author of the book ‘Endless Forms: Why We Should Love Wasps’.
Source: The Conversation – Canada – By Lauren McNamara, Research Scientist (Diversity and Equity in Schools), Diversity Institute, Ted Rogers School of Management, Toronto Metropolitan University
The ministry mentions “new flexibility in the scheduling of recess and lunch — for example, schools may choose to offer one longer recess period in place of two shorter ones, while still providing a lunch break,” alongside the requirement of 300 daily minutes of instructional time.
As researchers who have long studied the links between school environments and children’s well-being, we know that reducing or restructuring recess time can negatively impact learning and development.
Cognitive science tells us that young children need regular breaks from focused academic work. These breaks reduce mental fatigue, improve concentration and help children return to class refreshed and ready to learn.
Simply switching from mathematics to reading isn’t enough. What’s needed are genuine pauses from cognitive effort, ideally involving unstructured play.
The power of play
Recess offers a chance for unstructured play, something children do freely and joyfully. Play isn’t just fun, it’s essential to healthy brain development. Whether they’re running, building, imagining or exploring, play activates the brain’s reward systems, releasing endorphins that enhance mood and reduce stress.
Play is so fundamental to healthy development that the United Nations Convention on the Rights of the Child has long deemed it a basic human right, and as a signatory, Canada is obligated to uphold this right.
It’s important to note that gym class or other structured physical activities don’t offer the same benefits. Children need time to follow their own interests, move at their own pace and interact freely with peers.
Movement, the outdoors
Kids aren’t meant to sit still all day. Recess gives them a chance to move, whether that’s running, jumping or just walking and stretching. Regular movement improves circulation, boosts energy, supports mental clarity and improves mood. Even short bursts of physical activity can help offset the long hours spent sitting in classrooms.
Time outside can have meaningful effects. Nature has a calming effect on the brain, reduces anxiety and helps with attention and emotional regulation. Green spaces and natural materials like trees, grass and fresh air offer benefits that indoor classrooms simply can’t replicate.
Socializing, mental wellness
To children, recess isn’t just a break, it’s a vital social time. It’s when they form friendships, practise conflict resolution and feel a sense of belonging. These connections support emotional development and make school a place where kids want to be.
Unfortunately, as schools focus more on maximizing instructional minutes, this social time can be undervalued. But connection and belonging are not side benefits — they are essential to academic motivation, engagement and overall student success.
Physical activity, outdoor time, free play and meaningful social interaction all work together to support mental health and overall well-being.
Recess creates space for laughter, joy, relaxation and calm. Students who feel emotionally safe, happy and supported are more likely to pay attention in class, co-operate with peers and persist through academic challenges. In summary, healthy children are better learners.
Schools are more than instruction
Schools are communities where children spend much of their waking lives. They are places not only of academic growth but also social, emotional and physical development.
Recess is a critical part of the school day and must be protected and well supported, not minimized.
Recommendations for recess
According to Physical and Health Education Canada’s National Position Paper on Recess, all students — from kindergarten through high school — should have regularly scheduled recess across the school day.
Children in kindergarten through Grade 2 should receive at least four 15-minute recesses daily, ideally outdoors. Children in grades 2 to 6 should have at least two 20-minute recesses, not including time spent putting on coats or lining up.
These are research-backed guidelines that support children’s full development. And, of course, the quality of recess matters, which is described further in the position paper.
The Ontario memo invites us all to revisit the role of recess in the school day. We must remember that time to play, move, connect and breathe is not a break from learning, it’s a vital part of learning.
Tracy Vaillancourt is affiliated with the Centre for International Governance Innovation.
Lauren McNamara does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Equity, diversity and inclusion (EDI) frameworks are meant to address the ongoing effects of historical and structural marginalization. Emerging from the four designated categories in Canada’s Employment Equity Act, EDI policies in Canadian universities tend to centre race, Indigeneity and gender, with limited attention to religious affiliation.
To understand this oversight, we conducted a content and discourse analysis of the most recent (at the time of the study) EDI policies and Canada Research Chair EDI documents from 28 Canadian universities.
Our sample included English-speaking research universities of more than 15,000 students and a few smaller universities to ensure regional representation.
We focused on how these documents referred to Jewish identity, antisemitism and related terms, as well as how they situated these within broader EDI discourses. We found that, in most cases, antisemitism and Jewish identity were either completely absent or mentioned only superficially.
Three patterns emerged from our analysis:
1. Antisemitism is marginalized as a systemic issue: Where it appears, antisemitism is generally folded into long lists of forms of discrimination, alongside racism, sexism, homophobia, Islamophobia and other “isms.” Unlike anti-Black racism or Indigenous-based racism, which often have dedicated sections and careful unpacking, antisemitism is rarely examined. While EDI policies can be performative, they still represent institutional commitment and orientation. Not specifically considering antisemitism renders it peripheral and unimportant, even though it remains a pressing issue on campuses.
2. Jewish identity is reduced to religion: When Jewishness is acknowledged in EDI frameworks, it is almost always under the category of religious affiliation, appearing as part of the demographic sections. This framing erases the ethnic and cultural dimensions of Jewish identity and peoplehood and disregards the ways in which many Canadian Jews understand themselves. The lack of understanding of Jewishness as an intersectional identity also erases the experiences of Jews of colour, LGBTQ+ Jews, and Mizrahi and Sephardi Jews.
While some Jews may identify as white, some do not, and even those who benefit from white privilege may still experience antisemitism and exclusion.
Zionism also presents a challenge for EDI: it sits in tension with (mis)conceptions of Jews as non-racialized people within anti-racism discourses.
3. Pairing antisemitism and Islamophobia: In the EDI policies we examined, antisemitism is rhetorically paired with Islamophobia: In nearly every case where antisemitism was mentioned, it was coupled with Islamophobia. This rhetorical symmetry may be driven by institutional anxiety over appearing biased or by attempts to balance political sensitivities. Yet it falsely implies that antisemitism and Islamophobia are similar or are inherently connected.
The erasure of antisemitism from EDI policies affects how Jewish students and faculty experience campus life. Jews may not be marginalized in the same way as other equity-seeking groups, yet they are still deserving of protection and inclusion.
The EDI principle of listening to lived experiences cannot be applied selectively. Jewish identity is complex, and framing it narrowly contributes to undercounting Jewish people in institutional data and EDI policies. Simplistic classifications erase differences, silence lived experiences and reinforce assimilation.
By failing to name and analyze Jewish identity and antisemitism, universities leave Jewish members of the academic community without appropriate mechanisms of support. The lack of EDI recognition reflects and reproduces the perceptions of Jews as powerful and privileged, resulting in a paradox: Jewish people are often treated as outside the bounds of EDI, even as antisemitism intensifies.
The question of Jewish connection to Israel or Zionism introduces another layer of complexity that most EDI policies avoid entirely. While criticism of Israeli state policies is not antisemitic, many Jews experience exclusion based on real or perceived Zionist identification. Universities cannot afford to ignore this dynamic, even when it proves uncomfortable or politically fraught.
What needs to change
If Canadian universities are to build truly inclusive campuses, then their EDI frameworks must evolve in both language and structure.
First, antisemitism must be recognized as a form of racism, not merely religious intolerance. This shift would reflect how antisemitism has historically operated and continues to manifest through racialized tropes, conspiracy theories and scapegoating.
Second, institutions must expand their data collection and demographic frameworks to reflect the full dimensions of Jewish identity: religious, ethnic and cultural. Without this inclusion, the understanding of Jewish identity will remain essentialized and unacknowledged.
Third, Jewish voices, including those of Jews of colour, LGBTQ+ Jews and Jews with diverse relationships to Zionism, must be included in EDI consultation processes. These perspectives are critical to understanding how antisemitism intersects with other forms of marginalization.
Fourth, the rhetorical pairing of antisemitism and Islamophobia, while perhaps intended to promote balance, should be replaced with a deep unpacking of both phenomena and their intersections.
Finally, universities must resist the urge to treat difficult conversations as too controversial to include. Complex dialogue should not be a barrier to equity work. The gaps we identified reveal how current EDI frameworks can exclude any group whose identities fall outside established categories.
In a time of polarization and disinformation, universities must model how to hold space for complexity and foster real inclusion.
Lilach Marom receives funding from the Ronald S. Roadburg Foundation.
Ania Switzer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
African cities are growing at an incredible pace. With this growth comes a mix of opportunity and challenge. How do we build cities that are not only smart but also fair, inclusive and resilient?
A smart city uses digital tools such as sensors, data networks and connected devices to run services more efficiently and respond to problems in real time. From traffic and electricity to public safety and waste removal, smart technologies aim to make life smoother, greener and more connected.
Ideally, they also help governments listen to and serve citizens better. But without community input, “smart” can end up ignoring the people it’s meant to help.
That’s why a different approach is gaining ground: one that starts not with tech companies or city officials, but with the residents themselves.
I’ve been exploring what this looks like in practice, in collaboration with Terence Fenn from the University of Johannesburg. We invited a group of Johannesburg residents to imagine their own future neighbourhoods, and how technology could support those changes.
Our research shows that when residents help shape the vision for a smart city, the outcomes are more relevant, inclusive and trusted.
Rethinking smart cities
Our research centred on Westbury, a dense, working-class neighbourhood west of central Johannesburg, South Africa. Originally designated for Coloured (multi-racial) residents under apartheid, Westbury remains shaped by spatial injustice, high unemployment and gang-related violence, challenges that continue to limit access to opportunity and basic services. Despite this, it is also a place of resilience, cultural pride and strong community ties.
We tested a method called Participatory Futures, which invites people to imagine and shape the future of their own communities. In Westbury, we worked with a group of 30 residents, selected through local networks to reflect a mix of ages, genders and life experiences. Participants took part in workshops where they mapped their neighbourhood, created stories and artefacts and discussed the kind of futures they wanted to see. This approach builds on similar methods used in cities like Helsinki, Singapore and Cape Town, where local imagination has been harnessed to inform urban planning in meaningful, grounded ways.
We invited residents to imagine their own future neighbourhoods. What kind of changes would they like to see? How could technology support those changes without overriding local values and priorities?
Through this process, it became clear that communities wanted a say in how technology shapes their world. They identified safety, culture and sustainability as priorities, but wanted technology that supports, not replaces, their values and everyday realities.
The workshops revealed that when people imagine their future neighbourhoods, technology isn’t about gadgets or buzzwords; it’s about solving real problems in ways that fit their lives.
Safety was a top concern. Residents imagined smart surveillance systems that could help reduce crime, but they were clear: these systems needed to be locally controlled. Cameras and sensors were fine, as long as they were managed within the community by people they trusted, not some distant authority. The goal was safer streets, not more control from afar.
Safety is a deeply rooted concern in Westbury, where residents live with the daily reality of gang violence, drug-related crime and strained relations with law enforcement. Trust in official structures is eroded. The desire for smart safety technologies is not about surveillance but about reclaiming a sense of control and protection.
Energy came up constantly. Power cuts are a regular part of life in Westbury. People wanted solar panels, not as a green luxury but as basic infrastructure. They imagined solar hubs that powered homes, schools and local businesses even during blackouts. Sustainability wasn’t an abstract goal; it was about self-sufficiency and dignity.
Technology also opened the door to cultural expression. Residents dreamed up tools that could make their stories visible, literally. One idea was using augmented reality, a technology that adds digital images or information to the real world through a phone or tablet, to overlay neighbourhood landmarks with local history, art and personal memories. It’s tech not as a spectacle, but as a way to connect past and future.
And then there were ideas about skills and education: digital centres where young people could learn to code, produce music or connect globally. These were spaces to build the future, not just survive the present. People imagined smart tools that could showcase local art, amplify community voices, or support small businesses.
In short, the technology imagined in Westbury wasn’t about creating a futuristic cityscape. It was about building tools that reflect the community’s values: safety, creativity, shared power and resilience.
Lessons for the future
If we want African smart cities to succeed, they need to be designed with, not just for, the people who live in them. Top-down models can miss the nuances of everyday life.
There are growing examples of participatory approaches reshaping urban futures around the world. In Cape Town, the “Play Khayelitsha” initiative used interactive roleplay and games to engage residents in imagining and co-planning future neighbourhoods. This helped surface priorities such as safety, mobility and dignity.
In Medellín, Colombia, a history of top-down planning was transformed by including local voices in decisions about transport, public space and education.
These cases, like Westbury, show that when communities are treated as co-creators rather than passive recipients, the outcomes are more inclusive, sustainable and grounded in real-life experience.
This shift is especially important in African cities, where the effects of colonial history and structural inequality still shape urban development. Technology isn’t neutral. It carries the assumptions of its designers. That’s why it matters who’s in the room when decisions are made. The smartest cities are those built with the people who live in them.
Rennie Naidoo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Armed banditry in Nigeria has escalated into a full-blown security crisis, particularly in the north-west and north-central regions. What began as sporadic attacks has now morphed into coordinated campaigns of terror affecting entire communities.
In March 2022, bandits attacked an Abuja-bound train with over 900 passengers, killing several and abducting an unknown number. Earlier, in January 2022, around 200 people were killed and 10,000 displaced in Zamfara after over 300 gunmen on motorcycles stormed eight villages, shooting indiscriminately and burning homes.
Between 2023 and May 2025, at least 10,217 people were killed by armed groups, including bandits, in northern Nigeria. Most of the victims were women and children.
States like Zamfara, Sokoto and Katsina in the north-west and Niger, Kogi and Benue in the north-central region are especially hard hit. Farmers are abducted en route to their fields, travellers are kidnapped on major highways, and whole villages have been displaced. In many rural areas, residents are now forced to pay “taxes” to bandits before they can even harvest their crops.
Insecurity is now reshaping daily life in rural Nigeria. Families are abandoning their homes. Food supply chains are being disrupted. School attendance is falling. The rise in banditry is fuelling poverty, eroding trust in the state, and contributing to emigration in Nigeria.
While existing studies on armed banditry in Nigeria have largely focused on causes like ungoverned spaces, poverty and marginalisation, they often under-emphasise the fact that, since banditry is a law enforcement issue, the capacity of the police to address the crisis is paramount. Effective policing is the bedrock of internal security.
I’m a PhD researcher and have just completed my thesis on the link between institutional weakness and insecurity in Nigeria. A recent paper draws on my thesis.
This study examines how factors such as police manpower, funding, welfare conditions and structural organisation shape the ability of the Nigeria Police Force to respond effectively.
I found that the Nigeria Police Force has too few officers, is chronically underfunded, works under poor conditions, and is over-centralised, resulting in a lack of local ownership and initiative. These shortcomings aren’t just bureaucratic – they create an environment where organised violence thrives.
Tackling armed banditry in Nigeria requires addressing the institutional weaknesses of the police: expanding recruitment; improving salaries and welfare infrastructure; decentralising the force to enable state and community policing; and ensuring transparent, accountable use of security funds.
Between 2022 and 2023, I conducted virtual interviews with 17 respondents including police and civil defence personnel serving in north-central Nigeria. I also conducted informal focus group discussions with police personnel and individuals affected by banditry in Abuja. Additionally, I analysed security reports and public documents from civil society organisations and media sources related to banditry and the Nigerian police.
What emerged was a troubling yet consistent story: the Nigeria Police Force wants to do more and has some dedicated officers, but is constrained by deep structural and institutional challenges. These challenges fall into four interlinked areas:
Manpower crisis: too few officers, spread too thin
Nigeria has over 220 million citizens but only about 370,000 police officers. The impact is most severe in regions where insecurity is rampant. In some local governments in northern Nigeria, only 32 officers are tasked with protecting hundreds of thousands of residents.
Rural areas where banditry is most active remain dangerously under-policed, while safer cities in the south have a visible police presence. This imbalance has left vast regions vulnerable to bandit attacks.
Chronic under-funding and operational paralysis
Nigeria’s 2024 police budget stands at about US$808 million, a fraction of what countries like South Africa and Egypt spend. The result is that most police stations lack basic items like paper, computers, or internet access. Officers use personal mobile phones for official work. Some stations can’t even fuel their patrol vehicles without financial help from the public. Specialised equipment like bulletproof vests, tracking devices and functional armoured vehicles is either outdated or unavailable.
Even the Nigeria Police Trust Fund, established in 2019 to address these gaps, has been plagued by corruption and mismanagement. The result is a force that improvises its way through crises with minimal tools.
Poor welfare and working conditions
Morale within the police force is alarmingly low. Junior officers earn as little as US$44 per month – barely enough to live on in today’s Nigeria. Officers buy their own uniforms, pay for basic medical needs, and often live in rundown barracks that lack water, toilets, or electricity. In one barracks in Lagos, several families share a single bathroom.
Healthcare is patchy at best. Insurance schemes don’t cover critical conditions. Officers injured on duty have been abandoned in hospitals, while families of fallen officers sometimes wait years to receive death benefits. With no sense of protection or career dignity, many officers are demoralised and disengaged. This isn’t just a labour rights issue, it’s a national security issue.
Over-centralised structure and lack of local ownership
Nigeria’s police force is centrally controlled from Abuja, leaving state governors, who are legally responsible for security, without real authority over officers in their states. This top-down structure causes delays, confusion and weak accountability.
In banditry-prone rural areas, officers often lack local knowledge, language skills and community trust. As a result, the response to attacks is slow, and the security presence feels distant. Bandits exploit this disconnect, operating freely in areas where the state appears absent or ineffective.
To stop armed banditry in Nigeria, the institutional challenges confronting the police must be dealt with. The country must:
increase police recruitment, especially in rural areas
raise police salaries and invest in welfare infrastructure
decentralise the police structure, allowing for state and community policing
ensure transparent use of security funds, particularly the Police Trust Fund.
Onyedikachi Madueke does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Imagine two people in their 70s. Both are active, live independently and enjoy life. But over the next 15 years, one of them develops two or three chronic illnesses – heart disease, diabetes, depression – while the other remains relatively healthy. What made the difference?
According to our new research, diet may be a key part of the answer.
In our new study, our group at the Aging Research Center at the Karolinska Institutet, Sweden, followed more than 2,400 older Swedish adults for 15 years.
We found that people who consistently ate a healthy diet developed chronic diseases more slowly, in contrast to those whose diets were considered more inflammatory; that is, diets high in processed meats, refined grains and sugary drinks, which are known to promote low-grade chronic inflammation in the body.
The strongest associations were seen for cardiovascular and psychiatric conditions. So, people who ate better were less likely to develop diseases including heart failure, stroke, depression or dementia. We did not, however, find a clear link between diet and musculoskeletal diseases such as arthritis or osteoporosis.
Some of the benefits of healthy eating were more pronounced in women and in the oldest participants: those aged 78 and above. This suggests that it is never too late to make changes. Even in very old age, diet matters.
One reason may be that healthy diets support the body’s resilience. They provide essential nutrients that help maintain immune function, muscle mass and cognitive health. Over time, this can make a big difference in how people age.
Our study is one of the longest and most comprehensive of its kind. We used repeated dietary assessments and tracked more than 60 chronic health conditions. We also tested our findings using different analytical methods to make sure they held up.
Of course, diet is just one piece of the puzzle. Physical activity, social connections and access to healthcare all play important roles in healthy ageing. But improving diet quality is a relatively simple and accessible way to help older adults live longer, healthier lives.
So what should older adults eat? The message is clear: eat plenty of vegetables, fruits, legumes, nuts and whole grains. Choose healthy fats like rapeseed oil and fish. Limit red and processed meats, sugary drinks and solid fats.
Ageing is inevitable. But people can shape how it unfolds. Our findings suggest that even small changes in diet can make a meaningful difference in how people experience later life, regardless of their age.
Adrián Carballo Casla receives funding from the Foundation for Geriatric Diseases at Karolinska Institutet (project numbers 2023:0007 and 2024:0011); the Karolinska Institutet Research Foundation Grants (project number 2024:0017); the David and Astrid Hagelén foundation (project number 2024:0005); and the Swedish Research Council for Health, Working Life and Welfare (project number STY-2024/0005).
Amaia Calderón-Larrañaga receives funding from the Swedish Research Council (project number 2021-06398), the Swedish Research Council for Health, Working Life and Welfare (project numbers 2024-01830 and 2021-00256), Karolinska Institutet’s Strategic Research Area in Epidemiology and Biostatistics SFOepi (consolidator bridging grant, 2023), and Alzheimerfonden (AF-1010573, 2024).
David Abbad Gomez receives funding from Hospital del Mar Research Institute as a research assistant.
Source: The Conversation – UK – By Yvonne Reddick, Reader in English Literature and Creative Writing, University of Lancashire
“Far over the misty mountains cold,” Dad read. Every evening before my light was turned out, he read me a story about a hobbit who left his comfortable burrow to journey to the Lonely Mountain. Searching for gold at the mountain’s roots, talking to eagles, scaring wolves off by starting a forest fire, tricking a dragon: these were the tales he read to me.
View of the south-eastern slopes of Braeriach. The River Dee flows out of An Garbh Choire. By Angus, CC BY-SA
We lived in a granite house on the western edge of Aberdeen. Mum planted rhubarb and runner beans in the garden. Summer holidays meant going to Aviemore, in the lap of the Cairngorm mountains. We’d stay in a wooden chalet, where knots in the pine planks looked down at me like the eyes of owls. We’d always walk near the gentle hill of Craigellachie, fledged with silver birches. I learnt to recognise the mountains: Braeriach with its three scooped-out corries, Cairngorm scarred with ski runs.
Dad knew how to disappear. Some weeks, he’d leave before dawn to take the helicopter out to the North Sea oil platforms. During summer weekends, he’d vanish for the summits of those rounded Cairngorm hills. One day, he marched in through the door with his muddy boots still on, and hoisted his battered blue Berghaus backpack onto the kitchen table, grinning:
I’ve got a surprise for you.
What is it?
He hefted out a football-sized lump of mountain quartz and put it on the table in front of me. It shone white as a glacier.
Dad’s “Munro Book” was a gift from my mum, given shortly after I was born. It was always referred to as the Munro Book, never by its title, and it detailed all of Scotland’s peaks over 3,000 feet high – first charted by the tweedy Victorian baronet, Sir Hugh Munro. Getting to the top of all 282 of them is a popular challenge for hikers. For dad, it was an obsession.
Dad was a hillwalker, but the Munro Book was written by mountaineers. It contained sentences such as: “A pleasantly airy scramble, for which some might prefer the security of a rope.” (I only ever saw Dad attempt a climbing wall once, when the two of us clambered up – and slithered off – orange and green plastic holds at a gym in Scotland.)
He never acquired the paraphernalia that winter hikers (or walkers who fancy themselves as climbers) accumulate: crampons, ice axes, ropes, the ironmongery of nut-keys, hexes and cams. However, he did claim that he scrambled up Ben More on the Isle of Mull via a route he termed “the wrong one”. It was one of his favourite stories.
The Munro Book shared bookshelves with Mountaincraft and Leadership, The Pennine Way, The South Downs Way, The Northern Fells, and 30 battered, pink Ordnance Survey maps in miles and feet.
From the age of nine, dad dragged me out with him. At first, I’d whine about the mud and midges. Later, I felt my heart lifting when I reached a cairn and could see as far as Mull and Skye. I learnt to name the whaleback of Ben Nevis.
Dad kept a weather-eye on the forecasts. His kit list included: cheese-and-pickle sandwiches; an itchy wool balaclava; a Berghaus waterproof; a survival blanket; a map, compass and GPS; spare batteries in case the ones in the GPS went flat; a second compass, in case the first compass malfunctioned; and the phone number of Mountain Rescue. All of this was crammed into the ancient blue rucksack.
Dad’s love of the outdoors developed alongside his work as an oil reservoir engineer. There are North Sea oilfields named Everest, Banff, Cairngorm and Munro. And the ease with which Dad read charts of mountains and valleys deep below the sea translated into the mathematical precision with which he navigated with map, compass and GPS.
When I was old enough to read for myself, I read The Hobbit and longed to journey through the Misty Mountains: “The mountain smoked beneath the Moon … The trees like torches blazing bright.” I’d never seen a forest fire, but even in rainy Scotland, there were signs that blazes were becoming more frequent. Just south of Loch an Eilein (the loch with the island), you came across bunches of strange brooms and paddles by the path: fire beaters, in case the heather caught alight. Aviemore was a busy ski resort in winter, but the snowline was inching higher and higher up the mountains.
I never worried about Dad. Even when he grumbled about tightness in his chest before his last holiday in the Cairngorms, it never occurred to me that the path could run out so soon.
Peak oil
Mountains rise skywards when one of Earth’s plates collides against another. Deep in the guts of great ranges, rocks fold and buckle. Ancient seabeds, turned to stone, are heaved upwards. The same rock-fold, the anticline, can forge mountains and harbour oil.
The Rockies stand on the largest reservoir of untapped shale oil in the world. The richest oilfields west of Russia are near the Carpathians. Iran’s oil and gas deposits lie at the feet of the Zagros, the high range that transects the country from north-west to south-east. North Sea oilfields are named for Highland mountains: Beinn, Schiehallion, Foinaven.
I look up a 3D schematic of an oil deposit, not unlike the one pictured below. It reminds me of a miniature massif: oil and gas seeping upwards through rock, resembling ice-falls in reverse. In place of a summit scarved with cloud, there’s a pointed deposit of trapped methane gas.
The next zone down, where a mountain’s glaciers would be, is an area on the diagram that is coloured green, showing oil. I look at a seismology map – the kind Mum used to work on in Oman. This time, the image shows a vertical cross-section through layers of rock. Its contours are shallower – more hillock than Himalayan peak. I think of knolls and hill forts – Arnside Knott on the Lancashire coast, or Torside on the shoulder of Bleaklow in the Peaks.
A 3D schematic of the reservoir properties of the Illizi Basin in eastern Algeria. SEG Wiki, CC BY
Mountains and petroleum share a similar vocabulary. Exploration, frontiers, surveying. I look at graphs of peak oil production and note coincidences with the so-called golden age of mountaineering. 1854-1865: a prolific era of Alpine mountaineering, the time of Edward Whymper and the Matterhorn disaster. 1859: the drilling of the first oil well by the Pennsylvania Rock Oil Company in Titusville. The 1950s and ’60s saw western companies exploiting deposits in the Middle East and South America, leading to my grandfather’s time working in Venezuela and Iran. Expansion to the Earth’s highest peaks; drilling into its rocky depths.
Richard Bass, the first man to climb the “Seven Summits” – the highest peaks on each continent – ran a Texan oil-and-gas business. Black gold funded Bass’s Snowbird ski resort in Utah. The first rope access workers on North Sea oil platforms were climbers and cavers.
Many of dad’s friends found that a youthful passion for rock climbing gave them an intimate knowledge of the character of different kinds of stone, or that reading maps translated easily into mapping the deep layers of Earth’s bedrock. For a geology or engineering graduate with an enjoyment of adventure and a love of travel, a career in oil exploration was an exciting and well-paid career path.
But expanding oil frontiers and summiting the world’s highest peaks bring similar controversies. It is no coincidence that mountaineering exploits and oil extraction share common ground with colonialism, foreign control, nation-building and struggles for self-rule. Local and Indigenous people are determined to protect their land, or want a stake in the wealth of an industry whose history is mired in colonial exploitation.
I think of the far-north of Canada and Alaska – of Athabasca people either fierce in their resistance to the incursions of oil companies under the ice, or wanting their fair share of the industry’s colossal profits. Sherpas, Sherpanis and Nepalis reclaiming Mount Everest after decades of western exploitation, smashing the time records for summit successes.
Murky soot and tiny microplastics from the oil industry touch even the shining Alpine summits that I love. They taint the high snows of Everest. The cataclysmic impact of fires, blowouts and everyday fossil-fuel burning strips them of their ice. Oil is there in the mountains of plastic waste I saw on the outskirts of Himalayan towns. And perhaps the most explosive place where fossil fuels meet mountains is Azerbaijan’s Yanar Dag, which dad visited in 2002 on the back of a drunken horse.
Mountain of Fire
Nightfall in the countryside near Baku. The horses hung their heads in the stalls. Bahram the guide poured beer on their oats. They shook themselves awake, started munching. Dad hauled himself into the stirrups and thudded into the saddle like a sack of gravel: “Don’t drink and ride!” The horses began the slow plod uphill.
Rocks and thirsty thorn-scrub. A wooden bridge over a parched river. Bahram paused, dismounted, lit a cigarette and flicked the ash towards the riverbed. It touched off flame.
Wink of fire through twilight. Whiff of gasoline on the breeze. Flames surged from a blackened fissure in the rocks. Yanar Dag means Mountain of Fire. This fissure has burned for 3,000 years at least. People raised temples where priests tended eternal fires. Did fires like this inspire Zoroastrianism, one of the world’s most ancient faiths? I look up the prophet Zarathustra, glance through Nietzsche’s imagining of his words. I read about Zoroastrian fire rituals and trial by flame. I read about sacred fire, symbolising the light of the deity and the illuminated mind.
Dad worked in the Caspian region when I was a teenager, flying to Baku for one week every month. He admired the Flame Tower skyscrapers, relished lamb-and-rice plov. Deals were toasted with copious quantities of vodka. I heard about Shah Deniz, the King of the Sea, a gargantuan gas field under fathoms of rock and water.
Dad longed to hike the ochre-red foothills of the Caucasus, and loved spinning yarns about “Hell’s Doorway”, the crater over a natural gas deposit that burst into flames after a Russian drilling rig collapsed. At garden parties for his colleagues back in the Home Counties, I met Bahram and Mehtab, their daughters Farah and Donya.
The Caspian seabed was tough drilling. One of Dad’s Azeri colleagues showed me a map of the bedrock, riddled with faults. Dad enjoyed the reservoir engineering challenge this posed, in much the same way he relished building Meccano or getting my second-hand Scalextric cars to work.
Forty-eight billion barrels lie under the world’s largest inland water-body. The Caspian is split into four basins; the most southerly is the deepest, divided from the others by the Apsheron Ridge. This anticline linked to the Caucasus Mountains spans the entire Caspian from Baku to Turkmenistan. The Caspian was formed by a complex interplay of plates shifting and rifting.
Among the greatest hazards for oil exploration in the region – apart from the earthquakes – are the mud volcanoes. Found near petroleum deposits and mountainous regions, these bizarre formations belch up methane, creating mucky splatter cones. They may bubble up under a body of water, or erupt on land. The rounder ones on land are known as “mud domes”; the flatter ones are sometimes referred to as “mud pies”.
Dad loved telling stories about them. Soviet scientists wanted to predict their habits, proposing that variable water levels in the Caspian, and even sunspots, might trigger their eruptions. Mud volcanoes are found in the Carpathians, the Caucasus, California, and even in the Gulf of Mexico. Dashli Island in the Caspian Sea is one giant mud volcano which exploded near an oil platform in 2021, erupting flames 500 metres high.
The Caspian region is one of the oldest oil-producing areas in the world. Troops of the emperor Cyrus the Great used Baku’s oil as an incendiary weapon. Caspian oil lit the way for Alexander the Great’s soldiers. Medieval Arab historians and travellers noted the region’s dependence on oil for heating and trade. An inscription from 1593 commemorates a manually-dug oil well near Baku.
Baku has been an oil town for centuries. An enterprising Azeri merchant drilled two oil wells in the Caspian in 1803, likely the world’s first offshore extraction, although a storm made short work of them in 1825. Robert Nobel arrived in Baku in 1873, tasked with finding walnut trees for wood to build rifles for the tsar’s army; instead, he decided to buy an oil refinery. Grainy sepia images of Russian oil production show forests of wooden derricks. In 1898, Franco-Russian filmmaker Alexandre Mishon filmed gushers and blowouts. Russian production of Azeri oil was the most prolific petroleum source on the planet from 1899 to 1901.
Azerbaijan is the birthplace of many innovations: the first mechanically drilled oil-wells, the first pipelines, the first tankers. Following the collapse of the Soviet Union, competition for oil production partnerships in newly independent Azerbaijan was intense. A deal with BP in 1992 began three decades of exploration and drilling. The offshore platforms became enormous: photos show helipads, flarestacks, workers’ quarters, tangles of pipework. As the shallower oil reserves were exhausted, drilling became more ambitious. The Deepwater Gunashli platform began to pump petroleum from 175 metres below the water’s surface.
After “peak oil” – the height of demand and production – economists predict that we are entering the age of “tough oil”. Deep water, distant locations, greater danger. The oil reserves that dad and his colleagues explored were already becoming increasingly hazardous and hard to reach.