Too many students drop out of A-levels – here’s how to help them pick a course they’ll stick with

Source: The Conversation – UK – By Nigel Newton, Lecturer in Education, Cardiff Metropolitan University

Dmytro Zinkevych/Shutterstock

You can probably remember at least one education choice you regret. You don’t have to be lazy or naive to pick the wrong subject, just lacking in information about what you will actually have to study on the course.

In England, this problem is concentrated at age 16. Young people are expected to choose a small set of subjects – three or four A-levels, or just one T-level, for example – that will shape not just their next two years but potentially how they succeed in the future.

In theory, there is lots of support: open evenings, prospectuses, taster sessions, careers platforms, guidance interviews. Yet disengagement and drop-out remain familiar features of post-16 education. One reason is that the system often treats course choice as a question of career opportunity, while leaving something oddly under-discussed: the curriculum itself.

That matters because students aren’t just choosing “qualifications”. They are choosing to spend hundreds of hours studying – reading, writing, experimenting, analysing – and then to be assessed in particular ways.

In a recently published study, I analysed an unusual dataset: what students thought about the A-level courses they were taking before they began them, and then, later, how well they did in those courses.

The study followed 191 students in a school sixth form who completed 674 questionnaires across 24 A-level subjects. The questionnaires were based on the specific curriculum topics and assessment practices that students would need to engage with on the courses offered in that sixth form.

The questionnaires asked how interested the teenagers would be in studying DNA, including what it is and how it works for A-level biology, for instance, or how much they’d enjoy learning about the management and conservation of coastlines for A-level geography. The questionnaires also asked how they viewed courses in relation to their future career aspirations and progression to university.

Across the subjects with enough data, students who reported higher interest in the content of a course were significantly more likely to complete their courses. But whether a student thought an A-level was valued by future employers, or that it would help their progression to university, appeared less likely to affect their chances of completing the course.

This doesn’t mean careers don’t matter to course choice, but it does suggest career aspirations may not be enough to keep students motivated through the weekly pressures of course study.

Schools and colleges go to great lengths to provide guidance. But more information is not the same as meaningful engagement with what a course involves. Previous research suggests students often don’t rely on the course information they’re given to make decisions.

Choice overload

Linked to this is what psychologists call choice overload. Although we value having options, more choice can increase anxiety, reduce satisfaction and encourage us to take shortcuts when making decisions. It’s one reason students simplify decisions by picking subjects they think they know from GCSE, or those their friends are taking.

And for young people from backgrounds affected by disadvantage, choices can narrow towards what seems most likely to lead to employment, even where other interests exist.

Students looking at information on paper
Choice overload can affect decision-making.
gonzagon/Shutterstock

And there’s another layer too: the environment of choice is shaped by competition. Research has shown that sixth forms are using open evenings just as much to market themselves to students as to provide information on what their courses cover.

For instance, in the competitive post-16 marketplace, a school may feel it is a risk to its recruitment efforts to dwell on the reality that its A-level history focuses on religion in the Tudor period rather than the saucier intrigues of the royal court. “Selling” and “informing” don’t always align.

Education policy implicitly assumes young people should treat post-16 choices as an optimisation problem: maximise exchange value, keep doors open, choose strategically. This can reduce study to a trade-off: endure now, benefit later. For some learners, that works.

For many, it doesn’t, especially when their attention is already being pulled in multiple directions and when anxiety about their future is high.

But interest in what they are actually studying should not get lost. Interest sustains attention and effort. If we don’t know students’ levels of interest in course content to begin with, it becomes difficult to tell whether later underperformance reflects a poor fit between student and course, or limitations in how teaching and assessment are supporting that engagement.

Curriculum-first guidance is needed: sixth forms and colleges should make curriculum and assessment visible early and central in what they offer students. This should be at the heart of how they support teenagers making choices about their post-16 education.

There’s an additional benefit. If curriculum-specific interests can be measured reliably, this could help schools and colleges evaluate mismatches between course provision, the learners’ interests, and outcomes, creating a new way of thinking about “quality” in post-16 education.

It’s not only about who drops out, or whether GCSE results predict how well students do, but whether sixth forms and colleges are building on students’ intrinsic interests in curriculum disciplines.

It may not be possible to avoid all regrets about choices in education. But if we start by asking learners what knowledge they would enjoy engaging with and acquiring over the next two years, we may go a long way in reducing those course choice doubts and improving the odds that their motivation survives the first difficult term.

The Conversation

Nigel Newton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Too many students drop out of A-levels – here’s how to help them pick a course they’ll stick with – https://theconversation.com/too-many-students-drop-out-of-a-levels-heres-how-to-help-them-pick-a-course-theyll-stick-with-273406

Saipan: Roy Keane World Cup drama is a highly entertaining slice of Irish football history

Source: The Conversation – UK – By Laura O’Flanagan, PhD Candidate, School of English, Dublin City University

In the summer of 2002, a dispute inside the Republic of Ireland’s football camp spiralled into a national controversy. Few sporting rows have lodged themselves in the Irish imagination as stubbornly as Keane v McCarthy in Saipan, culminating in Keane’s departure from the Irish World Cup squad.

Directed by Glenn Leyburn and Lisa Barros D’Sa, Saipan takes a deliberately narrow focus on the saga, centring on the breakdown of the relationship between Ireland captain Roy Keane and manager Mick McCarthy, framing it as an intimate power struggle. This choice grounds the film and keeps it from slipping into nostalgia or easy hero worship.

Roy Keane (Éanna Hardwicke) is all coiled intensity. The film captures his sense of grievance and moral rigidity without smoothing over the damage it causes. Keane’s frustrations centre on what he sees as a lack of professionalism within the Irish setup in Saipan, from inadequate training facilities to a broader culture of complacency and indulgence.




Keane is a man driven by standards that feel absolute, and the film is careful to show how those standards inspire as much as they alienate. Hardwicke’s terrific performance sits in the space between principle and obsession. He never softens Keane into a misunderstood martyr, nor does he paint him as a simple villain.

Steve Coogan plays Mick McCarthy with a quiet, pained restraint, but the portrayal is far from generous. His McCarthy is isolated and increasingly evasive, a man struggling to assert authority while appearing overwhelmed by events of his own making. He is framed as a figure losing control, unable or unwilling to meet Keane’s demands head on. Coogan avoids outright caricature, but the balance of sympathy is clear, and Saipan’s version of events leans decisively in Keane’s favour.

Saipan also addresses Keane’s questioning of McCarthy’s Irishness, a move that shifts the dispute beyond football and into the terrain of identity. The film does not endorse this line of attack, instead pointedly setting it against the legacy of Jack Charlton (Ireland manager from 1986 to 1995), another English-born figure, but one whose leadership was rarely challenged. (Charlton is one of only 11 honorary Irish citizens.)

McCarthy was born in Barnsley in Yorkshire, but is one of many second-generation Irish players who qualified for the team through their Irish parents. By framing his criticism in these terms, Keane attempts to undermine McCarthy’s legitimacy, using Irishness as a tool in a conflict about standards and authority, and gesturing towards the complexity of Ireland’s relationship with Englishness.

Celtic Tiger excess

When the film shifts its focus to the Football Association of Ireland, its patience wears thin. Saipan portrays an administration steeped in Celtic Tiger excess, treating the 2002 World Cup as a jolly rather than a professional obligation.

In the film version, brown envelopes are slipped out with ease, camp followers hover with no clear purpose, and champagne bottles appear in saunas as preparation drifts into farce. The depiction is unmistakable: this was an organisation cushioned by boom-time arrogance, insulated from consequence, and wholly unprepared for a player who demanded standards it had little interest in meeting.

Balancing the drama, there are moments of unexpected humour, particularly in scenes involving the squad, where downtime, routines and shared spaces are closely observed. Visually and tonally, these moments recall Taika Waititi’s Next Goal Wins, with comedy in proximity and rhythm rather than punchlines. That lightness is always shadowed by the dangerous edge of Keane’s disapproval, which hangs over the group and gives even the quietest scenes a sense of latent threat.

The film’s use of archival footage and music leans heavily into nostalgia, situating Saipan firmly within its early-2000s moment. The opening notes of Oasis’ Acquiesce land purposefully, a song built around unity and defiance, and sung by two brothers whose own feud would become legendary. It is an on-the-nose choice, particularly coming from an English band with a strong Irish heritage, but an effective one, framing the film around themes of loyalty, fracture and unresolved conflict before a word is spoken.

Saipan is a highly entertaining slice of both Irish and football history. This fallout was never really about one training session or one confrontation. It was about standards colliding with systems, and a country watching itself argue in public. That the dispute still provokes such certainty and division is part of the film’s point. Some rows are simply never settled.

The Conversation

Laura O’Flanagan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Saipan: Roy Keane World Cup drama is a highly entertaining slice of Irish football history – https://theconversation.com/saipan-roy-keane-world-cup-drama-is-a-highly-entertaining-slice-of-irish-football-history-274346

Muscle twitches: why they happen and what they mean

Source: The Conversation – UK – By Adam Taylor, Professor of Anatomy, Lancaster University

Toa55/Shutterstock.com

You’re relaxing on the sofa when suddenly your eyelid starts twitching. Or perhaps it’s a muscle in your arm, your leg, or your foot that begins to spasm – sometimes for a few seconds, sometimes for hours or even days. It’s an unsettling sensation that affects about 70% of people at some point in their lives.

Muscle twitches fall into two main types. There’s myoclonus, where a whole muscle or group of muscles twitch or spasm. Then there’s fasciculation, where single muscle fibres twitch – often too weak to move a limb but visible or sensed beneath the skin.

Many factors can trigger both types of twitching, but people often fear the worst. Some fear it could signal multiple sclerosis – a condition that requires extensive testing, including a lumbar puncture to look for inflammation and MRI scans to detect brain changes.

For many people, however, twitching is simply an annoyance. Once doctors rule out serious causes, everyday features of modern life often turn out to be the trigger.

Too much caffeine, for instance, can cause muscle twitching. As a stimulant, it affects both skeletal and cardiac muscle, increasing heart rate and having a similar effect on skeletal muscle in areas such as the arms and legs. It slows down the time it takes for muscles to relax and increases the amount of calcium ions released within muscles, disrupting normal muscle contraction patterns.

Other stimulants such as nicotine, cocaine and amphetamines can cause similar muscular twitching. These substances interfere with the neurotransmitters that control or influence muscle function.

Some prescription medications can also trigger twitching. Antidepressants and anti-seizure drugs, blood pressure medicines, antibiotics and anaesthetics can all cause muscular side-effects.

When minerals run low

Twitching isn’t only caused by what you consume; it can also stem from what your body lacks. Hypocalcaemia, a drop in the amount of calcium in the body, is associated with twitching, particularly in the back and legs.

Calcium is fundamentally important in helping muscle cells rest and remain stable between contractions. When calcium levels fall, sodium channels open more easily. Sodium floods in and, as a result, nerves become hyperactive and muscles contract when they shouldn’t.

There are recognised signs of twitching associated with hypocalcaemia, including the Chvostek sign, which is seen in the face and can be triggered by tapping the skin of the cheek just in front of the ear.

Chvostek sign.

Magnesium deficiency can also cause muscle twitching. Some causes of magnesium deficiency are a poor diet or poor absorption in the gut, usually due to conditions such as coeliac disease or other gastrointestinal conditions.

Some medications, particularly when taken over a long period, can cause a drop in magnesium levels in the body. Proton pump inhibitors used to treat reflux and stomach ulcers are recognised for this effect.

Low potassium is another cause of muscle twitching. Potassium helps muscle cells rest. It’s usually at high levels inside the cell and lower outside, but when potassium levels outside the cell fall, the electrical balance shifts, making muscle cells unstable and prone to misfiring, causing muscle spasms.

If you have no underlying gastrointestinal conditions, eating a healthy, balanced diet is usually enough to ensure you have enough of each of these minerals for normal muscle function.

A healthy water intake is important too, as dehydration affects the balance of sodium and potassium, resulting in abnormal muscle function, such as twitching and spasms. This is even more important during exercise, where overexertion can cause the same phenomenon.

The brain plays a role as well. Stress and anxiety can cause muscles to twitch as a result of overstimulation of the nervous system by hormones and neurotransmitters such as adrenaline.

Adrenaline increases the “alertness” of the nervous system, meaning it’s ready to trigger muscle contraction. It also increases the amount of blood flow and changes the tension of the muscles, which when a surge of energy arrives – or if the muscle is held in suspense for long periods – can result in twitching.

Adrenaline can also result in the nervous system responding to altered levels of neurotransmitters, causing muscle movement when the body is actually at rest.

Infectious agents can cause muscle twitching and spasms, too. The most commonly known is probably tetanus, which causes a phenomenon called lockjaw, where the neck and jaw muscles contract to the point where it becomes difficult to open the mouth and swallow. Lyme disease, from ticks, can also cause muscle spasms.

Many different infections can affect either the nerves or the muscles and can lead to twitching. Cysticercosis, toxoplasmosis, influenza, HIV and herpes simplex have all been linked to muscle twitching.

When doctors rule out these causes, some people receive a diagnosis of benign fasciculation syndrome – involuntary muscle twitching with no identifiable underlying disease.

It’s unknown how common it is, but it’s believed to affect at least 1% of the healthy population. It can persist for months or years, and for many, although benign, it doesn’t resolve completely.

For many people, muscle twitches remain a manageable annoyance rather than a sign of disease. But for others, a healthcare professional may need to rule out more serious causes.

The Conversation

Adam Taylor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Muscle twitches: why they happen and what they mean – https://theconversation.com/muscle-twitches-why-they-happen-and-what-they-mean-269556

Stone baby: the rare condition that produces a calcified foetus

Source: The Conversation – UK – By Adam Taylor, Professor of Anatomy, Lancaster University

Miridda/Shutterstock

For some women, pregnancy is a time of profound loss. Not all pregnancies progress as expected. One serious complication is ectopic pregnancy, a condition in which a fertilised egg implants somewhere other than the uterus.

The uterus is the only organ designed to stretch, supply blood and safely support a developing pregnancy. When implantation occurs elsewhere, the pregnancy cannot develop normally and poses significant risks to the mother.

In a very small number of cases, implantation occurs within the abdominal cavity. This is known as an abdominal pregnancy and means the embryo attaches to structures such as the bowel or abdominal lining rather than reproductive organs, often undetected.

There are rare reports of such pregnancies continuing into late gestation and, in extraordinary circumstances, a baby being born healthy. Far more often, however, the outcome is one of the strangest phenomena documented in medicine.

This outcome is known as a lithopaedion, a term derived from Greek that translates literally as “stone baby”. Fewer than 400 cases have been described in the medical literature, making it exceptionally rare.

In these cases, a woman usually experiences at least the early stages of pregnancy. Some reach full term and even go through labour; the body initiates the physical process of childbirth, but no baby is delivered. In some instances, particularly where access to healthcare is limited, a pregnancy may go entirely unnoticed.

The foetus in these cases has sadly died. After approximately three months of gestation, the foetal skeleton begins to ossify into bone. Ossification is the normal biological process by which soft cartilage turns into hardened bone. Once this has occurred, the foetal remains are too large and structurally complex for the body to break down and absorb.

During a typical pregnancy, the placenta plays a crucial role in regulating the exchange of nutrients and immune signals between mother and foetus. At the same time, the maternal immune system enters a state of immune tolerance: it is partially suppressed to prevent it from attacking the genetically distinct foetus. When the foetus is no longer viable, these protective mechanisms disappear. The immune system then recognises the foetal tissue as foreign and potentially dangerous.

To protect itself from infection or inflammation, the body may respond by calcifying the foetus. Calcification involves the gradual deposition of calcium salts around tissue, effectively isolating it. This process seals the foetus off from surrounding organs, preserving it in place and preventing further harm.

Calcification as a defensive response is not unique to pregnancy. The process of dystrophic calcification occurs when calcium deposits form in dead or damaged tissue. Calcium binds to phospholipids, which are fat-based molecules that make up the outer structure of cells and help hold cell membranes together, stabilising the area and limiting injury. A similar biological mechanism contributes to calcium build-up in blood vessels during atherosclerosis, a condition associated with heart disease.

Lithopaedion formation has also been observed in other species, including rabbits, dogs, cats and monkeys. One of the earliest recorded human cases dates back to 1582, involving a 68-year-old French woman who carried a lithopaedion for 28 years.

Another widely reported case describes a woman in China who carried one for over 60 years. Some lithopaedions have been reported to weigh more than two kilograms, roughly the weight of a full-term newborn. In one exceptionally rare case, a woman was found to have twin lithopaedions.

Symptomless cases

Some women carry a lithopaedion without symptoms for many years. Others develop complications caused by its presence in the abdomen. These include pelvic abscesses, which are collections of infected fluid, twisting or obstruction of the intestines that interfere with digestion, fistula formation – meaning abnormal connections between organs – and other abdominal symptoms such as pain or swelling.

Cases without symptoms are often discovered postmortem, during examination after death. When symptoms do occur, surgical removal is usually required. Because lithopaedions develop outside the uterus, they may attach to nearby organs such as the bowel or bladder.

Each case must therefore be carefully assessed. Surgery may be performed laparoscopically, using small incisions and a camera to minimise recovery time, or may require a more extensive open abdominal procedure.

Diagnosis almost always relies on medical imaging. This often occurs incidentally while investigating other symptoms. Calcified foetal bones can be identified using X-rays, ultrasound or CT scans. CT scans are particularly useful because they provide detailed cross-sectional images that clearly show both bone and surrounding soft tissue.

Lithopaedion cases are now exceptionally rare, likely even more so in modern medicine due to accurate pregnancy testing, early ultrasound scanning and routine antenatal care. Although these cases are medically unusual, they highlight both the vulnerability and resilience of the human body. Whether supporting new life or responding when pregnancy ends unexpectedly, the body works to protect the person carrying the pregnancy, sometimes in ways that continue to surprise medicine centuries later.



The Conversation

Adam Taylor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Stone baby: the rare condition that produces a calcified foetus – https://theconversation.com/stone-baby-the-rare-condition-that-produces-a-calcified-foetus-274178

Why Heineken’s zero-alcohol London Underground campaign fell flat

Source: The Conversation – UK – By Jonatan Sodergren, Lecturer in Marketing, Bristol University Business School, University of Bristol

Brewing giant Heineken’s advertising campaign promoting its zero-alcohol beer on the London Underground forced its way into the public conversation. By temporarily altering signs and renaming stops to things like Oxf0.0rd Circus and Waterl0.0, the 0.0 brand placed itself inside one of the UK’s most recognisable public institutions.

The Heineken stunt reflects a wider return of offline brand “activations” – when marketers look for the type of presence that can’t be scrolled past in crowded digital environments. These campaigns, from Netflix’s “experiences” to promote the new season of Stranger Things to live events like Red Bull’s Flugtag, turn stunts into shareable spectacles.

The Dutch brewer’s campaign was designed to mark dry January, but whether the visibility translated into impact is another matter.

The reasoning behind it was clear. London Underground is famous for its unwritten rules – stand on the right, avoid eye contact and under no circumstances strike up a conversation with a stranger. But Heineken 0.0 attempted to turn it into a hub for connection, using its temporary rebranding of the Bakerloo line to encourage commuters to rediscover real-world socialising. Without alcohol, of course.

As part of the promotion, Heineken was also handing out free 0% beer at Waterloo station over a couple of days in January. The company said it hoped the move would encourage Tube users to make small talk with a stranger after its own data showed 63% of passengers said they were “very unlikely” ever to do this.

Younger generations are drinking less alcohol than their predecessors, and zero-alcohol products are increasingly in the public eye. Campaigns like the Heineken one show how non-alcoholic drinks can be marketed in ways that grab attention and remain culturally relevant.

My research into zero-alcohol marketing focuses on how brands use visual and textual strategies to communicate responsibility and reshape social norms around drinking.

But from that perspective, the Heineken 0.0 campaign reveals some notable shortcomings.

1. Accessibility

Heineken 0.0’s temporary rebranding drew criticism from disability advocates. Campaign group Transport for All warned that altering station names and navigation signage could create confusion for passengers, particularly those with visual impairments, learning disabilities, neurodivergence or fatigue.

Transport for London (TfL), which runs London Underground, pointed out that the changes were limited to certain platform signs and assessed to ensure they didn’t negatively affect services, staff or customers. But nonetheless, critics hit back that even subtle rebranding risks turning routine journeys into stressful or unsafe experiences for vulnerable commuters.

2. Station mix-up

Heineken 0.0’s campaign for dry January included an unfortunate error: some signs displayed stations out of sequence. While the rebranding was intended as a playful stunt, the mistake risked confusing passengers who rely on accurate station information. TfL told The Conversation it was a printing error and that the signage was corrected, but apologised to customers for any confusion.

3. Implicit assumptions

Although the aim was to promote alcohol-free socialising, the campaign could inadvertently reinforce the idea that beer – or alcohol more broadly – is a prerequisite for connection. By pairing interaction on the Tube with the act of drinking, even a zero-alcohol beer, the campaign relies on familiar cultural tropes that link social environments with alcohol.

For commuters already wary of public interaction, this may undercut the message of inclusive, alcohol-free connection. The campaign’s playful intent is clear, but its execution subtly leans on entrenched assumptions about alcohol and sociability. This limits its potential to challenge norms.

4. Out of place

Heineken said its campaign was “playful” and meant to encourage socialising, but it feels out of step with the reality of commuting. Alcohol has been banned on TfL services since 2008, and most passengers are simply on their way to or from work, focused on their phones, schedules or morning coffee. They aren’t generally thinking about a beer, even when it is alcohol-free.

The activation makes a bold visual and social statement, but it doesn’t fully fit the context. A promotion tied to everyday routines, like coffee or snacks, would have felt more natural in this environment. The stunt sparks conversation, but the setting remains a mismatch.

Campaigns of this type should focus on settings that are actually designed for social connection. For example, a pop-up at a music festival or airport lounge could offer zero-alcohol tastings alongside prompts (so called because they gently cue participation and spark interaction without requiring commitment).

Prompts could include trivia games, mini challenges or small plates of food. These could even be curated to reflect the destination and create a memorable pre-flight experience – paired with a celebratory clink of 0.0 glasses, of course.

These experiences make interaction effortless and enjoyable, reinforcing the idea that socialising doesn’t require alcohol. By embedding responsibility, relevance and context into both strategy and execution, zero-alcohol campaigns can get people talking, while also making zero-alcohol socialising feel aspirational.

There’s no doubt that Heineken 0.0’s London Underground stunt grabbed attention, but the criticisms reveal how it could have been stronger. Accessibility must be central, ensuring that the campaign doesn’t obstruct, exclude or make everyday travel more difficult. And precision matters too. After all, mistakes only reflect badly on the brand.

The Conversation

Jonatan Sodergren does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why Heineken’s zero-alcohol London Underground campaign fell flat – https://theconversation.com/why-heinekens-zero-alcohol-london-underground-campaign-fell-flat-273543

The cold war maps that can help us rethink today’s Arctic conflict

Source: The Conversation – UK – By James Cheshire, Professor of Geographic Information and Cartography, UCL

A US view of the cold war world, 1950, showing the fearsome power of the USSR. Cornell University – PJ Mode Collection of Persuasive Cartography.

The late 1940s and early 1950s were a golden age for polar mapmaking in the US. Major magazines such as Time, Life and Fortune commissioned a generation of famous cartographers – who had come of age in the second world war – to explain the new geopolitics to a mass audience that was highly engaged after the catastrophic global conflict they had just lived through.

Their maps were large, dramatic and designed to be spread across kitchen tables and classroom desks. And they also offered a very different perspective to the mainstream maps we have become accustomed to today.

I’ve spent the past four years unearthing maps from the late 1940s and early 1950s to research a book about a largely forgotten map library at my university, and I am always struck by how consequential they feel to the global arguments of their era. Not least because they invited debate from their readers who were asked to become global strategists by discussing the next moves in the game of geopolitics.

These maps didn’t just illustrate the world – they implored people to think about it differently. As the world enters a new period of international relations and global tensions, it’s worth considering the different perspectives maps can offer us.

With each new US foreign policy intervention – such as the US president’s current preoccupation with taking over Greenland – I have often wondered if these maps of global adversaries could have percolated into a young Trump’s mind. The world must have seemed a menacing place and it is shown on these maps as a series of threats and opportunities to be gamed, with the “Arctic arena” as a major venue.

A map showing the political alignments as they were in 1941
The World Divided is an iconic map showing the geopolitical situation at the height of the second world war. It was created by Richard Edes Harrison and published by Fortune Magazine in August 1941.
Cornell University – PJ Mode Collection of Persuasive Cartography.

The consensus encouraged by the maps was that of alliances, most notably Nato, and US opinion tended to endorse what Henry Luce, the influential owner of Time and Life magazines, called the “American century” in which the US would abandon isolationism and take on a global role.

a map using the North Polar Azimuthal Equidistant Projection
Published in 1950, this map introduces the Azimuthal Equidistant Projection to Time Magazine’s readers.
Time Magazine

Whatever one thinks of that worldview, it was frequently framed in terms of collective responsibility rather than individual dominance. Luce argued that the “work” of shaping the future “cannot come out of the vision of any one man”.

As we can now see with Greenland, Trump has taken the geography of threats and opportunity shown on these influential maps but reached a very different conclusion: an “America first” worldview resulting from the vision of one man, the US president himself.

Dawning of the ‘air age’

The skilful cartographers of the era played with a range of map projections that offered different perspectives of geopolitical arenas. The master of this was Richard Edes Harrison, who is described by the historian Susan Schulten as “the person most responsible for sensitizing the public to geography in the 1940s. [The public] tore his maps out of magazines and snatched them off shelves and, in the process, endowed Harrison himself with the status of a minor celebrity.”

Edes Harrison adopted many projections in his work – but for maps of the Arctic, he alighted on the azimuthal equidistant projection. While this creates maps that distort the shapes of countries, it enables the correct distances to be shown from the centre point of the map.

The projection became widely used in the 1940s and 1950s (and was indeed adopted for the UN flag in 1946) because it proved effective at demonstrating the wonder of the burgeoning “air age” as commercial flights followed great circle routes over the Arctic.
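The geometry behind those maps is simple enough to sketch in a few lines of Python. This is a toy illustration of the projection’s defining property (true distances from the centre point, at the cost of distorted shapes), not a reconstruction of any historical map; the function name and the use of a mean Earth radius are my own choices:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def azimuthal_equidistant_from_pole(lat_deg, lon_deg):
    """Project a point onto a North Pole-centred azimuthal equidistant
    map: radial distance from the map centre equals the true
    great-circle distance from the pole."""
    colatitude = math.radians(90.0 - lat_deg)  # angular distance from the pole
    r = EARTH_RADIUS_KM * colatitude           # the distance the map preserves
    theta = math.radians(lon_deg)
    return r * math.sin(theta), -r * math.cos(theta)

# London (roughly 51.5 N) lands about 4,280 km from the map centre -
# exactly its real great-circle distance from the North Pole.
x, y = azimuthal_equidistant_from_pole(51.5, -0.1)
print(round(math.hypot(x, y)))
```

That distance-preserving centre is what made the projection so persuasive for the “air age”: a ruler laid from the pole on such a map reads off real flying distances.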

World map centered on London 1945
The Air Age Map of The World, 1945 (centered on London).
The Library of Lost Maps

This contrasted with the roundabout routes that needed to be followed by ships and it also mapped the countries that bordered and occupied the Arctic with a much greater sense of proximity and threat.

Missiles and bombers were just as able to travel over the top of Earth as were holidaymakers – and this created a juxtaposition exploited by cartographers. Rand McNally, a renowned map publisher, for example, published a collection of maps entitled Air Age Map of the Global Crisis in 1949.

These set out “the growing line-up of countries and peoples behind the two rival ways of life competing for power in the 20th Century” – that is, capitalism as embodied by the US, and Soviet and Chinese communism.

Those who bought it were told: “Keep this map folder! It may have great historic significance a generation from now.”

Magazine insert from 1950s with a series of geopolitical maps.
This 1950s map published by Rand McNally was produced as part of a marketing campaign for Airwick air freshener, but also sought to inform the US public about the spread of communism.
Rand McNally

New world order

Donald Trump’s return to office has revived talk of a world moving beyond the assumptions of the postwar order – weakening alliances, acting unilaterally, treating territory as leverage. At the same time, maps remain one of the most trusted forms of evidence in public life.

A Mercator-shaped worldview, widely used by digital maps, can distort reality – for example, making Greenland appear much larger than it is.

Cartographers have long known the strengths and limitations of Mercator, but Trump’s approach to foreign policy is a further reminder of the perspective we lose when we depend on the standardised views of Earth that digital maps encourage (some have also speculated that Mercator’s exaggeration of Greenland’s area heightens its real estate appeal to Trump).
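The Greenland effect is easy to quantify. On a Mercator map both axes stretch by a factor of sec(latitude), so local areas are inflated by the square of that factor relative to the equator. A quick check (illustrative only; the latitudes are rough figures for Greenland, not precise boundaries):

```python
import math

def mercator_area_inflation(lat_deg):
    """Mercator's local area scale relative to the equator: both map
    axes stretch by sec(latitude), so areas inflate by its square."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# No inflation at the equator; around Greenland's mid-latitude (~72 N)
# areas appear roughly ten times too large, and the factor climbs past
# thirty near its northern reaches (~80 N).
for lat in (0, 72, 80):
    print(lat, round(mercator_area_inflation(lat), 1))
```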

Maps are powerful things and in times of crisis, or rapid change, we turn to them to help explain events and locate ourselves within them. But they can be just as much about arguments as they are facts – and Trump knows this.

The maps of the 1940s and 1950s were about a fresh (American) perspective to create a new world order. They instilled in Trump’s generation a sense of the geopolitical rivalries that tend to get washed out of the generic digital maps most widely consumed today.

Nearly 80 years on, this order may be creaking – but the maps are still there to remind us of what’s at stake.

The Conversation

James Cheshire receives funding from the Economic and Social Research Council.

ref. The cold war maps that can help us rethink today’s Arctic conflict – https://theconversation.com/the-cold-war-maps-that-can-help-us-rethink-todays-arctic-conflict-274058

Octopus numbers exploded around the UK’s south-west coast in 2025 – a new report explores this rare phenomenon

Source: The Conversation – UK – By Bryce Stewart, Associate Professor, Marine Ecology and Fisheries Biology, University of Plymouth; Marine Biological Association

Cold spray whipped off the ropes as a diesel engine throbbed in the background. One by one, empty shellfish pots came over the side of the fishing boat, occasionally containing the remnants of crab and lobster claws and carapaces. Something strange was going on.

Then the culprit revealed itself – a squirming orange body surrounded by a writhing tangle of tentacles. A few minutes later, three more of these denizens of the deep came up in a single pot, and then, incredibly, a final pot rose from the water completely rammed full of them, more than a dozen together in a squirming mass.

This was a familiar scene off the south coasts of Devon and Cornwall early last year, as a bloom of the common octopus (Octopus vulgaris) emerged, the first time anything like this had been seen for 75 years. In fact, commercial catches of common octopus in 2025 were almost 65 times higher than the recent annual average. A new report now sheds light on these blooms: their history, the causes and the consequences.

The common octopus, despite the name, is not normally common in British waters. Instead, it favours the warmer climes of southern Europe, the Mediterranean and north Africa. But, occasionally, such as in 1900, 1950 and now 2025, numbers explode off the south-west coast of England, changing marine food chains and disrupting the local fishing industry.

Common octopuses take the ultimate “live fast, die young” approach to life. Despite the large size they can attain, they generally only live for less than two years, with females dying after their eggs hatch. The males also die after breeding. This means octopus populations are highly affected by changes in environmental conditions.

Octopus blooms have previously been rare in the UK, but emerging evidence from long-term marine monitoring of the western Channel suggests that these episodes coincide with sustained periods of unusual warmth in both the ocean and atmosphere.

These “marine heatwaves” can stimulate rapid population growth, whether the octopus are locally established or newly arrived from the south. These warm conditions are often accompanied by unusually low salinity in coastal waters, a signal that points to fresher water entering the region. While salinity itself is unlikely to drive the outbreaks, it serves as a valuable tracer of the water’s origin.

The fresher conditions may stem from high river flow from major French Atlantic rivers such as the Loire, or from prolonged easterly winds over the Channel during the cooler months (October to March). These processes could help transport octopus larvae across the Channel from northern France and the Channel Islands.

Taken together, the combination of warmth, altered circulation and low-salinity signatures suggests that climate-driven shifts in ocean and atmospheric dynamics underpin these outbreaks.

From crisis to opportunity?

Those early scenes of octopus consuming catches in crab and lobster pots continued as 2025 rolled on. But they didn’t just stop at crustaceans. Piles of empty scallop shells were found in many pots, sometimes with remnants of flesh still attached.

Scallops don’t normally go into crab and lobster pots (unless they have lights in them, which these ones didn’t), so the only explanation is that octopus were actively putting scallops in pots to stock up their larder, consuming them at leisure later.

However, fishers are nothing if not adaptable. They soon realised that there was a lucrative export market for octopus and began targeting them. One boat fishing out from Newlyn in Cornwall brought home over 20 tonnes of octopus, worth £142,000, from just three days fishing.

Between £6.7 million and £9.4 million worth of common octopus was landed on the south coast of the UK from January to August 2025. However, not all fishers benefited, and for most boats, octopus catches suddenly dropped off in August. With other shellfish fisheries also declining dramatically last year – lobsters by 30% and brown crabs and scallops by over 50% – many fishers worry about a future in which there is nothing left to catch.

So, what does the future hold? Given the link with climate change, the extensive reports of octopus breeding and a recent appearance of juvenile octopuses in UK waters, the continued presence of the common octopus seems likely.

If a bloom the size of last year’s occurs again soon, future fisheries should be guided by sustainable and ethical principles that help diversify opportunities for fishing fleets, while leaving enough octopus in the sea to be enjoyed by the hundreds of divers and snorkellers who loved watching these amazing creatures last year.


The Conversation

Bryce Stewart receives funding from DEFRA, Plymouth City Council, Devon County Council and the Crown Estate (OWEC Programme)

Emma Sheehan receives funding from DEFRA and Natural England.

Tim Smyth receives funding from the Natural Environment Research Council through their National Capability funded project AtlantiS NE/Y005589/1

ref. Octopus numbers exploded around the UK’s south-west coast in 2025 – a new report explores this rare phenomenon – https://theconversation.com/octopus-numbers-exploded-around-the-uks-south-west-coast-in-2025-a-new-report-explores-this-rare-phenomenon-269723

Scientists once thought the brain couldn’t be changed. Now we know different

Source: The Conversation – UK – By Laura Elin Pigott, Senior Lecturer in Neurosciences and Neurorehabilitation, Course Leader in the College of Health and Life Sciences, London South Bank University

Master1305/Shutterstock

For much of the 20th century, scientists believed that the adult human brain was largely fixed. According to this view, the brain developed during childhood, settled into a stable form in early adulthood, and then resisted meaningful change for the rest of life.

Today, the concept of neuroplasticity, the brain’s ability to change its structure and function in response to experience, is a central principle of brain science. The brain can change throughout life, but not without limits, not instantly and not effortlessly.

Neuroplasticity therefore reframes the brain as neither rigid nor infinitely malleable, but as a living system shaped by experience, effort and time.

The roots of neuroplasticity can be traced to the mid-20th century. In 1949, psychologist Donald Hebb proposed that connections between neurons, the brain’s nerve cells, become stronger when they are repeatedly activated together.

This principle later became known as “Hebbian learning”. At the time, Hebb’s idea was considered relevant mainly to childhood development. Adult brains were still thought to be relatively unchangeable.
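The Hebbian rule can be sketched as a toy calculation. This is a deliberately minimal caricature of the idea (a single scalar update with an arbitrary learning rate), not a model of real neurons:

```python
def hebbian_update(weights, pre, post, lr=0.25):
    """One Hebbian step: each connection strengthens in proportion to
    how strongly its input and the output cell are active together."""
    return [w + lr * p * post for w, p in zip(weights, pre)]

# Two inputs that repeatedly fire along with the output develop strong
# connections; the input that stays silent is never reinforced.
weights = [0.0, 0.0, 0.0]
for _ in range(20):
    weights = hebbian_update(weights, pre=[1, 1, 0], post=1)
print(weights)  # -> [5.0, 5.0, 0.0]
```

The point of the caricature is simply that repetition, not any single event, is what builds the connection, which is the intuition the rest of this article develops.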

That assumption has since been overturned. From the late 20th century onward, studies showed that adult brains can reorganise in response to learning, changes in sensory input, or physical injury. Sensory changes include alterations in vision, hearing or touch due to training, loss of input or environmental change.

More recently, advances in brain imaging have allowed researchers to observe these changes directly in living people. These studies show that learning alters patterns of brain activity and connectivity across the lifespan.

Neuroplasticity is now understood not as a rare exception, but as a basic property of the nervous system. It operates continuously, within biological limits shaped by age, genetics, prior experience and overall brain health.

How the brain changes

Neuroplasticity involves changes in how existing brain cells communicate with one another.

When you learn a new skill, specific synapses, the tiny junctions where neurons pass signals to each other, become stronger and more efficient. Neural networks, which are groups of neurons that work together, become better organised. Communication between brain regions involved in that skill improves.




At the cellular level, plasticity involves changes in synaptic structure, the release of chemical messengers called neurotransmitters, and the sensitivity of receptors that receive those signals. So, it changes how neurons communicate with each other.

In a few areas of the adult brain, particularly the hippocampus, which plays a key role in memory, limited adult neurogenesis, the creation of new neurons, also occurs. Although influenced by factors such as stress, sleep and physical activity, its significance in humans is still debated.

Hand with a blue pen points to the right hippocampus on a MRI scan
The hippocampus is constantly rewiring to store new information.
FocalFinder/Shutterstock

Crucially, neuroplasticity is experience-dependent. The brain changes most reliably in response to repeated, focused and meaningful engagement that requires attention, effort and feedback. Passive exposure to information has far less impact.

What strengthens and weakens plasticity

Over the past decade, research has identified several factors that strongly influence how plastic the brain can be.

1. Practice and challenge are essential.

Repeatedly engaging in tasks that stretch your abilities leads to changes in both brain activity and brain structure, even in older adults.

2. Physical exercise is one of the most powerful enhancers of plasticity.

Aerobic activity increases levels of brain-derived neurotrophic factor, or BDNF, which supports neuron survival and strengthens synaptic connections. Regular exercise is consistently linked to better learning, memory and overall brain health.

3. Sleep plays a critical role in consolidating brain changes.

During deep sleep, important neural connections are strengthened while less useful ones are weakened, supporting learning and emotional regulation.

Woman asleep in bed
Sleep is essential for brain health.
Prostock-studio/Shutterstock

4. Chronic stress can seriously impair plasticity.

Long-term exposure to stress hormones is associated with reduced complexity of neural connections in memory-related brain regions and heightened sensitivity in threat-processing systems, undermining learning and flexibility.

When plasticity works against us

One of the most important and often misunderstood aspects of neuroplasticity is that it is value-neutral. The brain adapts to repeated experiences whether those experiences are helpful or harmful.

This helps explain why conditions such as chronic pain, anxiety disorders and addiction can become self-reinforcing. Through repeated patterns of thought, feeling or behaviour, the brain learns responses that are unhelpful but deeply ingrained, a process known as maladaptive plasticity.

The hopeful side of this insight is that plasticity can also be deliberately directed toward recovery. Psychological therapies such as cognitive behavioural therapy are associated with measurable changes in brain activity and connectivity, particularly in networks involved in emotional regulation. Rehabilitation after stroke or brain injury relies on the same principles, using repeated, task-specific practice to compensate for damaged areas.

Clearing up common myths

Perhaps the most persistent myth is that neuroplasticity means the brain can change rapidly or without limits. In reality, meaningful neural change takes time, repetition and sustained effort, within biological constraints.

Another misconception is that plasticity disappears after childhood. While children’s brains are especially flexible, strong evidence shows that plasticity continues throughout adulthood and into older age.

Claims that brief brain-training programmes dramatically increase intelligence or prevent dementia are not supported by solid scientific evidence. Meaningful brain change happens most when learning is challenging, varied and connected to real life.




Activities such as learning a language, exercising regularly, playing a musical instrument, or engaging in complex social interaction are far more effective at strengthening the brain than tapping through app-based puzzles.

In short, brain-training games can be fun and mildly useful, but they train you to play games well, not to think better overall.

Our understanding of neuroplasticity has come a long way since Hebb’s early ideas. What was once thought impossible is now accepted scientific fact. Embracing neuroplasticity means recognising that brains can change, while remaining realistic about how slowly and selectively that change occurs.

More than a century ago, Spanish neuroscientist Santiago Ramón y Cajal wrote that every person can become the sculptor of their own brain. Modern science shows that this sculpting never truly ends. It simply requires effort, patience and persistence.

The Conversation

Siobhan McLernon receives funding from the Burdett Trust for Nursing.

Laura Elin Pigott does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Scientists once thought the brain couldn’t be changed. Now we know different – https://theconversation.com/scientists-once-thought-the-brain-couldnt-be-changed-now-we-know-different-271252

Why some people speak up against prejudice, while others do not

Source: The Conversation – UK – By Mete Sefa Uysal, Lecturer in Social & Political Psychology, University of Exeter

guruXOX/Shutterstock

When people encounter racism or discrimination, they don’t all respond in the same way. Some calmly challenge the remark, some file a complaint, others confront the offender aggressively – and many say nothing at all.

A common assumption is that speaking up against discrimination is a matter of personal courage, political ideology or education. But my recent research suggests that people’s cultural values, shaped by their backgrounds and life experiences, strongly influence how they confront discrimination.

Confrontation comes in very different forms. Some choose to confront non-aggressively (such as calmly pointing out prejudice, explaining why it is offensive or sharing how it impacts them emotionally). Others prefer more aggressive confrontation (such as shouting back, threatening or physical retaliation). These responses carry different risks and consequences, both for the person confronting and for wider social relations.

My recent study with colleagues Thomas Kessler and Ayşe K. Uskul looked at how people’s cultural views of honour affected how they might respond to an insult or discrimination.

Honour is often misunderstood as a personal trait or a relic of “traditional” cultures. In psychology, honour is better understood as a cultural system that develops when people cannot rely on institutions – such as courts or police – to protect them from harm or injustice.

Honour cultures, common in Latin America, north Africa, south and west Asia and the southern US, often developed under harsh historical, social and ecological conditions, for example, scarce resources unprotected by central authorities.

In such contexts, reputation matters. Maintaining honour requires projecting a reputation for toughness. It means signalling a readiness to retaliate against perceived threats or insults to protect oneself and one’s family.

Being seen as weak or passive can invite further mistreatment, so individuals and groups learn to defend their dignity themselves. Honour codes travel with people through migration, continuing to shape how they interpret threats, insults and unfair treatment in new social environments.

The role of honour

Our study sought to understand how internalised honour codes shape responses to discrimination. Specifically, we looked at two communities: south and west Asians in the UK and Turkish migrants in Germany.

People in these communities may have grown up in an honour culture, where personal retaliation against insults is expected. Or, they may have learned these codes from parents and grandparents, while living in countries where such codes are not widespread.

Our findings show that honour codes play a central role in how people say they would confront discrimination. We asked participants a series of questions about their views on honour, as well as their experiences of discrimination. We then asked them to rate the different confrontation styles that they might use when someone discriminates against them based on their ethnic or cultural background.

We found that broadly, people who experienced discrimination more frequently said they were more likely to confront it. But the style of confrontation they chose depended strongly on their cultural values.

A key finding concerned collective honour: the belief that you have a responsibility to defend the dignity of your ethnic or cultural group. Participants who strongly endorsed collective honour reported they were more likely to confront prejudice in any form, whether calmly or aggressively. For them, remaining silent felt like allowing an insult to stand.

A stand up to racism protest
Protest: one way to respond to discrimination.
Martin Suker/Shutterstock

In contrast to those who view honour as a collective quality, there are also those who view honour as more of an individual, internalised quality. This can manifest in how people rate the importance of family reputation, and their readiness for retaliation against insults.

People who emphasised family reputation values – concern with maintaining respectability and avoiding shame – said they were more likely to confront discrimination in non-aggressive ways. They also reported being less likely to respond aggressively. Maintaining dignity, for them, meant self-control.

Those who strongly endorsed retaliation values – belief that failing to respond to insults signals weakness and dishonour – were more likely to confront prejudice aggressively and less likely to use calmer strategies. In other words, honour does not push people uniformly toward violence or to remain silent. Different honour codes lead to very different ways of speaking up.

Interestingly, broader structural factors – such as financial insecurity or distrust in the police and authorities – played a smaller role than expected in how people responded to discrimination. What mattered most was how often people actually experienced discrimination.

Repeated exposure to discrimination increased the likelihood of aggressive confrontation, especially among those who endorsed retaliation norms. This suggests that speaking up is shaped less by abstract perceptions of injustice and more by life experiences.

Why this matters

Political rhetoric around immigration has contributed to a broader climate of hostility and suspicion of some communities. This is evident in the waves of anti-immigration protests the UK has seen in recent years, and their effects on communities. According to Home Office data released in late 2025, police recorded 10,097 racially or religiously aggravated offences in August 2024 alone.

Against this backdrop, those who speak up – whether in calm advocacy or in heated confrontation – risk being judged against a narrow standard of “civility” that disregards the personal and cultural experiences that shape their responses.

For some people, walking away preserves dignity. For others, it undermines it. This does not mean all confrontational responses are equally effective or desirable.

But it does mean that judging these responses without understanding their cultural roots risks blaming individuals for navigating systems that were never designed to protect them. If we want more constructive conversations about discrimination and how we speak up against it, our research can offer a place to start.

The Conversation

Mete Sefa Uysal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why some people speak up against prejudice, while others do not – https://theconversation.com/why-some-people-speak-up-against-prejudice-while-others-do-not-272867

How interwar fiction made sense of an increasingly noisy world

Source: The Conversation – UK – By Anna Snaith, Professor of Twentieth-Century Literature, King’s College London

The logo of the Anti-Noise League. Quiet/Noise Abatement League Catalogue

Noise was first considered a public health issue in interwar Britain – called the “age of noise” by the author and essayist Aldous Huxley. In this era, the proliferation of mechanical sounds, particularly the rumble of road and air traffic, the blare of loudspeakers and the rising decibels of industry, caused anxiety about the health of the nation’s minds and bodies.

Interwar writers, such as Virginia Woolf, George Orwell and Jean Rhys, tuned in to the din. Their fiction is not just an archive of past sound-worlds but also the place where sound became noise and vice versa. As sound historian James Mansell has argued: “Noise was not just representative of the modern; it was modernity manifested in audible form.”

We now have more data and scientific evidence on the effects of environmental noise. The World Health Organization recognises noise, particularly from road, rail and air traffic, as one of the top environmental health hazards, second only to air pollution.

In the interwar period, without comprehensive data on noise and health, early campaigners relied on narrative. They created a particular story about noise and nerves to galvanise the public into keeping it down.

A comic strip mocking the Anti-Noise League
A comic strip mocking the Anti-Noise League by Ernie Bushmiller (1941).
Swann

In 1933, the first significant UK noise abatement organisation, the Anti-Noise League, was founded by physician Thomas Horder. The league consisted of doctors, psychologists, physicists, engineers and acousticians (physicists concerned with the properties of sound) who lobbied government for a legislative framework around noise.

They sought to educate the public on the dangers of needless noise through exhibitions, publications and their magazine, Quiet.

Their campaigns drew attention to the very real health effects of environmental noise. But they also saw noise as waste: something to be eliminated in the pursuit of a maximally productive and efficient citizenry.

They drew on ideas of Britishness associated with what they called “acoustic civilisation” (or teaching the nation to be quieter) and “intelligent” behaviour to enact a programme of noise reduction as sonic nationalism.

Noise in modernist fiction

This interwar preoccupation with unwanted sound is also a sonic legacy of the first world war. Exposure to the deafening din of artillery, exploding shells and grenades caused catastrophic auditory injury. So much so, that the din was associated with loss of life and the devastating effects of shell shock.

The extreme noise of warfare also pushed doctors and psychologists to study how sound affects health. This work continued into the 1930s through government-backed bodies such as the Industrial Health Research Board. As a result, people in the interwar years became much more aware that the everyday sounds of machines and traffic could also be harmful.

But it wasn’t only doctors and acousticians who wrote about noise. Authors such as Rebecca West and H.G. Wells worked with the Anti-Noise League, while others, like Winifred Holtby, publicly refuted their findings. But more broadly, in the pages of interwar fiction, modernist writers engaged deeply with the shifting noisescapes around them.

The unprecedented noise levels of the wars, together with the proliferation of sounds in urban and domestic spaces and the auditory training required by new forms of sound technology, caused an attentiveness to sound and hearing. This was harnessed both metaphorically and structurally in the period’s literature.

Modernist writers such as Woolf, Orwell and Rhys listened intently to machines and the sound worlds they created. Once we start to listen for it, noise is everywhere in fiction of the period.

Proletarian factory novels of the 1930s such as Walter Greenwood’s Love on the Dole (1933) or John Sommerfield’s May Day (1936) draw new attention to toxic and harmful high decibel industrial environments.

Interwar novels such as Virginia Woolf’s Mrs Dalloway (1925) or George Orwell’s Coming Up for Air (1939), each with first world war veteran protagonists, register urban noise via the auditory effects of the conflict zone, or a kind of communal noise sensitivity, as well as through the healing or connective properties of sound. In Dorothy Sayers’ The Nine Tailors (1934), a character is (spoiler alert) killed by the sound of a church bell.

Rhys’ short story Let Them Call It Jazz (1962) is set in London in the years following the second world war. It depicts the hostile environment faced by immigrants, such as those arriving from the Caribbean on HMT Empire Windrush, as protagonist Selina Davis is imprisoned for noise disturbance. She has been singing Caribbean folk songs in a “genteel” suburban neighbourhood.

The tale is one of cultural identity, the resistant power of sound, and the politicisation of noise. Black music is a form of sonic resistance; noise is both a silencing strategy for bodies and practices deemed “aberrant” and a resistant practice that exceeds and disrupts exclusionary codes of value and hierarchy.

These works, and many more, demonstrate that modernist writers, if we listen carefully, are theorists of sound who responded in complex ways to their shifting soundscapes. They counter the association of noise with negative affect or “unwanted” excess, by finding aesthetic and political possibility in noise.


The Conversation

Anna Snaith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How interwar fiction made sense of an increasingly noisy world – https://theconversation.com/how-interwar-fiction-made-sense-of-an-increasingly-noisy-world-272846