Thai villagers have moved four times to escape rising sea levels – life on the climate-change frontline

Source: The Conversation – UK – By Danny Marks, Assistant Professor in Environmental Policy and Politics, Dublin City University

Danny Marks and a researcher walking along a small wooden pathway to the village. Danny Marks

The village of Khun Samut Chin, 50km southwest of Bangkok, Thailand, is a small, rustic fishing village similar to thousands scattered across Asia – except that it is slowly being swallowed by the sea.

Much of the country’s coastline faces severe erosion, with around 830km of it retreating at rates exceeding one metre per year. But in this village, the situation is far worse. Erosion occurs at three to five metres annually, the land subsides by one to two centimetres each year, and since the 1990s around 4,000 rai (6.4km²) of land has already been lost to the sea.

All that remains of the original site is a Buddhist temple, now standing alone on a small patch of land that juts out into the sea – so much so that locals call it “the floating temple”.

The severe erosion is partially due to climate change, but has been compounded by other human-driven factors. Upstream dams, built to provide flood control and irrigation to farmers, have reduced sediment flows in the Chao Phraya River delta, where the village is located.

Excessive groundwater extraction by nearby industries has increased land subsidence. Meanwhile, the construction of artificial ponds for commercially farming shrimp has led to widespread clearing of mangrove forests that once served as a buffer against erosion.

An image showing a line of concrete and bamboo dykes.
A wall of small concrete and bamboo dykes put in place as part of an attempt to stop coastal erosion.
Danny Marks

People move away

My new research has found that villagers have been forced to move away from the sea four times, losing both land and livelihoods in the process. The government has not provided compensation for damaged homes or financial assistance to help them relocate.

Many younger villagers, weary of constant displacement and finding it increasingly hard to catch fish as sediment makes the sea shallower, have left for jobs in Bangkok on construction sites, in factories and in other workplaces. Those who remain are mostly older villagers. Today, the local school has only four pupils, making it the smallest in Thailand.

Khun Samut Chin lies at the forefront of climate change. An estimated 410 million people, 59% of them in tropical Asia, could face inundation from sea level rise by 2100. Without concerted efforts to cut emissions, many more coastal communities around the world will face similar struggles in the years to come.




Read more:
How AI can improve storm surge forecasts to help save lives


In theory, formal adaptation plans are government-led strategies designed to help communities cope with climate change. These plans assume that the state will decide when, where and how people should move, build protective structures like seawalls, and provide funding to affected communities.

In practice, however, as seen in Khun Samut Chin and many other places across Asia, low-income and relatively powerless coastal communities are often left either to abandon their homes or to try to stay put with little or no government support, even when they ask for help.

Not giving up

Wisanu, the village leader, says that Thai politicians have prioritised urban and industrial centres because they hold more voters and economic power. A government official told me that high land costs and limited budgets make relocation unfeasible. Instead, the state has erected bamboo walls as a temporary fix, which have slowed, but not stopped, the erosion.

Villagers are frustrated that the government has yet to implement any large-scale projects and that they are repeatedly asked to take part in consultations and surveys without any tangible results. Nor has the government provided much support to offset reduced incomes from fishing, or improved transportation links, which remain sparse.

Coastal erosion in Thailand.

In response, the villagers have taken matters into their own hands. They have initiated a homestay programme. About 10 households, including the leader’s, host tourists who pay 600–700 Baht (£13-£16) per night, with 50 Baht going to a community fund for erosion mitigation efforts, such as purchasing or repairing bamboo dykes.

They market the programme through Facebook and other social media platforms as a place where visitors can experience life at the frontline of climate change, visit the temple, and help by replanting mangroves and buying food from the villagers. Wisanu, whose household manages five homestays, told me that the programme “enables us not to get rich but lets us walk”.

The villagers also believe that the programme helps raise awareness of their plight. In addition, they have lobbied the local government to keep the school open and to reconstruct a storm-damaged health centre.

This village offers a glimpse into what many others will likely face in the future. It shows that “managed retreat” is often not managed at all, or at least not by the state. Global frameworks like the Paris agreement and Intergovernmental Panel on Climate Change reports assume that governments have the capacity and political will to plan and fund coastal adaptation efforts.

Khun Samut Chin, however, shows how far reality can diverge from these assumptions: the sea encroaches, the state is absent, and villagers are left to mostly fend for themselves.

Yet they refuse to give up. They continue to stay, host tourists, replant mangroves, repair bamboo dykes and resist the demise of their village. They fight not only against erosion but also political neglect. If governments and global institutions fail to help them, this community will be washed away not by the water alone, but also by our inaction.

The Conversation

Danny Marks receives funding from a seed grant from Utrecht University’s Water, Climate and Future Deltas Hub (entitled: “Human costs of shrinking deltas: Adaptation pathways of vulnerable groups to sea-level rise in three Asian deltas”).

ref. Thai villagers have moved four times to escape rising sea levels – life on the climate-change frontline – https://theconversation.com/thai-villagers-have-moved-four-times-to-escape-rising-sea-levels-life-on-the-climate-change-frontline-267278

A history of the dukes of York

Source: The Conversation – UK – By Mark McKinty, Early Career Researcher in Spanish Studies, Queen’s University Belfast

From New York City to Duke of York Island in Antarctica, the Dukedom of York has a wider cultural resonance than you might immediately realise.

The Duke of York military slow march can often be heard ringing out during the Changing of the Guard in London, and one of the city’s best-known theatres carries the name. The same is true of pubs in places like London and Belfast, and a second world war battleship and a passenger steamer have shared the name too.

Equally, the holders of the title Duke of York have, for over six centuries, held a prominent position in British royalty and society. Customarily conferred on the second son of the reigning monarch, this dukedom has been closely associated with being the “spare to the heir” – the brother born to support the crown rather than inherit it.

Close to the sovereign but destined for a different journey, the dukes of York have a story of privilege and dutiful service, with a liberal peppering of scandal and twists of fate.

The current holder of the dukedom, Prince Andrew, has agreed to no longer use the titles and honours conferred upon him – the first time this has happened in the dukedom’s history.

This is a long history of men whose lives were shaped by their unique position within the monarchy. Proximity to power without possession may be the defining factor, but the dukes of York have often flipped that rule on its head, actively or accidentally – almost half of these “spares” found themselves becoming king, one way or another.

The HMS Duke of York
HMS Duke of York on an Arctic convoy to Russia in 1942.
Wikipedia/Imperial War Museum

There have been 11 men officially styled Duke of York and three holders of the title Duke of York and Albany – a fusion of two titles, one from Scotland and one from England, devised as a demonstration of unity following the 1707 Act of Union.

Edmund of Langley was the first duke, granted the title by his father Edward III in 1385. He is the Duke of York who appears in Shakespeare’s Richard II, a play named after the duke’s nephew, son of Edward the Black Prince.

Following the legal principle of male primogeniture, Edward of Norwich, 2nd Duke of York, inherited the title upon his father’s death in 1402. Edward was killed at the Battle of Agincourt in 1415 and, without children, the title passed to his nephew Richard of York – 3rd Duke of York.

Richard’s death in battle in 1460, during the Wars of the Roses, then left the title to Edward Plantagenet. When Edward won the Battle of Towton in March 1461, subsequently becoming Edward IV, the Dukedom of York merged with the crown and became extinct, bringing to a close the 76-year existence of this first iteration of the title.

Incredibly, these are the only examples of the title being inherited. In more than 560 years since, no Duke of York (or Duke of York and Albany) has directly passed the title to a legitimate heir. Of the subsequent ten men bearing either title, four died without heirs, and five found themselves heir to the throne following the death of their elder brother (or his abdication, as happened in 1936) and then king. Prince Andrew, meanwhile, has no male heir, having had two daughters.

The duke who disappeared

Edward IV – himself a former Duke of York – recreated the dukedom for his second son, Richard of Shrewsbury, thus starting the long association of this title with the second-born son. Richard was married at the age of four and was one of the Princes in the Tower, who disappeared and were presumed dead in 1483, when Richard was aged nine.

The dukes who became kings

The title was revived in 1494 for Henry Tudor but again merged with the crown when he became King Henry VIII (1509). The same happened to Charles Stuart, who was given the dukedom in 1605 but also became king (Charles I, 1625).

A portrait of James Stuart.
James Stuart, the duke who gave his name to New York City.
Wikipedia

James Stuart, second son of Charles I, was technically Duke of York from birth, but this was formalised in 1644. In 1660, James was granted the parallel Scottish title of Duke of Albany. New York state, its capital Albany, and New York City derive their names from this duke. James succeeded his childless brother Charles II and became James II of England and James VII of Scotland in 1685, when the title merged with the crown. James was later deposed in the 1688 revolution.

In 1892 Queen Victoria bestowed the title on her grandson, Prince George, the second son of the Prince of Wales (the future Edward VII). However, this creation came following the death of George’s older brother. He became George V (1910) and the title merged with the crown.

George made his second son, Prince Albert, Duke of York in 1920. Initially more than comfortable in the shadow of his elder brother Edward VIII, Albert unexpectedly became George VI in the abdication crisis of 1936 – the year of three kings. Because of this duke, the future Queen Elizabeth II was initially styled “Princess Elizabeth of York”.

Three double dukes

Although the separate dukedoms of York and Albany have at times been held simultaneously by the same person, three men in the 18th century held the unified “double dukedom” of Duke of York and Albany – and all died without heirs.

They were Ernest Augustus of Brunswick-Lüneburg, who served in the nine years’ war and the war of the Spanish succession; Prince Edward, who was briefly heir-presumptive; and Prince Frederick Augustus, second son of George III.

This Duke of York served a lengthy period as commander-in-chief of the British Army, including during the Napoleonic wars. He is perhaps the most likely inspiration for the “Grand Old Duke of York” rhyme. He is also memorialised on the Duke of York Column where Regent Street meets The Mall in London.

The Duke of York Column in London
The Duke of York Column in London, erected to honour Prince Frederick Augustus, also known as the Grand Old Duke of York.
Wikipedia/Prioryman, CC BY-SA

Queen Victoria again separated these titles, creating her fourth son, Prince Leopold, Duke of Albany in 1881, and her grandson Prince George Duke of York in 1892.

Andrew became Duke of York in 1986 when he married Sarah Ferguson. While he has agreed to no longer be styled as such, he has not technically been stripped of the title. With no male heir, however, the dukedom will become extinct upon his death regardless.

Although it could be assumed that the title would have been recreated for Prince Harry as the second son of King Charles III, his self-imposed exile and ongoing controversies mean that the future of the Dukedom of York remains uncertain. Perhaps the next revival will take some consideration.

The Conversation

Mark McKinty does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A history of the dukes of York – https://theconversation.com/a-history-of-the-dukes-of-york-268145

Why does putting back the clocks an hour disrupt us so much?

Source: The Conversation – UK – By John Groeger, Professor of Psychology, School of Social Sciences, Nottingham Trent University

It’s not just you – the autumn clock change really does feel that bad. Cast Of Thousands/Shutterstock

The disruption of sleeping and waking patterns from the daylight saving clock change reveals a great deal about our everyday reliance on the interaction of sleep pressure and circadian clocks.

First, you need to understand the intricate changes happening in your body on the night the clocks go back an hour. On Saturday evening, assuming we are not in bright light, our bodies will begin the daily chore of secreting melatonin, a key hormone for the timing of sleep. It will accumulate in the bloodstream and, a few hours later, reach its peak concentration before declining steadily until morning.

Melatonin does not make most of us sleep, and certainly doesn’t keep us asleep. It is more like a reminder, signalling that sleep should not be far away. Even brief periods of normal electric light can delay or even stop this sleep signal, depending on its brightness and wavelength, or colour.

In the evening as melatonin rises, the heat generated by our internal organs increases to its highest level of the day, followed by a drop – which is another sleep signal. This is why having a hot bath before bedtime can help us to sleep.

The body’s core temperature continues to drop for the first couple of hours of sleep, which is mostly slow wave sleep. This is when more of the neurons in the brain are firing simultaneously, and when our heart rate slows and becomes more regular during this first episode of deep sleep. Our coldest core body temperature more or less coincides with the highest level of melatonin, showing the synchrony of these two circadian timing signals.




Read more:
Sleep quality, circadian rhythm and metabolism differ in women and men – new review reveals this could affect disease risk


A minute before 02:00 on Sunday 26 October, our body’s timing systems and the clocks will probably be aligned. Our internal core will be approaching its coldest temperature. As the body heats again, and the melatonin signal decreases, another circadian process begins – the slow, sustained release of cortisol, which will culminate on waking.

If melatonin is a sleep signal, then cortisol is a signal to wake. Unless we are very stressed during the daytime or drink a great deal of caffeine, it will be at its strongest at the time we typically wake. This is why waking up can sometimes seem both energising and stressful, and why sleep is more difficult when we are stressed.

These three critical bodily timing systems, melatonin, core body temperature and cortisol, are synchronised by a central clock in the suprachiasmatic nucleus of the brain, which co-ordinates the time of the clocks in each cell of the body. The pattern of each signal repeats about every 24 hours, but can be disrupted by different aspects of our environment such as light, vigorous exercise and stress.

Woman set at desk rubbing her eyes.
It can take a few days to adjust to daylight saving time.
Nicoleta Ionescu/Shutterstock

These cycles are not fixed at exactly 24 hours. They can be a few minutes shorter or longer than 24 hours. This enables our sleep-wake regimen to gradually change with the seasons.

But the change is slow. Abrupt changes – flying east or west (which extends or shortens sunlight exposure, affecting melatonin), heat waves and cold snaps (raising or lowering core body temperature), or stress (which increases daytime cortisol) – disrupt this regimen. We just haven’t evolved to cope with sudden changes.

It will take days for the biological and actual clock to realign. Just as flying from London to New York takes more adjustment time than New York to London, the springtime change often feels gentler, because it seems to be easier to move your clock forward than backwards.

We are likely to lose out on sleep in the morning, particularly REM sleep, which kicks in later and is involved in emotion regulation. Our biological clock will still begin the cortisol-induced daily waking process at the same time it did the day before. But you will be awake as it peaks, which may result in a deflated mood.

This disruption is not the same for all of us. About one in 100 of the general population have a genetic disorder called delayed sleep phase syndrome, which makes it impossible to sleep until the early hours of the morning. Their melatonin levels increase much later than in other people, which means they will probably benefit from the clocks going back, if only for a short while.

Similarly, about ten to 20 in 100 late-adolescent children – compared with adults – are biologically driven to initiate sleep later. For them, temporarily, sleep may align more closely with the rest of the household. But they too will be sleepier in the morning.




Read more:
Can’t sleep? Your ability to adapt to shiftwork and the changing seasons may be determined by your genes


Another group in the population, about 1% of those in middle age, feel they need to go to bed far earlier than most, usually in the early evening, and wake very early in the morning. It isn’t clear why advanced sleep phase syndrome is more frequent in this age group, although the circadian system seems to weaken as we age. This group is more compromised by the clocks being put back.

The autumn clock change is also often difficult for menopausal women who experience hot flushes – their body clock appears to be advanced, and they tend to need to sleep earlier. Clocks going backwards means they will need to wait longer for sleep than they might wish, and will wake earlier.

The daylight saving disruption rarely lasts more than a week. But one is left asking why we put our bodily clocks under this abrupt strain. We challenge the synchrony of our bodily clocks, for the sake of fleeting moments of additional light.

The Conversation

John Groeger does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why does putting back the clocks an hour disrupt us so much? – https://theconversation.com/why-does-putting-back-the-clocks-an-hour-disrupt-us-so-much-268016

England’s new phonics target sets schools an almost impossible task

Source: The Conversation – UK – By Alice Bradbury, Professor of Sociology of Education, UCL

SpeedKingz/Shutterstock

The target for the proportion of children passing the phonics screening check – a test of how well children aged five and six in England can “decode” words – has been raised to 90%.

This increase, from 84%, comes as part of the government’s mission to “drive up standards”. It also marks a further commitment to the use of systematic synthetic phonics to teach children to read.

Systematic synthetic phonics has become dominant in education policy and in English primary schools, particularly in the last 15 years. Teachers are now well versed in teaching children how to decode words, which means that they say the right sounds – phonemes – in relation to the letters or groups of letters they see on the page.

Children who don’t pass, meaning they don’t correctly sound out the required number of the 40 words in the test, have to take it again in year two. But results in the phonics screening check have plateaued for nearly a decade. This suggests that however hard schools try, and however good they become at preparing children for the test, some children will still struggle to master phonic decoding.

It is a well-established idea in assessment research that when a high-stakes test is introduced, the proportion of children reaching the benchmark increases initially, as teachers become familiar with how to prepare children for the test. However, a ceiling figure is then reached within a few years.

In line with this, national figures for the phonics screening check show that the proportion of children passing initially increased. Pass rates rose from 58% in 2012 to 81% in 2016, as teachers began to prioritise systematic synthetic phonics teaching and learnt how to prepare children for the test.

But, since then, the pass rate has plateaued at about 80%. The only exception is a post-pandemic dip to 75% in 2022. This suggests that despite the dominance of systematic synthetic phonics as an approach, about 20% of children do not find this system works for them at this age.

Meanwhile, the cumulative figure for children who pass the test in either year one or year two stands at around 89%. This suggests that 10% of children simply need more time to pass the phonics screening check, or more input in order to master the skill of decoding. The remaining 10% who never pass may have additional learning needs – which have likely been identified long before the test – or will need to learn to read in a different way.

What this new target does is effectively require the system as a whole (for the target is a national one, not a school-level one) to ensure that those 10% of children who pass in year two, instead pass the first time around.

The importance of age

So, what can schools do to ensure that all children pass in year one? The figures show that boys are less likely to pass at this point – 76% compared to 84% of girls in 2025. Children born in the summer months are also less likely to pass, as they are younger when they take the test: 73% of August-born children pass, versus 84% of September-borns.

Department for Education data on phonics screening check pass rate by birth month:

Bar chart showing attainment by birth month.
Percentage of pupils meeting the expected standard in the phonics screening check by month of birth in English state funded schools.
Department for Education

The phonics screening check is so closely related to month of birth that it could be argued that it is a test of age rather than decoding. By the time the test is repeated in year two, the gap between August- and September-borns has narrowed to 7%, and 85% of August-born children pass by this stage.

The disadvantage gap

Moreover, the disadvantage gap – the difference in achievement between children from disadvantaged backgrounds and their more well-off peers – is significant. For the phonics screening check, it’s 17%, meaning that 67% of disadvantaged pupils pass compared to 84% of their peers. This would suggest that if the government wants more children to pass the test, finding ways to reduce the impact of disadvantage on children’s learning would be a highly effective way of improving the figures overall.

What remains, of course, is a deeper question of why the number who pass the phonics screening check should be such a key focus. Research has found no clear evidence that the phonics screening check improves how well children can read at primary school, or that it reduces the attainment gap.

It has also been critiqued for reducing children’s enjoyment of reading. International comparison data shows enjoyment has declined since the early 2000s, and the proportion of children in England who really enjoy reading is far below the international average.

In the meantime, this is a target that is going to be very difficult to reach. It may result in an even more intense focus on phonics than we have seen thus far, at the expense of other aspects of learning to read.

The Conversation

Alice Bradbury receives funding from the Helen Hamlyn Trust, which funds the Helen Hamlyn Centre for Pedagogy at UCL. She is a member of the Labour Party and the University and College Union.

ref. England’s new phonics target sets schools an almost impossible task – https://theconversation.com/englands-new-phonics-target-sets-schools-an-almost-impossible-task-267916

Companies now own more than $100 billion in bitcoin – but the shine may be wearing off crypto treasury companies

Source: The Conversation – UK – By Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

Mehaniq/Shutterstock

One American company called Strategy owns more than 3% of all bitcoin in existence. Its executive chairman, Michael Saylor, is the pioneer of a new business model where publicly listed companies buy cryptocurrency assets to hold on their balance sheet.

Strategy, formerly called MicroStrategy, first bought US$250 million (£187 million) worth of bitcoin in mid-2020 during the depths of the COVID economic slump. As it continued to buy bitcoin, its share price soared, and it kept buying. As of October 2025, Strategy held 640,418 bitcoin, worth around $70 billion.

In the years since, more than 100 other public companies have followed Saylor’s lead and become bitcoin treasury companies, together holding more than $114 billion of bitcoin. There’s been a new rush into crypto treasury assets in 2025 following the general crypto enthusiasm of the new Trump administration.

But holding bitcoin assets also comes with some big risks, particularly given the volatility of cryptocurrency prices, and the share prices of some of these companies are now coming under pressure.

In this episode of The Conversation Weekly podcast, we speak to Larisa Yarovaya, director of the centre for digital finance at the University of Southampton, about whether bitcoin treasury companies are the future of corporate finance, or another speculative bubble waiting to burst.

This episode of The Conversation Weekly was written and produced by Katie Flood, Mend Mariwany and Gemma Ware. Mixing and sound design by Michelle Macklem and theme music by Neeta Sarl.

Newsclips in this episode from CBC News, Bloomberg Television, Yahoo Finance, Altcoin Daily, Strategy and CNBC Television.

Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here. A transcript of this episode is available on Apple Podcasts or Spotify.

The Conversation

Larisa Yarovaya is affiliated with the British Blockchain Association.

ref. Companies now own more than $100 billion in bitcoin – but the shine may be wearing off crypto treasury companies – https://theconversation.com/companies-now-own-more-than-100-billion-in-bitcoin-but-the-shine-may-be-wearing-off-crypto-treasury-companies-268127

Sanctions on Russia have failed to stop the war so far – will Trump’s latest package be any different?

Source: The Conversation – UK – By Sergey V. Popov, Senior Lecturer in Economics, Cardiff University

Donald Trump has finally decided to hit Russia with sanctions – the first package he has imposed since he came back to the White House in January.

The sanctions target Rosneft and Lukoil, Russia’s two largest oil companies, as a retaliation for Vladimir Putin’s refusal to agree to a ceasefire in Ukraine. The announcement came in the wake of the decision to call off a planned summit between the two leaders in Budapest next month.

The US treasury secretary, Scott Bessent, said in a statement: “We encourage our allies to join us in and adhere to these sanctions.” In fact the EU has imposed 19 rounds of sanctions against Russia since the full-scale invasion in 2022.

The UK government has passed sanctions which it estimates have cost Russia more than £28 billion since the start of the war. And the Biden administration also repeatedly imposed sanctions on Russia after the invasion.

In March 2022, I wrote a piece for The Conversation explaining why I thought the sanctions imposed on Russia in the aftermath of the full-scale invasion of Ukraine wouldn’t topple Putin. Sanctions often fail to achieve their goals, and the Russian economy has specifically been set up to resist western sanctions.

Three years on, Russia’s land grab continues to ravage Ukraine, albeit clearly with much less success than expected by Russia’s generals. A lot of this resistance is due to Ukrainian military heroism and creativity, and a lot of it due to humanitarian and military assistance from the EU, the US and other allies. But how much of it was due to sanctions is open to debate.

Russia’s economy is now focused on waging war. And even in these days of drone combat, to wage war you need soldiers. The amounts paid to people joining up in Russia are unprecedented. Not only is their enlistment pay about the price of a decent apartment in a regional capital, but any debt they hold, up to 10 million rubles (£76,500), is wiped out.

Their salary is not a large amount by western standards – a policeman in New York earns a comparable amount in a year. But when the alternative in Russia is being a security guard for £400 per month, it is clear why many people who see no future – especially convicts who are also given pardons – enlist in the Russian armed forces.

Reservists and volunteers mean that Russia is able to maintain its fighting force. While the sanctions clearly hurt Russia’s economy, having sufficient soldiers is priority number one – and this is still largely unaffected.

Russia is managing to pay for the war, sanctions or no sanctions, by passing on the cost to the public. VAT is forecast to rise from 20% to 22% in 2026, and the revenue threshold at which businesses become liable to pay it will come down. This will lower investment in things like barber shops, but investment in military production will not be affected.

The sanctions do hurt the Russian economy – lifting sanctions is always the most important demand anytime Russia is consulted about a ceasefire – but not so much that the war economy is slowing down.

Finding loopholes

Thus far, Russia has managed to circumvent sanctions. Europe still buys large quantities of oil and gas from Russia (more than it has given Ukraine in aid, in fact). Moscow has also exported massive amounts to India and China, but the quantities are expected to fall sharply as a result of the US sanctions.

Earlier this year, the US president also announced a massive tariff hike on Indian exports in retaliation for India buying Russian energy supplies.

All of this will make the war more expensive – but it will not stop it. For a start, Russia controls a big “shadow fleet” of ships that have been transporting its oil and other banned goods such as military equipment and stolen Ukrainian grain. The EU has imposed port bans on 117 ships believed to be part of this shadow fleet. But experience suggests that this is by no means a foolproof way of preventing them from operating.

Death by 1,000 cuts

It’s tempting to imagine sanctions as trying to cause death by 1,000 cuts. The EU has made 19 cuts, so we are still 981 away – 980 with Trump’s latest move.

The west could have done more and it could have done it sooner. It could have acted as early as 2008 when Russia signalled its aggressive intent by invading Georgia. It could have imposed more effective sanctions after Russia annexed Crimea and parts of eastern Ukraine in 2014.

In any case, these sanctions are designed as if against a western democracy: had they been imposed on the US or the UK, they would have changed governments. In western democracies, governments hold power at the discretion of the voters, who can take those mandates back. Sanctions against autocracies, where power is not in the hands of the people, need to be different.

The good news is that the Trump administration is finally doing something besides putting out the red carpet for Putin. There is hope.

The Conversation

Sergey V. Popov does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Sanctions on Russia have failed to stop the war so far – will Trump’s latest package be any different? – https://theconversation.com/sanctions-on-russia-have-failed-to-stop-the-war-so-far-will-trumps-latest-package-be-any-different-268228

Russia turns to an old ally in its war against Ukrainian drones: the weather

Source: The Conversation – UK – By Peter Lee, Professor of Applied Ethics and Director, Security and Risk Research, University of Portsmouth

Russia has long used harsh weather as a defensive ally. During Napoleon’s 1812 invasion, his Grand Army was defeated as winter closed in – the ground became impassable and logistical support collapsed. Similarly, Hitler’s Operation Barbarossa in 1941-42 was halted by heavy rains and deep mud followed by freezing temperatures.

Today, in a different kind of war, Russia is again turning to its old ally, harsh weather – but this time to help in its offensive against Ukraine.

Ukraine’s extensive use of small, low-cost drones has transformed attack and defence strategies across the front lines. The Modern War Institute reports that drones are responsible for around 70% of Russia’s battlefield casualties, although it is unclear which kind of drones are killing in greater numbers: loitering one-way attack drones (known as “kamikaze drones”) or quadcopter first-person view (FPV) drones, armed with small explosives.




Read more:
How drone attacks are changing the rules and the costs of the Ukraine war


Beyond direct strikes, small drones play a vital role in intelligence gathering, surveillance and reconnaissance. They allow Ukrainian units to identify targets and coordinate ground operations with far greater precision. Real-time aerial imagery enables artillery crews to rapidly adjust fire, making bombardments more accurate – and more lethal.

Drones have become the eyes and, increasingly, the hands of Ukraine’s ground forces. Mass use of FPV and one-way attack drones has significantly improved defensive effectiveness, blunting assaults by the larger and often better-equipped Russian ground forces through real-time targeting data and precise strikes.

Both sides in the war also regularly deploy electronic jamming, rendering radio-controlled FPV drones inoperable. Russia has, of necessity, become a global leader in this area.

The jamming disrupts the radio links and video feeds that pilots rely on for navigation and targeting. This places Ukraine’s forces, which rely heavily on drones to offset Russia’s advantages, at a considerable disadvantage.

Bad weather and drones

When bad weather combines with electronic interference, the effect is even more damaging. Snow, fog, wind and cold already limit drone endurance and visibility – sharply reducing Ukraine’s technological edge in aerial reconnaissance and precision battlefield drone strikes.

Russia, in contrast, gains relative advantage in such conditions. Its older, heavier ground-based systems – tanks, artillery and armoured vehicles – are more resilient against poor weather. The battlefield becomes a place where Russia’s attritional approach to war grinds out bloody advances.

Small quadcopter drones are light, have limited endurance and are easily affected by weather. Take the DJI Mavic 3 series, used by many Ukrainian units for frontline reconnaissance. It is only effective in temperatures between –10°C and +40°C and winds below about 12 metres per second. Strong gusts or freezing weather can quickly push this small drone off course.

More advanced Ukrainian systems, such as the winged Shark uncrewed aerial system, can operate from –15°C to +45°C and withstand winds up to 20 metres per second. Yet even these are restricted to dry conditions: rain or snow can ground them.

In cold conditions, batteries drain more quickly, cutting both flight time and operational range. Icing can ground large numbers of drones if it changes the characteristics of the quadcopter propellers – ice makes propeller blades thicker, heavier and less aerodynamic, reducing performance.

Winged drones can suffer from wing-tip icing, which changes their flight characteristics – reducing lift, increasing drag and the danger of stalling, and degrading control. Fog and snow also reduce visibility, limiting the ability to identify or track targets.
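
To make these flight-envelope limits concrete, here is a minimal sketch in Python of a pre-flight weather check. The temperature and wind figures are those quoted above for the DJI Mavic 3 and the Shark system; the DroneSpec structure, the can_fly function and the assumption that neither drone flies in rain or snow are this example’s own, not part of any real mission-planning software.

    from dataclasses import dataclass

    @dataclass
    class DroneSpec:
        name: str
        min_temp_c: float   # lowest operating temperature, °C
        max_temp_c: float   # highest operating temperature, °C
        max_wind_ms: float  # maximum tolerable wind, metres per second

    # Operating limits as quoted in this article; treat them as approximate.
    MAVIC_3 = DroneSpec("DJI Mavic 3", -10.0, 40.0, 12.0)
    SHARK = DroneSpec("Shark UAS", -15.0, 45.0, 20.0)

    def can_fly(spec: DroneSpec, temp_c: float, wind_ms: float, precipitation: bool) -> bool:
        """Return True if reported conditions fall inside the drone's stated envelope."""
        if not (spec.min_temp_c <= temp_c <= spec.max_temp_c):
            return False  # too cold or too hot: battery drain and icing risk
        if wind_ms > spec.max_wind_ms:
            return False  # strong gusts can push the drone off course
        if precipitation:
            return False  # both systems quoted here are restricted to dry conditions
        return True

    # A cold, windy, wet autumn day grounds both systems.
    for drone in (MAVIC_3, SHARK):
        print(drone.name, can_fly(drone, temp_c=-5.0, wind_ms=15.0, precipitation=True))

On the example’s cold, windy, wet day, the Mavic 3 is grounded by wind alone and both systems by precipitation – exactly the kind of weather window the next section describes Russia exploiting.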

Russian offensive

In October 2025, Russia timed several large ground assaults to coincide with poor weather. This approach exploited conditions that significantly reduced Ukraine’s ability to defend itself with drones. Fog, rain and cloud cover made small reconnaissance drones unreliable or even unflyable. Visibility dropped so low that attacks on individual soldiers became far less effective.

In theory, Ukraine’s allies can offset some of this loss through satellite intelligence. US reconnaissance satellites can still gather valuable information on cloudy days by using synthetic aperture radar to detect ground movements. Yet even these advanced systems cannot see visually through thick cloud banks or heavy rainstorms.

Historically, Russia’s severe weather served a defensive purpose, turning back invading armies from Napoleon to Hitler. In the present war, however, Russia is using the same harsh climate offensively, turning natural concealment into a tactical advantage as it advances across Ukrainian ground.

Winter has not yet arrived, but Ukrainian and Russian military planners will be watching the weather. Ukraine’s vaunted ability to innovate and respond to Russian tactics will be tested even further in the months ahead.

The Conversation

Peter Lee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Russia turns to an old ally in its war against Ukrainian drones: the weather – https://theconversation.com/russia-turns-to-an-old-ally-in-its-war-against-ukrainian-drones-the-weather-268019

Ancient antelope teeth offer surprise insights into how early humans lived

Source: The Conversation – Africa (2) – By Megan Malherbe, Research Assistant, Scientific Collection, Institute of Evolutionary Medicine, Faculty of Science, University of Zurich

Understanding what the environment looked like millions of years ago is essential for piecing together how our earliest ancestors lived and survived. Habitat shapes everything, from what food was available, to where water could be found, to how predators and prey interacted.

For decades, scientists studying South Africa’s Cradle of Humankind have tried to reconstruct the landscape in which species like Australopithecus sediba, Paranthropus robustus and Homo naledi once lived. These were hominins that inhabited the region between roughly 2.5 million and 0.25 million years ago. The Cradle of Humankind is a Unesco world heritage site that has remained the single richest source of early human fossils for over 90 years.

A long-standing idea has been that the Cradle experienced a dramatic environmental change around 1.7 million years ago: a shift from woodlands to open grasslands. This shift likely happened as global climates became cooler and drier, with stronger seasonal patterns. These broader changes, linked to the expansion of polar ice sheets and shifts in atmospheric circulation, reduced the availability of year-round rainfall in southern Africa.

Trees and shrubs, which depend on consistent moisture, gave way to hardy grasses better suited to long dry seasons and intense sunlight. In the woodlands, dense trees and shrubs had once provided leafy vegetation for browsing animals. As the landscape opened up, short grasses became dominant, supporting grazing animals.

This supposed sudden transformation was thought to have reshaped the setting in which early humans evolved, possibly influencing their diets, mobility and survival strategies.

But was there really such a sudden switch?

I’m a palaeoecologist who’s part of a team that specialises in reconstructing ancient environments by studying fossil animals. We set out to test the “sudden switch” idea, using a large dataset of fossil antelope teeth. Antelopes (bovids) are particularly useful for reconstructing past environments in Africa: they are abundant in the fossil record, they occupy a wide range of habitats today as well as in the past, and their teeth preserve clear signals of what they ate.

We examined more than 600 fossil teeth from seven well-dated sites in the Cradle, covering a broad time span from 3.2 million to 1.3 million years ago.

The results of our study were striking. Across all seven sites, spanning nearly two million years, the antelopes show consistently strong grazing signals. Grass-eating was dominant throughout the period, challenging the old model of a sudden woodland-to-grassland shift 1.7 million years ago. Instead, the evidence points to a more stable but varied landscape: a mosaic environment. Some fossil species even showed different feeding strategies from their modern relatives, highlighting that ancient antelopes adapted to past conditions in distinct ways.

This tells us more about the world early humans evolved in – but it also reminds us to be cautious. Fossil animals didn’t always behave like their modern relatives, so drawing direct parallels risks oversimplifying the past.

Dating the sites

To interpret the fossils in context, we needed to be sure of when each site formed. Previous work often relied on broad age estimates based on the types of animals found in each sediment layer – a method called biochronology – which could only give a rough idea of when different species lived. This made it difficult to line up fossils from the many cave sites in the Cradle on a reliable timeline. Thanks to recent improvements in radiometric dating, a method that finds the precise age of rocks by measuring how radioactive elements change into other elements over time, the chronology of the Cradle has been refined.

The layers of calcite deposited in caves (known as flowstones) were recently shown by geochronologists to have formed at the same time across multiple sites, providing a regional framework for the whole area. This means researchers can now compare fossils from different caves knowing they represent the same windows of time. It’s a huge step forward in testing whether environmental shifts were truly regional events.

Reading diets from teeth

The method used in this study is called dental mesowear analysis. It records the long-term impact of diet on the tooth surfaces of herbivores throughout their life. In simple terms, different diets wear teeth in different ways:

  • browsers (like kudu or giraffes), which eat leaves and twigs, usually have sharper cusps, because their food causes less wear on the teeth

  • grazers (like wildebeest or zebra), which feed mostly on grasses rich in silica and often covered in grit, develop blunter cusps from heavy tooth grinding

  • mixed feeders show intermediate wear, reflecting generalist behaviour and a diet that shifts with seasons or local vegetation.

By scoring cusp shape and relief on each fossil tooth, we assessed whether past populations leaned more towards browsing or grazing.
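
As a toy illustration of that scoring logic – not the study’s actual protocol, and with scores, thresholds and sample data invented for the example – the short Python sketch below tallies hypothetical mesowear scores into diet categories:

    from collections import Counter

    # Hypothetical per-tooth mesowear scores: 0 = sharp cusps and high relief
    # (browser-like wear); 1 = intermediate (mixed feeding); 2 = blunt cusps
    # and low relief (grazer-like wear). Real mesowear analysis scores cusp
    # shape and occlusal relief by eye against reference specimens.
    tooth_scores = [2, 2, 1, 2, 0, 2, 1, 2, 2, 0]

    DIET = {0: "browser-like", 1: "mixed", 2: "grazer-like"}

    counts = Counter(DIET[s] for s in tooth_scores)
    total = len(tooth_scores)
    for diet, n in counts.most_common():
        print(f"{diet}: {n}/{total} ({100 * n / total:.0f}%)")

A sample dominated by grazer-like wear points to open grassland, while a spread across both ends of the scale suggests a mosaic of habitats – a pattern discussed below.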

The results showed there was a mix of different habitats in this environment at that time: open grassy areas interspersed with patches of trees and shrubs. This would have created a patchwork of ecological niches, offering early humans a diverse range of resources.

Some sites – including the famous Sterkfontein Caves, home to one of the most complete early hominin skulls ever found, “Mrs Ples” – showed a bimodal pattern in tooth wear, meaning that even within the same community, some antelopes were grazing while others were browsing. This suggests that vegetation structure shifted locally or seasonally, and that animals adapted their diets accordingly. They switched between food sources as conditions changed.




Read more:
Elephant teeth: how they evolved to cope with climate change-driven dietary shifts


Lessons from antelope diets

One of the most important findings is that some fossil antelopes fed very differently than their modern relatives. For example, certain groups that today are almost exclusively browsers were much more grass-focused in the Cradle fossil record. Others showed unexpected flexibility, with individuals of the same tribe in the same site adopting different strategies.

This has two key implications.

We cannot always rely on modern analogies. Assuming extinct animals behaved like their living relatives can be misleading, since the fossil record shows surprising shifts in diet. This means reconstructions based only on which species were present may give the wrong impression or oversimplify the reality.

Flexibility was crucial. The fact that antelopes could switch between grazing and browsing indicates that the Cradle’s environment was dynamic, and that survival often depended on adaptability. This echoes what we know about early humans, who also seem to have thrived by exploiting a wide range of resources.

The Conversation

Megan Malherbe is affiliated with the Institute of Evolutionary Medicine at the University of Zurich, and the Human Evolution Research Institute at the University of Cape Town.

ref. Ancient antelope teeth offer surprise insights into how early humans lived – https://theconversation.com/ancient-antelope-teeth-offer-surprise-insights-into-how-early-humans-lived-267169

Japan’s sumo association turns 100 – but the sport’s rituals have a much older role shaping ideas about the country

Source: The Conversation – USA (3) – By Jessamyn R. Abel, Professor of Asian Studies and History, Penn State

Sumo wrestlers Daieisho and Roga compete in a Grand Sumo Tournament bout at the Royal Albert Hall in London on Oct. 19, 2025. AP Photo/Frank Augstein

A visitor to Japan who wanders into a sumo tournament might be forgiven for thinking they had intruded upon a religious ceremony.

Tournaments begin with a line of burly men, wearing little more than elaborately decorated aprons, filing onto a raised earthen stage. Their names are called as they circle a ring made of partially buried bales of rice straw. Turning toward the center, they clap, lift their aprons, raise their arms upward, and then exit without a word.

Then two of those men face each other, crouching, clapping their hands together and stomping on the ground. They pause repeatedly to rinse their mouths with water and toss salt into the ring.

Overseeing their movements is a man outfitted in a colorful kimono and a black hat resembling that of a Shinto priest and holding a tasseled fan. After a subtle gesture with his fan, they finally grapple – and only then would the uninformed observer realize that the performance was an athletic event.

Every sport has its rituals, from the All Blacks rugby team’s pregame haka to the polite handshake between victor and vanquished over the tennis court net. Some, like many sumo rituals, have roots in religious practices. A few hundred years ago, competitions were frequently held at temples and shrines as part of festivals.

Two men in white robes bow, standing on the side of a dirt ring, as another man in white robes sits between them.
Sumo referees perform the Shinto ritual to purify and bless the ring ahead of a tournament at the Royal Albert Hall in London on Oct. 15, 2025.
AP Photo/Frank Augstein

Today, sumo is a modern sport with records, rules and a governing institution that celebrated its 100th anniversary in October 2025. But those religious roots are still visible. The salt the wrestlers throw, for example, is a purifying element. The clapping is a way of drawing the attention of the gods.

As a historian of modern Japan and a scholar of sports and diplomacy, I have seen many ways in which sports are much more than “just a game.” Sport rituals are an important part of those wider meanings. In fact, sumo and its rituals have helped shape foreign perceptions of Japan for at least 170 years.

First impressions

The first sumo tournament known to have been observed by American spectators was held in March 1854, in honor of a treaty establishing diplomatic relations between the United States and Japan. Described in the personal journal kept by Commodore Matthew Perry, the leader of the mission to Japan, the exhibition before gawking American sailors seemed designed to impress.

Before the matches began, the athletes put on a performance of strength, loading the American ships with a gift of some 200 bales of rice from the Japanese government. Perry describes how two dozen huge men, “naked with the exception of a narrow girdle around the loins,” paraded before the American crew before getting to work, each shouldering two 135-pound bales.

If the actual sumo competition was intended to inspire appreciation of Japanese culture, it backfired. Perry’s descriptions of the wrestlers were full of unflattering animal metaphors. He wrote that they resembled “stall-fed bulls” more than human beings and made noises like “dogs in combat.”

At the time, sports as we know them today were just emerging in England and the United States. Some of the earliest rules of soccer were recorded in the 1840s, and baseball’s growing popularity led to the development of professional leagues after the U.S. Civil War.

With this American idea of sports in Perry’s mind, the sumo tournament did not impress him. He called the bouts a “farce” and judged the wrestlers’ physique as one that “to our ideas of athletic qualities would seem to incapacitate him from any violent exercise.”

An illustration in muted colors shows two large men wrestling on a platform between red posts, as a large audience watches.
An illustration of an 1846 sumo tournament by Utagawa Kunisada.
Chunichi.co.jp/Wikimedia Commons

In the mid-19th century, Japan was relatively isolated from the Western world. Most Americans knew almost nothing about the country and considered it backward, even barbaric. The two cultures’ differing ideas of sports meant that sumo only added to American views of Japan as strange and uncivilized.

A competing sport

Sports diplomacy had a more positive impact on American views of Japan in the early 20th century, thanks to a different game: baseball.

After the fall of the shogunate in 1868, the new Japanese government – made up of oligarchs ruling in the name of the Meiji Emperor – employed Americans to help implement reforms. Some of them brought along America’s pastime, which became very popular within a few decades.

By the 1910s and ‘20s, Japanese college teams were regularly traveling to the U.S., where newspapers praised their skills and their sportsmanship.

A black and white photo shows two rows of men in suits posing outside a large white building.
The Osaka Mairuchi baseball team from Japan visits the U.S. White House in 1925.
National Photo Company Collection/Library of Congress/Wikimedia Commons

Some of the rituals in a Japanese baseball game, like a ceremonial first pitch, were familiar to American observers. Others, like a team bow toward the umpire, were quite a contrast, but struck them as superior to the rowdiness of American players and fans.

At the time, Japan’s Westernizing reforms and recent military victories over China and Russia had already improved Americans’ impressions of the country. Former baseball player Harry Kingman, writing about a game he watched during a 1927 stint coaching a Tokyo college team, explained the Japanese turn toward baseball as part of the nation’s modernization.

Sumo, however, continued to be the most popular sport in Japan until the 1990s, when baseball took that title. But the initial popularity of this American import caused some anxiety within the sumo world: A foreign game seemed to be taking over and stealing sumo’s fans.

Amid these changes, professional sumo’s governing institutions, which were divided into competing associations based in Tokyo and Osaka, joined forces. They officially unified in 1925 as the organization that would become today’s Japan Sumo Association.

Can sumo be cool?

Japanese popular culture now captivates people around the world. In 2002, journalist Douglas McGray wrote about the soft power conferred by what he called the nation’s “gross national cool.” But he noted sumo as an exception, blaming its leadership’s insular attitudes.

Perhaps sumo’s biggest hurdle to building an international fan base is its attitude toward foreigners. Immigration is controversial in Japan. The population is relatively homogeneous, and barriers to naturalization are high.

A man in a blue suit shakes hands with a much larger man in a gray suit.
Thomas Foley, then the U.S. ambassador to Japan, presents sumo grand champion Akebono with a letter of appreciation from Secretary of State Colin Powell in 2001.
AP Photo/Tsugufumi Matsumoto

In contrast to sports like baseball, soccer and rugby, where “imported” players abound, there are few foreign sumo wrestlers, and their success seems to rankle. In 1993, a Hawaiian named Akebono became the first foreigner to reach the top rank of “yokozuna,” sparking a temporary hold on recruiting sumo wrestlers from outside Japan.

Constraints were gradually softened, and the number of non-Japanese professional wrestlers has been rising. They still represent a small minority, but their success often sparks discussions about the place of foreigners in the sport.

Though sumo has gained some traction outside of Japan, its rituals still occasionally create negative impressions of Japanese culture. At a tournament in 2018, for example, a local official collapsed while giving a speech. Female medics who rushed to help him were told to leave the sumo ring, considered a sacred space polluted by a woman’s presence. The chairman of the Japan Sumo Association later apologized, but the incident brought criticism that the sumo world was clinging to anachronistic traditions.

Sumo continues to change. A 1926 Tokyo government ban on women’s sumo is no longer in force, and there are now some female wrestlers in amateur clubs. But women are still barred from professional competition.

Tournaments are certainly popular with tourists, but they generally go for a one-time experience. One might ask if sumo can change enough to play an effective role in Japan’s sports diplomacy. The answer depends on whether sumo leaders are more interested in maintaining the sport’s Japanese identity or building global connections.

The Conversation

Jessamyn R. Abel has received funding from the National Endowment for the Humanities, the Japan Foundation, the Northeast Asian Council of the Association for Asian Studies, and the McCourtney Institute for Democracy at Penn State University. She is currently a member of the Board of Directors of the Association for Asian Studies.

ref. Japan’s sumo association turns 100 – but the sport’s rituals have a much older role shaping ideas about the country – https://theconversation.com/japans-sumo-association-turns-100-but-the-sports-rituals-have-a-much-older-role-shaping-ideas-about-the-country-263093

Rift Valley fever: what it is, how it spreads and how to stop it

Source: The Conversation – Africa (2) – By Marc Souris, Researcher, Institut de recherche pour le développement (IRD)

Rift Valley fever (RVF) is a mosquito-borne viral disease that mainly affects livestock but can also infect humans. Most human cases are mild, though the disease can occasionally be fatal, and it causes heavy economic and health losses for livestock farmers.

As a researcher, I have contributed to several studies on this mosquito-borne virus.

So, what exactly is Rift Valley fever, how is it treated, and how can it be controlled?

What is Rift Valley fever?

Rift Valley fever is a zoonosis (a disease affecting animals that can be transmitted to humans). It is caused by the RVF virus, a phlebovirus from the Phenuiviridae family (order Bunyavirales). The disease primarily affects domestic animals, mainly cattle, sheep and goats, but also camelids and other small ruminants. It can occasionally infect humans.

In animals, the disease takes a heavy toll: reduced milk production, high mortality among newborns, mass abortions in pregnant females, and death in 10% to 20% of cases. This leads to serious economic losses for farmers.

Most people who get Rift Valley fever have no symptoms or only a mild, flu-like illness. But in a few people, it can become very serious, causing complications such as eye disorders, meningoencephalitis (inflammation of the brain) or haemorrhagic fever. The fatality rate among infected people is around 1%.

How it’s transmitted

In animals, the disease is mainly spread through bites from infected mosquitoes. At least 50 mosquito species can transmit the Rift Valley fever virus, including Aedes, Culex, Anopheles and Mansonia species. Mosquitoes become infected when they feed on animals carrying the virus in their blood, then transmit it to other animals through their bites. In Aedes mosquitoes, vertical transmission – from infected females to their eggs – is also possible, allowing the virus to survive in the environment.

For humans, the most common way to get infected is through direct contact with the blood or organs of an infected animal. This often happens during veterinary work, slaughtering, or butchering.

While it is also possible for humans to get the virus from a mosquito bite, this is not common. No human-to-human transmission has been observed to date.

The origins and spread

A serious outbreak of Rift Valley fever was first reported in Senegal in late September 2025, and the west African country has been battling to control it.

The disease was first discovered in 1931 in the Rift Valley in Kenya in east Africa, during a human epidemic of 200 cases. The virus itself was isolated and identified in 1944 in neighbouring Uganda.

Since then, numerous outbreaks of the disease have been reported in Africa: Egypt (1977), Madagascar (1990, 2021), Kenya (1997, 1998), Somalia (1998), Tanzania (1998), the Comoros (2007-2008) and Mayotte (2018-2019).

In west Africa, the main epidemics affected Mauritania (1987, 1993, 1998, 2003, 2010, 2012), Senegal (1987, 2013-2014) and Niger (2016).

Its spread into the Sahel and west African regions has been largely driven by the movement of livestock, and by environmental factors.

To date, around 30 countries have reported animal and/or human cases in the form of outbreaks or epidemics.

Why and how outbreaks occur

Rift Valley fever re-emerges in cyclical patterns, with major outbreaks occurring in Africa every five to 15 years. The trigger for these outbreaks is closely linked to specific environmental conditions, such as periods of heavy rainfall that create ideal breeding sites for mosquitoes.

In east Africa, epidemics typically follow periods of exceptionally heavy rainfall or flooding in normally dry regions. For instance, the severe outbreaks of 1997-1998 were directly linked to intense rains caused by the El Niño climate phenomenon.




Read more:
West Africa’s trade monitoring system has collapsed – why this is dangerous for food security


In the Sahel region, the relationship with rainfall is less predictable. Outbreaks can appear in unexpected, poorly monitored areas, and genetic analysis of viruses in Mauritania suggests that new strains can be introduced directly from other regions.

A key mystery is how the virus persists in the environment between these major outbreaks. It is believed to survive within a “wild reservoir” of animals – such as certain antelopes, deer, and possibly even reptiles – though this reservoir has not yet been fully identified.

Once an initial outbreak occurs, the virus can spread to new areas through the movement of infected livestock and the accidental transport of infected mosquitoes (for example, in vehicles or cargo), particularly when environmental conditions in those areas are conducive.

Clinical symptoms and treatments

In livestock, adult cattle and sheep may show nasal discharge, excessive salivation, loss of appetite, weakness and diarrhoea.

In humans, after an incubation period of two to six days, most infections are asymptomatic or mild, with flu-like symptoms lasting four to seven days. People who recover from the infection typically gain natural immunity.




Read more:
Preventing the next pandemic: One Health researcher calls for urgent action


However, in a small percentage of individuals, the disease can take a severe turn:

  • Eye lesions affect up to 10% of symptomatic cases. They appear one to three weeks after initial symptoms and can heal on their own or lead to permanent blindness.

  • Meningoencephalitis (inflammation of the brain and meninges) occurs in 2%-4% of symptomatic cases, one to four weeks after symptom onset. Mortality is low, but neurological after-effects are common.

  • Haemorrhagic fever (fever and bleeding caused by damage to the blood vessels) occurs in less than 1% of symptomatic cases, usually two to four days after symptoms begin. About half of these patients die within three to six days.

There is no specific treatment for severe cases of Rift Valley fever in humans; care is supportive.

Surveillance, prevention and control

Veterinary surveillance with immediate reporting and monitoring of infection in animals is essential to control the disease. During outbreaks, controlled culling of infected animals and strict restrictions on the movement of livestock are the most effective ways to slow virus spread.




Read more:
How does Marburg virus spread between species? Young Ugandan scientist’s photos give important clues


As with all mosquito-borne viral diseases, controlling vector populations is an effective preventive measure, though it is challenging, especially in rural areas.

To prevent new outbreaks, animals in endemic regions can be vaccinated in advance. A modified live virus vaccine provides long-term immunity after a single dose, but it is not recommended for pregnant females because it can cause abortions. An inactivated virus vaccine is also available; it avoids these side effects but requires several doses to provide adequate protection.

Threat, vulnerabilities and health risks

People at highest risk of infection include livestock farmers, abattoir workers and veterinarians. An inactivated vaccine for humans has been developed, but it is not yet licensed and has only been used experimentally.

Raising awareness of risk factors is the only effective way to reduce human infections during outbreaks. Key risk factors include:

  • handling sick animals or their tissues during farming and slaughter

  • consuming fresh blood, raw milk or undercooked meat

  • mosquito bites.

It is important to follow basic health precautions when Rift Valley fever appears. Wash your hands regularly. Wear protective gear when handling animals or during slaughter. Always cook animal products such as blood, meat and milk thoroughly. Use mosquito nets or repellents consistently.

The Conversation

Marc Souris receives funding from ANR (Agence Nationale de la Recherche, France) and IRD (Institut de Recherche pour le développement).

ref. Rift Valley fever: what it is, how it spreads and how to stop it – https://theconversation.com/rift-valley-fever-what-it-is-how-it-spreads-and-how-to-stop-it-267309