The Trump administration appears to be laying the groundwork for a possible military escalation against Latin American drug traffickers. Rawpixel.com / Shutterstock
At the start of September 2025, US president Donald Trump sent a naval task force into the Caribbean to tackle drug trafficking in the region. The initiative has led to strikes on four alleged drug boats off the Venezuelan coast so far, killing at least 21 people.
The strikes have been condemned by Venezuela and Colombia, while some international lawyers and human rights groups have questioned their legality. Human Rights Watch, for example, has suggested the strikes amount to “unlawful extrajudicial killings”. However, these attacks are unlikely to stop.
In a post on X on October 3, after US forces killed four people in an attack on a suspected drug boat, US defence secretary Pete Hegseth wrote: “These strikes will continue until the attacks on the American people are over!!!!” Trump claimed, without providing evidence, that this boat was carrying enough drugs to kill 25,000 to 50,000 people.
The Trump administration now looks to be considering moving its campaign in the Caribbean to a second phase. On October 5, while speaking at a US Navy base in Virginia, Trump boasted that drug traffickers are “not coming in by sea any more, so now we’ll have to start looking about the land”. A leaked memo sent to Congress a few days earlier also suggests the US government has decided it is in a “non-international armed conflict” with drug cartels.
Trump’s threats to escalate military pressure against the cartels may be part of a broader campaign to force Venezuela’s leader, Nicolás Maduro, from office. The White House sees his government as illegitimate and has consistently accused Maduro of being a central figure in the Latin American drugs trade. There is little proof that this is the case.
A wider military confrontation with cartels across the region may therefore be unlikely. But it should not be discounted. On October 7, CNN reported that the Trump administration has produced a classified legal opinion that seeks to justify lethal strikes against a list of cartels and suspected drug traffickers.
The opinion argues that the president is allowed to authorise deadly force against a broad range of cartels, beyond those the US government designated as terrorist organisations in early 2025. But is a direct military confrontation really a viable strategy to curtail the power and reach of cartels in the region?
Some observers, including the US-based Washington Office on Latin America, have suggested that “the US military’s overwhelming capacities would allow it to disrupt the activities of a specific criminal group, destroy complexes of drug labs and capture kingpins”.
These moves would not be without their challenges. In response to direct military action, it is possible that the cartels may look to attack US military personnel and civilians across the region. The cartels are vindictive in nature and have a history of targeting law enforcement, military personnel and government officials throughout Latin America.
Shortly after becoming president of Mexico in 2006, Felipe Calderón declared a “war on drugs” and deployed military force against the cartels. They retaliated violently, with many public officials assassinated in broad daylight. The cartels may well respond in a similar way if US forces launch operations against them.
This could include retaliation within US borders. In its 2024 National Drug Threat Assessment report, the US Drug Enforcement Administration (DEA) detailed how the cartels have deep networks within the US. These networks, which span from large cities such as Los Angeles and Chicago to rural areas, provide them with the infrastructure to carry out retaliatory attacks.
The US homeland security secretary, Kristi Noem, revealed in an interview with Fox News on October 5 that “cartels, gangs and terror groups” had already “put bounties on the heads of several federal immigration agents, offering US$10,000 (£7,500) to kill them and US$2,000 for their capture”.
“They’ve released their pictures; they’ve sent them between their networks”, Noem added. “It’s an extremely dangerous and unprecedented situation.” The cartels engage in various other criminal activities in addition to trafficking drugs, including the smuggling of migrants into the US.
The killing of a high-value drug kingpin or the arrest of a cartel boss also does not necessarily bring an end to that organisation. Often, it leads instead to fragmentation and the emergence of new tiers of leaders and groups that are more violent than their predecessors.
Research supports this argument. The killing of the Los Zetas cartel leader, Heriberto “El Lazca” Lazcano, in October 2012 by Mexican marines was followed by higher levels of gang violence in the subsequent years as internal conflict between different factions intensified.
Addiction at home
Fentanyl and other opioids entering the US from Latin America have fuelled the worst drugs crisis in the country’s history. According to the US National Institutes of Health, more Americans were killed by fentanyl-laced pills and other addictive drugs in 2021 alone than in all the wars the US has fought since the end of the second world war.
The DEA says Mexican criminal organisations, including the Sinaloa Cartel, play a key role in producing and delivering fentanyl and other illicit drugs into the US. But, to truly be successful in its war against the cartels, the US government needs to first address the problem of drug addiction at home.
According to a 2023 national survey on drug use in the US by American Addiction Centers, 48.5 million Americans aged 12 and older have battled a substance use disorder. This corresponds to 16.7% of Americans in that age group. A war on drugs needs to be a war against addiction in the US. Anything short of that will only fix the problem temporarily.
Amalendu Misra is a recipient of Nuffield Foundation and British Academy fellowships.
Source: The Conversation – UK – By Naida Redgrave, Senior Lecturer in Creative Writing & Co-Course Leader in Journalism, University of East London
Warning: includes some minor spoilers.
Brides is a warm and relatable story of two 15-year-old British Muslim schoolgirls travelling alone to Syria in 2014.
It’s not the first film to explore post-9/11 and 7/7 Britain through a Muslim lens. Films like My Brother the Devil (2012), Four Lions (2010), and After Love (2020) have each offered nuanced depictions of British Muslimhood. However, Brides is the first to address the personal impact of racism and Islamophobia through the lens of young Muslim women whose choices stem from complex social and emotional factors, rather than a duty to Islam.
The film comfortably passes the Bechdel test, which evaluates gender representation by assessing whether at least two named women engage in a conversation about something other than a man. It also passes the Riz test, an evaluative framework inspired by actor Riz Ahmed’s 2017 speech to the UK House of Commons. It measures whether Muslim characters are portrayed with agency beyond stereotypes of terrorism, oppression, or religiosity.
To achieve both is rare for Muslim representation on western screens and is what makes the film feel so refreshing. Woven throughout are delicate challenges to stereotypes often ascribed to Muslim characters.
Brides tells the story of Doe (Ebada Hassan) and Muna (Safiyya Ingar) who embark on a hazardous journey from the UK to Istanbul. They travel across Turkey and finally to the Syrian border. Many in the UK will recognise the real-world parallel.
Writer Suhayla El-Bushra and director Nadia Fall have stated that Brides reimagines the case of the “Bethnal Green trio”. In 2015, three east London schoolgirls, Amira Abase, Kadiza Sultana and Shamima Begum, fled the UK to become “Isis brides”, leaving their families in shock and generating much media outrage and public fury.
Yet rather than focus on radicalisation, this buddy-girl adventure is interspersed with short flashbacks and longer sequences that contextualise the girls’ desire to escape. These culminate in a racist attack on Doe by a white male classmate and Muna’s suspension from school as she retaliates violently to protect her friend. Before we arrive at this climactic point (shown shortly before the girls reach the border) there are many examples of the everyday racism and Islamophobia that blight their lives.
While the roots of Islamophobia reach deep into western orientalism (the stereotyping of eastern cultures), its modern form has dominated British political debate since the early 2000s. British mosques and Muslim communities, more generally, have persistently been portrayed as breeding grounds for anti-western rhetoric and even terrorism, through a constant stream of online and print stories.
Brides references these through a montage of Islamophobic headlines, such as the Sun’s notorious 2015 claim that one in five British Muslims have sympathy for jihadis.
Brides depicts the real-world consequences of media scaremongering through the various insults and threats that its young heroines are subject to. The unnamed white boy who later attacks shy Doe uses obscene language towards her, prods her hip with a pencil and invades her personal space in the classroom.
The more truculent and outgoing Muna is called a slur by a female classmate. Rather than punish the racist kids, the headteacher moves Muna to a different class and threatens her with the government’s counter-terrorist strategy, Prevent, for retaliating against the racist slur.
The headteacher also tells Doe that she should “rise above” racism after the white boy accuses her of not washing her hair and pulls off her headscarf in the playground.
The attack comes after Doe delivers a charity food parcel to the boy’s home from her community. His sense of personal humiliation is clearly the motivation for his later racist attack. Fall and El-Bushra’s decision to include this detail is striking, as humiliation is often discussed as a driver of misogynistic extremism, but rarely acknowledged as a root of racist or Islamophobic violence.
The uncomfortable classroom scenes echo many real-life incidents and show how, as groups like the Runnymede Trust have pointed out, government policy and the media can fuel Islamophobia in schools and everyday life.
The real Bethnal Green trio grew up in inner city London, but Brides is set in an unspecified coastal area of southern England. By choosing this location, the film again gestures toward the long-term rise of nationalism and Islamophobia in parts of the UK that have been hard hit by recession, under-investment and austerity politics, a trend first noted by researchers in the 2000s.
Girls just want to have fun
The recent wave of Islamophobia and racism has been fuelled by the perception that misogyny is endemic in Muslim communities.
Although various religious doctrines are used to justify or condone violence against women, gendered violence and sexual abuse are social problems that cross all classes, regions and religions within the UK.
Brides highlights this point as Doe’s widowed mother is subject to the violent rages of her white boyfriend, Jon (Leo Bill) who also displays sexualised behaviour towards her daughter. As young women, both Doe and Muna attract unwanted sexual attention from older men of different racial and ethnic backgrounds, both in the UK and Turkey.
This emphasises that the sexualisation of young women is a result of patriarchy, rather than specific communities or religions. This serves as a corrective to numerous stereotypical representations of Muslim women in which they are shown only in relation to dominant men within their communities – often as either victims or terrorists.
Brides explores Muna and Doe’s friendship through banter and shared enjoyment of ordinary teen girl tastes and interests, such as fun fairs, junk food and romance.
In the end, the film is less about terrorism than about the systems that make dehumanisation seem reasonable. Fall’s timely and perceptive film succeeds not only as great female-centred drama, but as an important intervention into the crude racial politics of the contemporary moment.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
As colder months set in, respiratory infections begin to climb: everything from the common cold and flu to COVID-19. It’s a time when healthy lungs matter more than ever. Yet the very tissue that lets oxygen pass from air to blood is remarkably delicate, and habits such as vaping can weaken it just when protection is most needed.
The lungs are often pictured as two simple balloons, but their work is far more intricate. They act as a finely tuned exchange system, moving oxygen from inhaled air into the bloodstream while releasing carbon dioxide produced by the body’s cells.
At the centre of this process lies the blood–air barrier: a paper-thin layer where tiny air sacs called alveoli meet a dense network of hair-thin pulmonary capillaries. This barrier must remain both strong and flexible for efficient breathing, yet it is constantly exposed to stress from air pollution, microscopic particles and infectious microbes.
Vaping can add another layer of strain, and growing evidence shows that this extra pressure can damage the surface that makes every breath possible.
The cloud from an e-cigarette carries solvents such as propylene glycol, flavouring chemicals, nicotine (in most products) and even trace metals from the device itself. When this cocktail reaches the lungs it doesn’t stay on the surface. It seeps deeper, irritating the endothelium – the thin layer of cells lining the blood vessels that mesh with the air sacs.
Healthy endothelium keeps blood flowing smoothly, discourages unnecessary clotting and acts as a selective gatekeeper for the bloodstream – controlling which substances, such as nutrients, hormones and immune cells, can pass in or out of the blood vessels while blocking harmful or unnecessary ones.
My own research group has linked exposure to vaping aerosols to surges in inflammatory signals and stress markers in the blood. Together these findings indicate that the endothelium is struggling to maintain its protective role.
Laboratory work shows that vaping aerosols (even without nicotine) can loosen the tight seal of pulmonary endothelial cells. When the barrier leaks, fluid and inflammatory molecules seep into the alveoli. The result: blood–gas exchange is disrupted and respiratory infections become harder to fight.
COVID-19 is usually thought of as an infection of the airways, but the SARS-CoV-2 virus also injures blood vessels. Doctors now describe the condition as causing endotheliopathies – diseases of the blood-vessel lining. In severe cases, capillaries become inflamed, leaky and prone to clotting. That helps explain why some patients develop dangerously low oxygen levels even when their lungs are not full of fluid: the blood side of the barrier is failing.
The virus exploits a key protein called ACE2, normally a “thermostat” that helps regulate blood pressure and vessel health. SARS-CoV-2 uses ACE2 as its doorway into cells; once the virus binds, the receptor’s protective role is disrupted and vessels become inflamed and unstable.
Vaping and COVID-19: a dangerous combination
My team is using computer models to investigate how vaping may affect COVID-19 infections. Evidence already shows vaping can increase the number of ACE2 receptors in the airways and lung tissue. More ACE2 means more potential entry points for the virus – and more disruption exactly where the blood–air barrier needs to be strongest.
Both vaping and COVID-19 drive inflammation. Vaping irritates and inflames the blood-vessel lining while COVID-19 floods the lungs with pro-inflammatory molecules. Together they create a “perfect storm”: capillaries become leaky, fluid seeps into the air sacs and oxygen struggles to cross the blood–air barrier. COVID-19 also raises the risk of blood clots in the lung’s vessels, while vaping has been linked to the same, compounding the danger.
Vaping can also hinder recovery after a bout of COVID-19. Healing the fragile exchange surface requires every bit of support the lungs can get. Vaping adds extra stress to tissues the virus has already damaged, even if the vaper feels no immediate symptoms. The result can be prolonged breathlessness, persistent fatigue and a slower return to pre-illness activity levels.
The blood–air barrier is like a piece of delicate fabric: it holds together under normal wear but can tear when pushed too hard. Vaping weakens that weave before illness strikes, making an infection such as COVID-19 harder to overcome. The science is still evolving, but the message is clear: vaping undermines vascular health. Quitting, even temporarily, gives the lungs and blood vessels the cleaner environment they need to heal and to keep every breath effortless.
Keith Rochfort receives funding from Research Ireland.
Source: The Conversation – USA – By Cassandra Burke Robertson, Professor of Law and Director of the Center for Professional Ethics, Case Western Reserve University
Former FBI Director James Comey speaks to reporters on Capitol Hill in Washington on Dec. 7, 2018. AP Photo/J. Scott Applewhite
Former FBI Director James Comey was indicted by a federal grand jury on Sept. 25, 2025 – only the second time in history an FBI director has faced criminal charges.
The indictment came just five days after President Donald Trump took to social media to demand that Comey be prosecuted, and three days after Trump installed a former aide as the prosecutor to bring the case.
Legal experts across the political spectrum describe this as an unprecedented political prosecution that breaks fundamental democratic norms and mirrors tactics used by authoritarian leaders worldwide.
As a professor of law, I think Comey’s indictment is momentous because it tests a principle that has protected American democracy for 50 years: Presidents should not direct prosecutors to charge their political enemies.
When leaders can abuse the justice system to target critics and investigators, the rule of law collapses.
On Sept. 20, Trump posted on Truth Social demanding prosecution: “What about Comey, Adam ‘Shifty’ Schiff, Leticia??? They’re all guilty as hell… We can’t delay any longer… JUSTICE MUST BE SERVED, NOW!!!”
After the indictment, Trump called Comey “one of the worst human beings this country has ever been exposed to.”
The Fifth Amendment protects against vindictive and selective prosecution. To prove vindictive prosecution, a defendant must show through objective evidence that the prosecutor acted with “genuine animus” and that the defendant would not have been prosecuted except for that hostility.
Comey listens to the committee chairman at the beginning of the Senate Intelligence Committee hearing on Capitol Hill on June 8, 2017, in Washington. AP Photo/Alex Brandon
As the U.S. Court of Appeals for the 4th Circuit explained in United States v. Wilson in 2001, the government cannot prosecute someone to punish them “for doing what the law plainly allows him to do.” When circumstances create a realistic likelihood of vindictiveness, the burden shifts to the government to justify its conduct.
After Comey’s indictment, Jordan Rubin, a former prosecutor in the Manhattan D.A.’s office, stated: “If the Trump administration’s prosecution of James Comey isn’t ‘selective’ and ‘vindictive,’ then those words have lost all meaning.”
Additionally, three former White House ethics counsels – Norman Eisen, Richard Painter and Virginia Canter – wrote to Congress after Comey’s indictment, saying that in the U.S. “a president should never order prosecutions of his enemies. That happens in Putin’s Russia, and it has happened in other dictatorships, but not here. Until now.”
They concluded: “If the Trump administration can do this, then no American is safe from political prosecution.”
Broken judicial norms
For 50 years since the Watergate scandal that exposed President Richard Nixon’s abuses of power, American presidents have followed a core principle: They must not interfere in decisions about who gets investigated or charged, especially not for political reasons.
The three former ethics counsels emphasized that during their service, they “never once saw” Presidents George W. Bush, Barack Obama or Bill Clinton “suggest that the Department of Justice should prosecute a specific person, much less a political adversary.”
Comey was indicted on two counts – one count of making a false statement to Congress and one count of obstruction of a congressional proceeding, both in connection with his testimony before a Senate committee in September 2020.
The procedural breakdown reveals how fundamentally this case violates norms.
Career prosecutors wrote a memo in September 2025 stating they could not establish probable cause to charge Comey. When Erik Siebert, the interim U.S. attorney overseeing the case, refused to proceed, Trump removed him and installed Lindsey Halligan, Trump’s former personal defense attorney. She has no prosecutorial experience.
Three days later, Halligan brought the indictment. She signed it alone – no career prosecutors put their names on it, as is usually done. The grand jury rejected one of the three charges prosecutors tried to bring, a rare signal of weak evidence.
Comey’s son-in-law, Troy A. Edwards Jr., a federal prosecutor in the same office where Halligan now works, resigned immediately, stating he was leaving “to uphold my oath to the Constitution.”
Prosecuting former law enforcement officials who investigated the country’s leader is not typical of democracies. It is a hallmark tactic of authoritarian rulers seeking to consolidate power.
Russia under Vladimir Putin provides the starkest example. Opposition leader Alexei Navalny was poisoned by security services, imprisoned on politically motivated charges and ultimately died in prison in 2024. Even the lawyers who defended Navalny faced criminal prosecution.
Russian President Vladimir Putin, right, and Hungarian Prime Minister Viktor Orban attend a joint news conference outside Moscow on Feb. 17, 2016. Maxim Shipenkov/Pool Photo via AP
And Hungary’s Viktor Orban created the Sovereignty Protection Office with powers to investigate any organization or person it suspects of receiving foreign support to influence public life or the democratic process. Orban also installed a loyalist chief prosecutor under whose office “numerous high-profile allegations of corruption have been either quietly shelved or investigated perfunctorily before being dropped,” according to EU Today.
The pattern is clear: When leaders can use the justice system to protect themselves, whether by prosecuting investigators, refusing to investigate corruption or intimidating the judiciary, democratic institutions erode and the rule of law becomes a tool of political control rather than a constraint on power.
What this means for America
Legal experts predict Comey will be acquitted – the evidence is weak and the political interference is blatant.
But as a scholar of legal ethics, I believe the damage is already done.
Trump has shown he can force prosecutors to charge his enemies. Future government officials now face an impossible choice: investigate powerful people, as Comey did, and risk prosecution, or decline to investigate and allow corruption to flourish.
Yet there may be a silver lining: When governments break norms this brazenly, they often create legal vulnerabilities.
Legal commentator Ed Whelan has pointed out that Halligan’s appointment may violate a 1986 Office of Legal Counsel memo authored by then-Deputy Assistant Attorney General Samuel Alito, which concluded that only one interim U.S. attorney appointment is permitted under the statute. Former interim U.S. Attorney Erik Siebert had already served that term. If Halligan wasn’t validly appointed, the indictment may be legally void.
The precedent this case sets affects every American. As the former ethics counsels wrote after Comey’s indictment: “No American should have to go through the experience of being prosecuted under these circumstances, and the rest of us should not have to live in fear that it may also happen to us.”
Cassandra Burke Robertson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Three scientists have been awarded the 2025 Nobel prize in chemistry for discovering a new form of molecular architecture: crystals that contain large cavities.
Susumu Kitagawa from Kyoto University, Japan, Richard Robson from the University of Melbourne, Australia, and Omar M. Yaghi from the University of California, Berkeley, in the US, will share a prize sum of 11 million Swedish kronor (£870,000).
The prize recognises the pioneering contributions of the three scientists in the development of something called metal-organic frameworks (Mofs). Mofs are a diverse class of crystalline materials that have attracted much attention in chemistry due to the presence of microscopic open cavities in their structures. They are helping to revolutionise green technology, such as harvesting water from desert air and capturing CO₂.
The widths of the cavities can range from a few angstroms (an angstrom is a unit of length equal to one hundred-millionth of a centimetre) to several nanometres (a millionth of a millimetre). That means they are far too small to see with the naked eye or even with most forms of microscopes. But they’re the perfect size for housing various molecules.
The development of Mofs can be traced back to the late 1950s when researchers started to discover “coordination polymers”. These are materials made up of linked chains of metal ions (atoms that have lost or gained electrons) and carbon-based bridging molecules known as linkers. These materials did not contain cavities, but they were based on the same metal-organic chemistry that would later give rise to Mofs.
In the late 1980s, Robson’s research group reported that some coordination polymers could be prepared as framework-like structures where, crucially, the carbon-based linkers formed three-dimensional arrangements around clusters of liquid solvent molecules. As mentioned in Robson’s research article, this revealed “an unusual situation in which approximately two-thirds of the contents of what is undoubtedly a crystal are effectively liquid”.
In the mid-late 1990s, Yaghi’s group demonstrated that it was possible to prepare coordination polymers that retained their structures even after the solvent molecules were removed from the cavities. This was a surprising result, which dispelled the prevailing assumption that such frameworks are fragile and would collapse if the solvent was removed.
In 1997, Kitagawa’s research group showed that the open cavities could be used to absorb gas molecules. He also showed that, in many cases, the framework itself expands as gas molecules are absorbed into it and contracts as they are released. These coordination polymers with permanent, open cavities came to be known as Mofs.
In 1999, Yaghi constructed a very stable material, MOF-5, which has cubic spaces. Just a couple of grams can have an internal surface area as big as a football pitch.
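The football-pitch claim can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a specific surface area of roughly 3,500 m² per gram, a figure within the range reported for MOF-5 (measured values vary with the method used), and a standard 105 m × 68 m pitch:

```python
# Back-of-envelope check of the "football pitch" claim.
# Assumptions (not from the article): MOF-5 specific surface area
# of ~3,500 m² per gram; a standard 105 m x 68 m football pitch.
SURFACE_AREA_PER_GRAM_M2 = 3_500   # assumed specific surface area (m²/g)
FOOTBALL_PITCH_M2 = 105 * 68       # = 7,140 m²

grams = 2
total_area_m2 = grams * SURFACE_AREA_PER_GRAM_M2
pitches = total_area_m2 / FOOTBALL_PITCH_M2

print(f"{grams} g of MOF ~ {total_area_m2:,} m² ~ {pitches:.2f} football pitches")
```

Under these assumptions, two grams of material carry about 7,000 m² of internal surface, which is indeed roughly one football pitch.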
The discoveries by the three scientists effectively marked the birth of modern Mof chemistry, with many thousands of research articles published on them since.
Wide range of applications
Why are Mofs so interesting for chemists? The microscopic cavities within Mofs provide a unique and controllable location for chemistry to take place. A key application of Mofs is gas storage. In many cases, these materials can hold gases at much higher densities than in their free gaseous state.
This offers significant advantages for green technologies such as fuel-cell-powered vehicles, in which hydrogen fuel has to be transported as efficiently as possible. Many Mofs work particularly well for specific gases, which means they can also help separate gas mixtures in exhaust streams, or capture CO₂ from the air to mitigate the effects of global warming.
Mofs can also act as effective catalysts for chemical reactions taking place in the cavities. One of the key advantages of Mofs as catalysts is that it is relatively straightforward for chemists to switch and swap the metals and carbon-based linkers in order to tune the properties for a particular purpose.
As well as gas molecules, Mofs can also accommodate other small molecules, such as pharmaceuticals. This means they can be used to store and deliver drugs to a particular target, where their porous nature allows for controlled release of therapeutic chemicals.
In recent years, Mofs have shown promise for many other applications, including batteries, thermal energy storage and chemical sensors (devices that can monitor and detect chemicals such as contaminants). Excitingly, there remain many other applications that have yet to be explored.
Despite having been discovered over three decades ago, Mofs remain one of the hottest research areas in materials chemistry and will no doubt remain so for many years to come.
John Griffin receives funding from the Engineering and Physical Sciences Research Council (EPSRC) and the Faraday Institution, and has previously received funding from the Leverhulme Trust.
The formal launch of Ethiopia’s Grand Ethiopian Renaissance Dam in September 2025 made news across the world. There was pomp and ceremony as Africa’s largest hydroelectric dam was officially inaugurated after 14 years and US$5 billion worth of project work.
The project’s completion fulfils a national dream long in the making. It was formally initiated by the late Meles Zenawi, who served as president of Ethiopia from 1991 to 1995 and as prime minister from 1995 until his death in 2012. But the idea of a dam on the Ethiopian Nile dates back even further. As early as the 1950s, Emperor Haile Selassie recognised the potential of a dam for Ethiopia’s developmental needs.
This vision has occupied the Ethiopian national imagination since then. That is why Ethiopians celebrated the launch as a significant national achievement. Prime Minister Abiy Ahmed hailed the dam as a “shared opportunity” for the region, which stands to gain from surplus electricity exports. The dam’s opening was also celebrated with street processions across the country.
The completion of the dam is a major achievement. As a hydropower source, it is expected to deliver practical benefits such as electricity supply to a large number of Ethiopians. More than that, the dam is also being used to galvanise national pride and unity.
It is not surprising that the government has seized this moment. National pride and unity have been low in Ethiopia in recent years.
The quest for national cohesion has occupied Ethiopian state builders from the imperial era to the present day. Previous attempts proved to be largely symbolic, however, with limited transformative power.
The Grand Ethiopian Renaissance Dam risks falling into the same pattern. Its inauguration comes in the aftermath of a largely unresolved conflict in Tigray, and amid intense political fragmentation and ongoing civil wars in Ethiopia. When the war with Tigray ended, others erupted in different parts of the country, most notably in the Amhara and Oromo regions.
Elusive national unity
Ethiopia is a diverse country of over 120 million people. It comprises multiple ethnic, linguistic and religious groups.
Over 80 languages are spoken, with Amharic as the lingua franca. The largest ethnic groups correspond with the most widely spoken languages: Oromiffa, Amharic, Tigrinya and Somali.
Since the late 19th century, various leaders have attempted to construct a nation alongside the state. A state comes into being with the determination of borders and international recognition. Crafting a nation is different. It is the process of establishing a sense of common identity and purpose among the inhabitants of a state. The process of nation-making has been violently contested and fraught in Ethiopian political history.
The results of this contested history can show up in the most unlikely places. For instance, the grand opening of the dam was timed to fall in the month of Meskerem (September). It came a few days before the Ethiopian new year – Enkutatash – which is celebrated on 11 September. This holiday is a major event in the Ethiopian calendar and is usually marked by national celebrations.
However, the holiday is rooted in Ethiopian Orthodox traditions. It carries huge symbolism for at least one particular group – Ethiopia’s Orthodox Christians, who are just over 43% of the population. But others could have felt excluded by this choice.
This has not dissuaded those in power, throughout history, from using such symbolic events and occasions to foster a sense of national unity.
Symbolic nationalism
The imperial regime of Haile Selassie made concerted efforts to unite the nation following the Italian occupation that began in 1935. At this time, state sovereignty was compromised, with some sections of the country still under foreign occupation. The country was divided between those who resisted the occupation and those who collaborated with the Italians.
To build legitimacy and unity, the emperor turned to the heroic efforts of the patriotic resistance against the Italian occupation as a source of national pride. He also engaged in a policy of state modernisation. One of the key developments from this period was the establishment of Ethiopian Airlines in 1945.
The airline has been highly successful – and profitable. It has contributed to strengthening the Ethiopian brand. But it has not, in any visible measure, delivered long-term national unity or prevented political violence in Ethiopia.
The post-1991 government also made use of symbols and events to foster national unity. Amid growing concerns about where the ruling coalition was taking the country, in the lead-up to the year 2007 (2000 according to the Ethiopian calendar), the government organised millennium celebrations. These captured the nation’s imagination and provided temporary respite from political tensions. Indeed, the initial name of the Grand Ethiopian Renaissance Dam was the Millennium Dam.
What these symbolic attempts managed to achieve is short-lived national pride. They also drew attention away from the structural challenges facing the country.
What is needed to achieve long-term national unity
Ethiopia must come to terms with deep-seated inequality, and with historical and contemporary grievances of exclusion and marginalisation. These are the key drivers of recurring cycles of political violence. The country needs to have honest conversations, in non-partisan platforms.
The National Dialogue that is currently underway is a good place to start. The process was launched in 2022 to address key national questions thrown up in part by the disastrous Tigray war. The dialogue seeks to create conducive conditions for national consensus on the root causes of divisions in the country.
National dialogues have been used in diverse national contexts as tools for conflict transformation and for addressing internal conflicts. They have the potential to yield positive results if they are inclusive and have clear implementation plans.
There are fears, however, that the dialogue might be undermined by the Ethiopian government, especially in light of elections in 2026. The government might use the national dialogue to advance its position and present itself in a positive light in relation to its political opponents.
If the National Dialogue fails to achieve its intended objectives, then it will be left to the different communities in Ethiopia to organise their own non-partisan platforms where they can have these urgent national conversations. Here, they would need to find consensus on key areas of national concern and collectively seek solutions. In so doing, Ethiopians would have taken significant steps towards nation building.
Namhla Thando Matshanda receives funding from the National Research Foundation.
Source: The Conversation – Canada – By Ramona Pringle, Director, Creative Innovation Studio; Associate Professor, RTA School of Media, Toronto Metropolitan University
Imagine an actor who never ages, never walks off set or demands a higher salary. That is the pitch behind Tilly Norwood, an AI-generated performer whose debut has polarized the film industry.
But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites.
The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human.
All agree Tilly isn’t human
Ironically, at the centre of this polarizing debate is a rare moment of agreement: all sides acknowledge that Tilly is not human.
Her creator, Eline Van der Velden, the CEO of AI production company Particle6, insists that Norwood was never meant to replace a real actor. Critics agree, albeit in protest. SAG-AFTRA, the union representing actors in the U.S., responded with:
“It’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion, and from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience.”
Their position is rooted in recent history: In 2023, actors went on strike over AI. The resulting agreement secured protections around consent and compensation.
If both sides insist Tilly isn’t human, then the controversy isn’t just about what Tilly is; it’s about what she represents.
Complexity as a starting point
Norwood represents more than novelty. She’s emblematic of a larger reckoning with how rapidly artificial intelligence is reshaping our lives and the creative sector. The velocity of change is dizzying, and the question now is how to shape the hybrid world we’ve already entered.
It can feel disorienting trying to parse ethics, rights and responsibilities while being bombarded by newness. Especially when that “newness” comes in a form that unnerves us: a near-human likeness that triggers long-standing cultural discomfort.
But if all sides agree that Tilly isn’t human, what happens when audiences still feel something real while watching her on screen? If emotional resonance and storytelling are considered uniquely human traits, maybe the threat posed by synthetic actors has been overstated. On the other hand, who hasn’t teared up in a Pixar film? A character doesn’t have to feel emotion to evoke it.
Still, the public conversation remains polarized. As my colleague Owais Lightwala, assistant professor in the School of Performance at Toronto Metropolitan University, puts it: “The conversation around AI right now is so binary that it limits our capacity for real thinking. What we need is to be obsessed with complexity.”
Synthetic actors aren’t inherently villains or saviours, Lightwala tells me, they’re a tool, a new medium. The challenge lies in how we build the infrastructures around them, such as rights, ownership and distribution.
He points out that while some celebrities see synthetic actors as job threats, most actors already struggle for consistent work. “We ask the one per cent how they feel about losing power, but what about the 99 per cent who never had access to that power in the first place?”
Too often missing from this debate is what these tools might make possible for the creators we rarely hear from. The current media landscape is already deeply inequitable. As Lightwala notes, most people never get the chance to realize their creative potential — not for lack of talent, but due to barriers like access, capital, mentorship and time.
Now, some of those barriers might finally lower. With AI tools, more people may get the opportunity to create.
Of course, that doesn’t mean AI will automatically democratize creativity. While tools are more available, attention and influence remain scarce.
Sarah Watling, co-founder and CEO of JaLi Research, a Toronto-based AI facial animation company, offers a more cautionary perspective. She argues that as AI becomes more common, we risk treating it like a utility, essential yet invisible.
In her view, the inevitable AI economy won’t be a creator economy, it will be a utility commodity. And “when things become utilities,” she warns, “they usually become monopolized.”
Where do we go from here?
We need to pivot away from reactionary fear narratives, as Lightwala suggests.
Instead of shutting down innovation, we need to continue to experiment. We need to use this moment, when public attention is focused on the rights of actors and the shape of culture, to rethink what was already broken in the industry and allow space for new creative modalities to emerge.
Platforms and studios must take the lead in setting transparent, fair policies for how synthetic content is developed, attributed and distributed. In parallel, creative institutions, unions and agencies must collaborate now to co-design ethical and contractual guardrails that put consent, fair attribution and compensation at the centre, before precedents are set in stone.
And creators, for their part, must use these tools not just to replicate what came before, but to imagine what hasn’t been possible until now. That responsibility is as much creative as it is technical.
The future will be synthetic. Our task now is to build pathways, train talent, fuel imagination, and have nuanced, if difficult, conversations.
Because while technology shapes what’s possible, creators and storytellers have the power to shape what matters.
Ramona Pringle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Microplastics are the crumbs of our plastic world, tiny pieces that come from bigger items breaking apart or from products like synthetic clothing and packaging. They’re now everywhere. Scientists estimate there are about 51 trillion of these particles floating in the world’s surface waters, and low levels have even been found in South African tap water.
That’s worrying because these particles can carry chemicals and bad bacteria, get eaten by fish and other wildlife, and may end up in our bodies.
We’re water scientists who are looking for ways to solve this problem. In a recent study, we tested a practical fix: two “magnetic cleaning powders” that can attach to microplastics in water; the combined clumps can then be pulled out using a magnet. These materials are called magnetic nanocomposites (think: very fine powders with special surfaces).
The idea is simple: mix a small dose of powder into the water, let it attract and attach to microplastics, and then use a strong magnet to remove the powder-plastic clusters, leaving cleaner water behind.
Around the world, researchers have tried many different methods to capture microplastics, but our study is among the first to show that magnetic nanocomposites can work effectively not only under laboratory conditions but also in real-world samples, including municipal wastewater and drinking water.
This is the first study to use these specific nanomaterials for microplastic removal, proving both their high efficiency and their practical potential. Most existing filters struggle to catch the smallest plastics, the ones most harmful to health and the environment. The next step is to test these powders on a larger scale and develop simple, affordable systems that households and water treatment plants can use.
How well do the powders work?
In our research we found that the powders were able to remove up to 96% of small polyethylene and 92% of polystyrene particles from purified water. When we tried the same approach in both drinking water and water coming out of a municipal wastewater treatment plant, the results were just as strong. In drinking water the removal was about 94% and in treated wastewater the removal was up to 92%.
Another finding from this study is that the size of the plastic particles matters. The smaller the microplastic, the easier it is for the powders to attach to it, because tiny particles can reach more of the powder’s special “sticky” surface. We saw very good results for small plastics (hundreds of micrometres across), but bigger particles (3-5 millimetres) were hardly removed at all. This is because they don’t mix with the powder as well, and offer less surface area relative to their size for the powder to attach to.
In everyday terms, these magnetic powders are excellent for the small microplastics that are hardest to catch with normal filters.
Now for the big question: why do the powders attach to plastic? The powders are engineered with surfaces that are “sticky” for plastics. This stickiness comes from several kinds of forces; most importantly, the plastics and the powders carry opposite electrical charges, which pull them together and hold them in place.
The key point is that the powders are engineered or specifically made to grab onto plastics so that microplastics naturally cling to them in water.
Once the powders attach to the microplastics, we use a strong magnet (with a pull force of about 250kg) to draw the powder–plastic clumps out of the water. The plastics are then separated from the powder by washing and filtration, dried, and weighed, which allows us to check how much plastic was removed. The separated powders are regenerated and reused, while the plastics are safely discarded, preventing them from re-entering the water.
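The weighing step described above amounts to simple mass accounting. Here is a minimal sketch of that calculation; the masses are invented illustrative values, not figures from the study:

```python
def removal_efficiency(mass_added_mg: float, mass_recovered_mg: float) -> float:
    """Percentage of spiked microplastic mass recovered with the magnetic powder."""
    return 100.0 * mass_recovered_mg / mass_added_mg

# Hypothetical example: 50 mg of polyethylene spiked into a water sample,
# 48 mg recovered after magnetic separation, washing, filtration and drying.
eff = removal_efficiency(50.0, 48.0)
print(f"Removal efficiency: {eff:.0f}%")  # → Removal efficiency: 96%
```

In practice the researchers would also run blanks and repeats to account for plastic lost in handling, which this toy calculation ignores.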
We also looked at real-world questions: can you reuse the powders? And are they safe? The powders themselves are made from safe, lab-engineered materials: tiny sheets of carbon and boron nitride (a material also found in cosmetics and coatings) that are coated with magnetic iron nanoparticles. This makes them stable in water, and easy to pull out with a magnet after they’ve captured the microplastics.
After three rounds of use, the powders still removed up to 80% of the plastics. That means you don’t need a new batch of powder every time, which is important for keeping costs down. Treating 1,000 litres of water with this method costs about US$41 (R763), making it competitive with many existing treatment options.
For safety, we tested the liquid left after filtration (the “filtrate”) on plant growth. The results showed minimal to no toxicity, as three different plants were able to grow well in the presence of the filtrate. This is a strong sign that the method is environmentally friendly when used as intended.
What does this study mean for households and cities?
In the short term, magnetic powders could be built into small cartridges or filter units that attach to household or community water systems, helping remove microplastics before the water is used for drinking or cooking.
But the bigger picture is just as important. Microplastics are not only a South African problem but are also a global pollutant that crosses borders through rivers, oceans, and even the air we breathe. Low-cost, scalable solutions such as magnetic powders can make a real difference in resource-limited settings, where advanced filtration systems are too expensive or impractical.
Looking ahead, further work will focus on scaling up the method, testing it under more diverse water conditions, and designing simple, affordable devices that households or treatment plants can adopt.
In short: this specialised magnetic powder can tackle a tiny pollutant with big consequences. With sensible engineering and careful recovery, magnetic nanocomposites offer a promising, practical path to clean water while protecting the ecosystem from microplastic pollution.
Riona Indhur has received the prestigious National Research Foundation (NRF) postdoctoral research fellowship (Scarce Skills).
The project was funded by the National Research Foundation and the Water Research Commission of South Africa.
Source: The Conversation – Africa (2) – By Laura Ferguson, Associate Professor, Population and Public Health Sciences, University of Southern California
Globally, nearly half of the deaths of children under five years are linked to malnutrition. In Kenya, it’s the leading cause of illness and death among children.
Children with malnutrition typically show signs of recent and severe weight loss. They may also have swollen ankles and feet. Acute malnutrition among children is usually the result of eating insufficient food or having infectious diseases, especially diarrhoea.
Acute malnutrition weakens a child’s immune system. This can lead to increased susceptibility to infectious diseases like pneumonia. It can also cause more severe illness and an increased risk of death.
Currently, the Kenyan national response to malnutrition, implemented by the ministry of health, is based on historical trends. This means that if cases of malnutrition have been reported in a certain month, the ministry anticipates a repeat in the same month of subsequent years. No statistical modelling guides these responses, which has limited their accuracy.
The health ministry has collected monthly data on nutrition-related indicators and other health conditions for many years.
Our multi-disciplinary team set out to explore whether we could use this data to help forecast where, geographically, child malnutrition was likely to occur in the near future. We were aiming for a more accurate forecast than the existing method.
We developed a machine learning model to forecast acute malnutrition among children in Kenya. A machine learning model is a type of mathematical model that, once “trained” on an existing data set, can make predictions of future outcomes. We used existing data and improved forecasting capabilities by including complementary data sources, such as satellite imagery that provides an indicator of crop health.
We found that machine learning-based models consistently outperformed existing platforms used to forecast malnutrition rates in Kenya. And we found that models with satellite-based features worked even better.
Our results demonstrate the ability of machine learning models to more accurately forecast malnutrition in Kenya up to six months ahead of time from a variety of indicators.
If we have advance knowledge of where malnutrition is likely to be high, scarce resources can be allocated to these high-risk areas in a timely manner to try to prevent children from becoming malnourished.
How we did it
We used clinical data from the Kenya Health Information System. This included data on diarrhoea treatment and low birth weight. We collected data on children who visited a health facility and met the definition of being acutely malnourished, along with other relevant clinical indicators.
Given that food insecurity is a key driver of acute malnutrition, we also incorporated data reflecting crop activity into our models. We used a NASA satellite to look at gross primary productivity, which measures the rate at which plants convert solar energy into chemical energy. This provides a coarse indicator of crop health and productivity. Lower average rates can be an early indication of food scarcity.
We tested several methods and models for forecasting malnutrition risk among children in Kenya using data collected from January 2019 to February 2024.
The gradient boosting machine learning model – trained on previous acute malnutrition outcomes and gross primary productivity measurements – turned out to be the most effective model for forecasting acute malnutrition among children.
This model can forecast where and at what prevalence level acute malnutrition among children is likely to occur in one month’s time with 89% accuracy.
All the models we developed performed well where the prevalence of acute child malnutrition was expected to be at more than 30%, for instance in northern and eastern Kenya, which have dry climates. However, when the prevalence was less than 15%, for instance in western and central Kenya, only the machine learning models were able to forecast with good accuracy.
This higher accuracy is achieved because the models use additional information on multiple clinical factors. They can, therefore, find more complex relationships.
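To make the modelling approach concrete, the sketch below shows how a gradient boosting model can learn to forecast next month’s prevalence from lagged prevalence and a crop-health proxy. Everything here is synthetic and invented for illustration; the actual study used DHIS2 clinical indicators and satellite gross primary productivity measurements, not this toy data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's features (illustrative only):
n = 500
past_prevalence = rng.uniform(0, 40, n)   # % of children acutely malnourished last month
crop_health = rng.uniform(0.2, 1.0, n)    # normalised GPP-like crop-health index

# Invented relationship: prevalence persists, and worsens when crops fail
next_prevalence = 0.8 * past_prevalence + 10 * (1 - crop_health) + rng.normal(0, 2, n)

X = np.column_stack([past_prevalence, crop_health])
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X[:400], next_prevalence[:400])          # "train" on earlier months

preds = model.predict(X[400:])                     # forecast held-out months
mae = np.mean(np.abs(preds - next_prevalence[400:]))
print(f"Mean absolute error on held-out months: {mae:.1f} percentage points")
```

The real models would add many more lagged clinical features and evaluate against subcounty-level observations, but the train-on-past, predict-the-future structure is the same.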
Implications
Current efforts to predict acute malnutrition among children rely only on historical knowledge of malnutrition patterns. We found these forecasts were less accurate than our models.
Our models leverage historical malnutrition patterns, as well as clinical indicators and satellite-based indicators.
The forecasting performance of our models is also better than other similar data-based modelling efforts published by other researchers.
As resources for health and nutrition shrink, improved targeting to the areas of highest need is critical. Treating acute malnutrition can save a child’s life.
Prevention of malnutrition promotes children’s full psychological and physical development.
What needs to happen next
Making these data from diverse sources available through a dashboard could inform decision-making, giving responders up to six months’ notice to intervene where resources are most needed.
We have developed a prototype dashboard to create visualisations of what responders would be able to see based on our model’s subcounty-level forecasts. We are currently working with the Kenyan ministry of health and Amref Health Africa, a health development NGO, to ensure that the dashboard is available to local decision-makers and stakeholders. It is regularly updated with the most current data and new forecasts.
We are also working with our partners to refine the dashboard to meet the needs of the end users and promote its use in national decision-making on responses to acute malnutrition among children. We’re tracking the impacts of this work.
Throughout this process, it will be important to strengthen the capacity of our partners to manage, update and use the model and dashboard. This will promote local responsiveness, ownership and sustainability.
Scaling up
The Kenya Health Information System relies on the District Health Information System 2 (DHIS2). This is an open source software platform. It is currently used by over 80 low- and middle-income countries. The satellite data that we used in our models is also available in all of these countries.
If we can secure additional funding, we plan to expand our work geographically and to other areas of health. We’ve also made our code publicly available, which allows anyone to use it and replicate our work in other countries where child malnutrition is a public health challenge.
Furthermore, our model proves that DHIS2 data, despite challenges with its completeness and quality, can be used in machine learning models to inform public health responses. This work could be adapted to address public health issues beyond malnutrition, like changes in patterns of infectious diseases due to climate change.
This work was a collaboration between the University of Southern California’s Institute on Inequalities in Global Health and Center for Artificial Intelligence in Society, Microsoft, Amref Health Africa and the Kenyan ministry of health.
This work was supported, in part, by the Microsoft Corporation.
Bistra Dilkina received in-kind support from Microsoft AI for Good for this work.
On October 6 1995, at a scientific meeting in Florence, Italy, two Swiss astronomers made an announcement that would transform our understanding of the universe beyond our solar system. Michel Mayor and his PhD student Didier Queloz, working at the University of Geneva, announced they had detected a planet orbiting a star other than the Sun.
The star in question, 51 Pegasi, lies about 50 light years away in the constellation Pegasus. Its companion – christened 51 Pegasi b – was unlike anything in the textbooks about how planets were thought to look. This was a gas giant with a mass of at least half that of Jupiter, circling its star in just over four days. It was so close to the star (1/20th of Earth’s distance from the Sun, well inside Mercury’s orbit) that the planet’s atmosphere would be like a furnace, with temperatures topping 1,000°C.
The instrument behind the discovery was Elodie, a spectrograph that had been installed two years earlier at the Haute-Provence observatory in southern France. Designed by a Franco-Swiss team, Elodie split starlight into a spectrum of different colours, revealing a rainbow etched with fine dark lines. These lines can be thought of as a “stellar barcode”, providing details on the chemistry of other stars.
What Mayor and Queloz spotted was 51 Pegasi’s barcode sliding rhythmically back and forth in this spectrum every 4.23 days – the telltale signal of a star being wobbled by the gravitational tug of an unseen companion hidden in its glare.
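The size of such a wobble can be estimated from Kepler’s laws. The sketch below computes the expected radial-velocity semi-amplitude of a 51 Pegasi-like star for a circular, edge-on orbit, using illustrative values close to the published ones (4.23-day period, roughly half a Jupiter mass, a star slightly heavier than the Sun). It is a back-of-the-envelope illustration, not the discoverers’ analysis.

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg

def rv_semi_amplitude(period_days: float, m_planet_mjup: float, m_star_msun: float) -> float:
    """Stellar radial-velocity semi-amplitude (m/s) for a circular, edge-on orbit,
    assuming the planet's mass is negligible next to the star's."""
    P = period_days * 86400.0
    m_p = m_planet_mjup * M_JUP
    M_s = m_star_msun * M_SUN
    return (2 * math.pi * G / P) ** (1 / 3) * m_p / M_s ** (2 / 3)

# Illustrative 51 Pegasi b-like values (assumed, not exact published figures)
K = rv_semi_amplitude(period_days=4.23, m_planet_mjup=0.47, m_star_msun=1.06)
print(f"Expected stellar wobble: ~{K:.0f} m/s")  # a few tens of metres per second
```

A wobble of tens of metres per second shifts the stellar barcode by only about one part in ten million of the speed of light, which is why a stabilised spectrograph like Elodie was essential.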
After painstakingly ruling out other explanations, the astronomers finally concluded that the variations were due to a gas giant in a close-in orbit around this Sun-like star. The cover of the issue of Nature in which their paper was published carried the headline: “A planet in Pegasus?”
The discovery baffled scientists, and the question-mark on Nature’s front cover reflected initial scepticism. Here was a purported giant planet next to its star, with no known mechanism for forming a world like this in such a fiery environment.
While the signal was confirmed by other teams within weeks, reservations about its cause lingered for almost three years before alternative explanations were finally ruled out. Not only did 51 Pegasi b become the first planet discovered orbiting a Sun-like star outside our Solar System, but it also represented an entirely new type of planet. The term “hot Jupiter” was later coined to describe such planets.
This discovery opened the floodgates. In the 30 years since, more than 6,000 exoplanets (the term for planets outside our Solar System) and exoplanet candidates have been catalogued.
Their variety is staggering. Not only hot but ultra-hot Jupiters with dayside temperatures exceeding 2,000°C and orbits of less than a day. Worlds that orbit not one but two stars, like Tatooine from Star Wars. Strange “super-puff” gas giants larger than Jupiter but with a fraction of the mass. Chains of small rocky planets all piled up in tight orbits.
The discovery of 51 Pegasi b triggered a revolution and, in 2019, landed Mayor and Queloz a Nobel prize. We can now infer that most stars have planetary systems. And yet, of the thousands of exoplanets found, we have yet to find a planetary system that resembles our own.
The quest to find an Earth twin – a planet that truly resembles Earth in size, mass and temperature – continues to drive modern-day explorers like us to search for more undiscovered exoplanets. Our expeditions may not take us on death-defying voyages and treks like the past legendary explorers of Earth, but we do get to visit beautiful, mountain-top observatories often located in remote areas around the world.
We are members of an international consortium of planet hunters that built, operates and maintains the Harps-N spectrograph, mounted on the Telescopio Nazionale Galileo on the beautiful Canary island of La Palma. This sophisticated instrument allows us to rudely interrupt the journey of starlight that may have been travelling unimpeded at 670 million miles per hour for decades or even millennia.
Each new signal has the potential to bring us closer to understanding how common planetary systems like our own may (or may not) be. In the background lies the possibility that one day, we may finally detect another planet like Earth.
The origins of exoplanet study
Up until the mid-1990s, our Solar System was the only set of planets humanity ever knew. Every theory about how planets formed and evolved stemmed from these nine, incredibly closely spaced data-points (which went down to eight when Pluto was demoted in 2006, after the International Astronomical Union agreed a new definition of a planet).
All of these planets revolve around just one star out of the estimated 10¹¹ (roughly 100 billion) in our galaxy, the Milky Way – which is in turn one of some 10¹¹ galaxies throughout the universe. So, trying to draw conclusions from the planets in our Solar System alone was a bit like aliens trying to understand human nature by studying students living together in one house. But that didn’t stop some of the greatest minds in history speculating on what lay beyond.
The ancient Greek philosopher Epicurus (341-270 BC) wrote: “There is an infinite number of worlds – some like this world, others unlike it.” This view was not based on astronomical observation but his atomist theory of philosophy. If the universe was made up of an infinite number of atoms then, he concluded, it was impossible not to have other planets.
Epicurus clearly understood what this meant in terms of the potential for life developing elsewhere:
We must not suppose that the worlds have necessarily one and the same shape. Nobody can prove that in one sort of world there might not be contained – whereas in another sort of world there could not possibly be – the seeds out of which animals and plants arise and all the rest of the things we see.
In contrast, at roughly the same time, fellow Greek philosopher Aristotle (384-322 BC) was proposing his geocentric model of the universe, which had the Earth immobile at its centre with the Moon, Sun and known planets orbiting around us. In essence, the Solar System as Aristotle conceived it was the entire universe. In On the Heavens (350BC), he argued: “It follows that there cannot be more worlds than one.”
Such thinking that planets were rare in the universe persisted for 2,000 years. Sir James Jeans, one of the world’s top mathematicians and an influential physicist and astronomer at the time, advanced his tidal hypothesis of planet formation in 1916. According to this theory, planets were formed when two stars pass so closely that the encounter pulls streams of gas off the stars into space, which later condense into planets. The rareness of such close cosmic encounters in the vast emptiness of space led Jeans to believe that planets must be rare, or – as was reported in his obituary – “that the solar system might even be unique in the universe”.
But by then, understanding of the scale of the universe was slowly changing. In the “Great Debate” of 1920, held at the Smithsonian Museum of Natural History in Washington DC, American astronomers Harlow Shapley and Heber Curtis clashed over whether the Milky Way was the entire universe, or just one of many galaxies. The evidence began to point to the latter, as Curtis had argued for. This realisation – that the universe contained not just billions of stars, but billions of galaxies each containing billions of stars – began to affect even the most pessimistic predictors of planetary prevalence.
In the 1940s, two things caused the scientific consensus to pivot dramatically. First, Jeans’ tidal hypothesis did not stand up to scientific scrutiny. The leading theories now had planet formation as a natural byproduct of star formation itself, opening up the potential for all stars to host planets.
Then in 1943, claims emerged of planets orbiting the stars 70 Ophiuchi and 61 Cygni – two relatively nearby star systems visible to the naked eye. Both claims were later shown to be false positives, most likely due to uncertainties in the telescopic observations possible at the time – but they nonetheless greatly influenced planetary thinking. Suddenly, the existence of billions of planets in the Milky Way was considered a genuine scientific possibility.
For us, nothing highlights this change in mindset more than an article written for Scientific American in July 1943 by the influential American astronomer Henry Norris Russell. Whereas two decades earlier, Russell had predicted that planets “should be infrequent among the stars”, now the title of his article was: “Anthropocentrism’s Demise. New Discoveries Lead to the Probability that There Are Thousands of Inhabited Planets in our Galaxy”.
Strikingly, Russell was not merely making a prediction about any old planets, but inhabited ones. The burning question was: where were they? It would take another half-century to begin finding out.
The Harps-N spectrograph is mounted on the Telescopio Nazionale Galileo (left) in La Palma, Canary Islands. lunamarina/Shutterstock
How to detect an exoplanet
When we observe myriad stars through La Palma’s Italian-built Galileo telescope using our Harps-N spectrograph, it is amazing to consider how far we have come since Mayor and Queloz announced their discovery of 51 Pegasi b in 1995. These days, we can effectively measure the masses of not just Jupiter-like planets, but even small planets thousands of light years away. As part of the Harps-N collaboration, we have had a front-row seat since 2012 in the science of small exoplanets.
Another milestone in this story came four years after the 51 Pegasi b discovery, when a Canadian PhD student at Harvard University, David Charbonneau, detected the transit of a known exoplanet. This was another hot Jupiter, known as HD209458b, also located in the Pegasus constellation, about 150 light years from Earth.
Transit refers to a planet passing in front of its star, from the perspective of the observer, momentarily making the star appear dimmer. As well as detecting exoplanets, the transit technique enables us to measure the radius of the planet by taking many brightness measurements of a star, then waiting for it to dim due to the passing planet. The extent of blocked starlight depends on the radius of the planet. For example, Jupiter would make the Sun 1% dimmer to alien observers, while for Earth, the effect would be a hundred times weaker.
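Those two figures can be checked with a back-of-the-envelope calculation: the fraction of starlight blocked is simply the square of the planet-to-star radius ratio. A minimal sketch in Python, using rounded published radii for the Sun, Jupiter and Earth:

```python
# Transit depth: the fraction of starlight blocked when a planet
# crosses its star's disc is (R_planet / R_star)^2.

R_SUN_KM = 695_700.0
R_JUPITER_KM = 69_911.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional dip in a star's brightness during a transit."""
    return (r_planet_km / r_star_km) ** 2

jupiter_dip = transit_depth(R_JUPITER_KM, R_SUN_KM)  # ~0.010, i.e. ~1% dimmer
earth_dip = transit_depth(R_EARTH_KM, R_SUN_KM)      # ~8.4e-5, over 100x smaller
```

Running the numbers gives roughly a 1% dip for Jupiter and about a hundredth of that for Earth, matching the figures above.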
In all, four times more exoplanets have now been discovered using this transit technique than with the “barcode” technique, known as radial velocity, that the Swiss astronomers used to spot the first exoplanet 30 years ago. Radial velocity is nonetheless still widely used today, including by us, as it can not only find a planet but also measure its mass.
A planet orbiting a star exerts a gravitational pull which causes that star to wobble back and forth – meaning it will periodically change its velocity with respect to observers on Earth. With the radial velocity technique, we take repeated measurements of the velocity of a star, looking to find a stable periodic wobble that indicates the presence of a planet.
These velocity changes are, however, extremely small. To put it in perspective, the Earth makes the Sun change its velocity by a mere 9cm per second – slower than a tortoise. In order to find planets with the radial velocity technique, we thus need to measure these tiny velocity changes for stars that are many trillions of miles away from us.
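That 9cm-per-second figure follows from the standard radial velocity semi-amplitude formula. A minimal sketch, assuming a circular, edge-on orbit:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
YEAR_S = 3.156e7     # one year in seconds

def rv_semi_amplitude(m_planet: float, m_star: float, period_s: float) -> float:
    """Peak line-of-sight velocity (m/s) a planet induces on its star,
    for a circular orbit viewed edge-on."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet / (m_star + m_planet) ** (2 / 3))

k_earth = rv_semi_amplitude(M_EARTH, M_SUN, YEAR_S)  # ~0.09 m/s, i.e. ~9cm/s
```

Plugging in Earth's mass and orbital period recovers a wobble of about 0.09 metres per second, the tortoise-beating figure quoted above.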
The state-of-the-art instruments we use are truly an engineering feat. The latest spectrographs, such as Harps-N and also Espresso, can accurately measure velocity shifts of the order of tens of centimetres per second – although still not sensitive enough to detect a true Earth twin.
But whereas this radial velocity technique is, for now, limited to ground-based observatories and can only observe one star at a time, the transit technique can be employed in space telescopes such as the French Corot (2006-14) and Nasa’s Kepler (2009-18) and Tess (2018-) missions. Between them, space telescopes have detected thousands of exoplanets in all their diversity, taking advantage of the fact we can measure stellar brightness more easily from space, and for many stars at the same time.
Despite the differences in detection success rate, both techniques continue to be developed. Applying both can give the radius and mass of a planet, opening up many more avenues for studying its composition.
To estimate possible compositions of our discovered exoplanets, we start by making the simplified assumption that small planets are, like Earth, made up of a heavy iron-rich core, a lighter rocky mantle, some surface water and a small atmosphere. Using our measurements of mass and radius, we can now model the different possible compositional layers and their respective thickness.
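The first step of that modelling is simply the bulk density: since density scales as mass over radius cubed, a mass and radius measured in Earth units translate directly into grams per cubic centimetre. A simplified sketch (the reference densities are rough illustrative values, not the detailed layered models we actually fit):

```python
# Bulk density from a measured mass and radius, both in Earth units.
# Density scales as mass / radius^3, so we can anchor to Earth's value.

EARTH_DENSITY_G_CM3 = 5.51  # Earth's bulk density

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Bulk density in g/cm^3 for a planet of given mass and radius."""
    return EARTH_DENSITY_G_CM3 * mass_earths / radius_earths ** 3

# Rough reference densities (g/cm^3) for comparison:
#   iron ~7.9, silicate rock ~3.3, water 1.0
```

Broadly, a planet denser than Earth at the same size hints at a larger iron core (as with Mercury), while a much lower density hints at substantial water or a thick atmosphere.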
This is still very much a work in progress, but the universe is spoiling us with a wide variety of different planets. We’ve seen evidence of rocky worlds being torn apart and strange planetary arrangements that hint at past collisions. Planets have been found across our galaxy, from Sweeps-11b in its central regions (at nearly 28,000 light years away, one of the most distant ever discovered) to those orbiting our nearest stellar neighbour, Proxima Centauri, which is “only” 4.2 light years away.
Illustration of Proxima b, one of the exoplanets orbiting the nearest star to our Sun, Proxima Centauri. Catmando/Shutterstock
Searching for ‘another Earth’
In early July 2013, one of us (Christopher) was flying out to La Palma for my first “go” with the recently commissioned Harps-N spectrograph. Keen not to mess up, my laptop was awash with spreadsheets, charts, manuals, slides and other notes. Also included was a three-page document I had just been sent, entitled: Special Instructions for ToO (Target of Opportunity).
The first paragraph stated: “The Executive Board has decided that we should give highest priority to this object.” The object in question was a planetary candidate thought to be orbiting Kepler-78, a star a little cooler and smaller than our Sun, located about 125 light years away in the direction of the constellation Cygnus.
A few lines further down read: “July 4-8 run … Chris Watson” with a list of ten times to observe Kepler-78 – twice per night, each separated by a very specific four hours and 15 minutes. The name above mine was Didier Queloz’s (he hadn’t been awarded his Nobel prize yet, though).
This planetary candidate had been identified by the Kepler space telescope, which was tasked with searching a portion of the Milky Way to look for exoplanets as small as the Earth. In this case, it had identified a transiting planet candidate with an estimated radius of 1.16 (± 0.19) Earth radii – an exoplanet not that much larger than Earth had potentially been spotted.
I was in La Palma to attempt to measure its mass which, combined with the radius from Kepler, would allow the density and possible composition to be constrained. I wrote at the time: “Want 10% error on mass, to get a good enough bulk density to distinguish between Earth-like, iron-concentrated (Mercury), or water.”
In all, I took ten of our team’s total of 81 exposures of Kepler-78 in an observing campaign lasting 97 days. During that time, we became aware of a US-led team who were also looking for this potential planet. In true scientific spirit, we agreed to submit our independent findings at the same time. On the specified date, like a prisoner swap, the two teams exchanged results – which agreed. We had, within the uncertainties of our data, reached the same conclusion about the planet’s mass.
Its most likely mass came out as 1.86 Earth masses. At the time, this made Kepler-78b the smallest extrasolar planet with an accurately measured mass. Its density was almost identical to Earth’s.
But that is where the similarities to our planet ended. Kepler-78b has a “year” that lasts only 8.5 hours, which is why I had been instructed to observe it every 4hr 15min – when the planet was at opposite sides of its orbit, and the induced “wobble” of the star would be at its greatest. We measured the star wobbling back and forth at about two metres per second – no more than a slow jog.
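That two-metres-per-second wobble can be roughly reproduced with the standard radial velocity semi-amplitude formula. A sketch, assuming a stellar mass of about 0.8 solar masses (a hypothetical value – the text says only that Kepler-78 is a little smaller than the Sun) and a circular, edge-on orbit:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

# Assumptions: ~0.8 solar masses for Kepler-78 (not given in the text),
# circular orbit viewed edge-on.
m_star = 0.8 * M_SUN
m_planet = 1.86 * M_EARTH
period_s = 8.5 * 3600.0  # the planet's 8.5-hour "year"

# Standard radial velocity semi-amplitude formula
k_78b = ((2 * math.pi * G / period_s) ** (1 / 3)
         * m_planet / (m_star + m_planet) ** (2 / 3))
# k_78b comes out close to 2 m/s, the observed wobble
```

The very short orbital period is what makes the signal so large for such a small planet: the wobble grows as the period shrinks.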
Kepler-78b’s short orbit meant its extreme temperature would cause all rock on the planet to melt. It may have been the most Earth-like planet found at the time in terms of its size and density, but otherwise, this hellish lava world was at the very extremes of our known planetary population.
Illustration of the Kepler-78b ‘lava world’ – similar in size and density to Earth. simoleonh/Shutterstock
In 2016, the Kepler space telescope made another landmark discovery: a system with at least five transiting planets around a Sun-like star, HIP 41378, in the Cancer constellation. What made it particularly exciting was the location of these planets. Where most transiting planets we have spotted are closer to their star than Mercury is to the Sun (due to our detection capabilities), this system has at least three planets beyond the orbital radius of Venus.
We decided to use our Harps-N spectrograph to measure the masses of all five transiting planets. But after more than a year of observing, it became clear that one instrument would not be enough to analyse this challenging mix of signals. Other international teams came to the same conclusion and, rather than compete, we decided to come together in a global collaboration that holds strong to this day, with hundreds of radial velocities gathered over many years.
We now have firm masses and radii for most of the planets in the system. But studying them is a game of patience. With planets much further away from their host star, it takes much longer before there is a new transit event or the periodic wobble can be fully observed. We thus need to wait multiple years and gather lots of data to gain insight into this system.
The rewards are obvious, though. This is the first system that starts resembling our Solar System. While the planets are a bit larger and more massive than our rocky planets, their distances are very similar – helping us to understand how planetary systems form in the universe.
The holy grail for exoplanet explorers
After three decades of observing, a wealth of different planets have emerged. We started with the hot Jupiters, large gas giants close to their star that are among the easiest planets to find due to both deeper transits and larger radial velocity signals. But while the first few dozen exoplanets discovered were all hot Jupiters, we now know these planets are actually very rare.
With instrumentation getting better and observations piling up, we have since found a whole new class of planets with sizes and masses between those of Earth and Neptune. But despite our knowledge of thousands of exoplanets, we still have not found systems truly resembling our Solar System, nor planets truly resembling Earth.
It is tempting to conclude this means we are a unique planet in a unique system. While this still could be true, it is unlikely. The more reasonable explanation is that, for all our stellar technology, our capabilities of detecting such Earth-like planets are still fairly limited in a universe so mind-bogglingly vast.
The holy grail for many exoplanet explorers, including us, remains to find this true Earth twin – a planet with a mass and radius similar to Earth’s, orbiting a star similar to the Sun at a distance similar to that between the Earth and the Sun.
While the universe is rich in diversity and holds many planets unlike our own, discovering a true Earth twin would be the best place to start looking for life as we know it. Currently, the radial velocity method – as used to find the very first exoplanet – remains by far the best-placed method to find it.
Thirty years on from that Nobel-winning discovery, pioneering planetary explorer Didier Queloz is taking charge of the very first dedicated radial velocity campaign to go in search of an Earth-like planet.
A major international collaboration is building a dedicated instrument, Harps3, to be installed later this year at the Isaac Newton Telescope on La Palma. Given its capabilities, we believe a decade of data should be enough to finally discover our first Earth twin.
Christopher Watson receives funding from the Science and Technology Facilities Council (STFC).
Annelies Mortier receives funding from the Science and Technology Facilities Council (STFC) and UK Research and Innovation (UKRI).