Source: The Conversation – Canada – By Ramona Pringle, Director, Creative Innovation Studio; Associate Professor, RTA School of Media, Toronto Metropolitan University
Imagine an actor who never ages, never walks off set or demands a higher salary.
But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites.
The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human.
All agree Tilly isn’t human
Ironically, at the centre of this polarizing debate is a rare moment of agreement: all sides acknowledge that Tilly is not human.
Her creator, Eline Van der Velden, the CEO of AI production company Particle6, insists that Norwood was never meant to replace a real actor. Critics agree, albeit in protest. SAG-AFTRA, the union representing actors in the U.S., responded with:
“It’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion, and from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience.”
Their position is rooted in recent history: In 2023, actors went on strike over AI. The resulting agreement secured protections around consent and compensation.
So if both sides insist Tilly isn’t human, the controversy isn’t just about what Tilly is; it’s about what she represents.
Complexity as a starting point
Norwood represents more than novelty. She’s emblematic of a larger reckoning with how rapidly artificial intelligence is reshaping our lives and the creative sector. The velocity of change is dizzying, and the question now is how we shape the hybrid world we’ve already entered.
It can feel disorienting trying to parse ethics, rights and responsibilities while being bombarded by newness. Especially when that “newness” comes in a form that unnerves us: a near-human likeness that triggers long-standing cultural discomfort.
But if all sides agree that Tilly isn’t human, what happens when audiences still feel something real while watching her on screen? If emotional resonance and storytelling are considered uniquely human traits, maybe the threat posed by synthetic actors has been overstated. On the other hand, who hasn’t teared up in a Pixar film? A character doesn’t have to feel emotion to evoke it.
Still, the public conversation remains polarized. As my colleague Owais Lightwala, assistant professor in the School of Performance at Toronto Metropolitan University, puts it: “The conversation around AI right now is so binary that it limits our capacity for real thinking. What we need is to be obsessed with complexity.”
Synthetic actors aren’t inherently villains or saviours, Lightwala tells me, they’re a tool, a new medium. The challenge lies in how we build the infrastructures around them, such as rights, ownership and distribution.
He points out that while some celebrities see synthetic actors as job threats, most actors already struggle for consistent work. “We ask the one per cent how they feel about losing power, but what about the 99 per cent who never had access to that power in the first place?”
Too often missing from this debate is what these tools might make possible for the creators we rarely hear from. The current media landscape is already deeply inequitable. As Lightwala notes, most people never get the chance to realize their creative potential — not for lack of talent, but due to barriers like access, capital, mentorship and time.
Now, some of those barriers might finally lower. With AI tools, more people may get the opportunity to create.
Of course, that doesn’t mean AI will automatically democratize creativity. While tools are more available, attention and influence remain scarce.
Sarah Watling, co-founder and CEO of JaLi Research, a Toronto-based AI facial animation company, offers a more cautionary perspective. She argues that as AI becomes more common, we risk treating it like a utility, essential yet invisible.
In her view, the inevitable AI economy won’t be a creator economy, it will be a utility commodity. And “when things become utilities,” she warns, “they usually become monopolized.”
Where do we go from here?
As Lightwala suggests, we need to pivot away from reactionary fear narratives.
Instead of shutting down innovation, we need to continue to experiment. We need to use this moment, when public attention is focused on the rights of actors and the shape of culture, to rethink what was already broken in the industry and allow space for new creative modalities to emerge.
Platforms and studios must take the lead in setting transparent, fair policies for how synthetic content is developed, attributed and distributed. In parallel, we need to push creative institutions, unions and agencies to collaborate in the co-design of ethical and contractual guardrails now, before precedents get set in stone, putting consent, fair attribution and compensation at the centre.
And creators, for their part, must use these tools not just to replicate what came before, but to imagine what hasn’t been possible until now. That responsibility is as much creative as it is technical.
The future will be synthetic. Our task now is to build pathways, train talent, fuel imagination, and have nuanced, if difficult, conversations.
Because while technology shapes what’s possible, creators and storytellers have the power to shape what matters.
Ramona Pringle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Microplastics are the crumbs of our plastic world, tiny pieces that come from bigger items breaking apart or from products like synthetic clothing and packaging. They’re now everywhere. Scientists estimate there are about 51 trillion of these particles floating in the world’s surface waters, and low levels have even been found in South African tap water.
That’s worrying because these particles can carry chemicals and bad bacteria, get eaten by fish and other wildlife, and may end up in our bodies.
We’re water scientists who are looking for ways to solve this problem. In a recent study, we tested a practical fix: two “magnetic cleaning powders” that can attach to microplastics in water; the combined clumps can then be pulled out using a magnet. These materials are called magnetic nanocomposites (think: very fine powders with special surfaces).
The idea is simple: mix a small dose of powder into the water, let it attract and attach to microplastics, and then use a strong magnet to remove the powder-plastic clusters, leaving cleaner water behind.
Around the world, researchers have tried many different methods to capture microplastics, but our study is among the first to show that magnetic nanocomposites can work effectively not only under laboratory conditions but also in real-world samples, including municipal wastewater and drinking water.
This is the first study to use these specific nanomaterials for microplastic removal, proving both their high efficiency and their practical potential. Most existing filters struggle to catch the smallest plastics, the ones most harmful to health and the environment. The next step is to test these powders on a larger scale and develop simple, affordable systems that households and water treatment plants can use.
How well do the powders work?
In our research we found that the powders were able to remove up to 96% of small polyethylene and 92% of polystyrene particles from purified water. When we tried the same approach in both drinking water and water coming out of a municipal wastewater treatment plant, the results were just as strong. In drinking water the removal was about 94% and in treated wastewater the removal was up to 92%.
Another finding from this study is that the size of the plastic particles matters. The smaller the microplastic, the easier it is for the powders to attach to it, because tiny particles can reach more of the powder’s special “sticky” surface. We saw very good results for small plastics (hundreds of micrometres), but bigger particles (3-5 millimetres) were hardly removed at all. This is because they don’t mix with the powder as well and there’s less surface for the powder to attach onto.
In everyday terms, these magnetic powders are excellent for the small microplastics that are hardest to catch with normal filters.
Now for the big question: why do the powders attach to plastic? Think of them as working like tiny magnets for plastic. The powders are engineered with surfaces that are “sticky” for plastics, through several kinds of forces: in particular, the plastic and the powder carry opposite charges, which pull them together and let them stick.
The key point is that the powders are specifically made to grab onto plastics, so microplastics naturally cling to them in water.
Once the powders attach to the microplastics, we use a strong magnet (magnetic force: 250 kg) to pull the powder–plastic clumps out of the water. The plastics are then separated from the powder by washing and filtration, dried, and weighed. This allows us to check how much plastic was removed. The separated powders are regenerated and reused, while the plastics are safely discarded, preventing them from re-entering the water.
We also looked at real-world questions: can you reuse the powders? And are they safe? The powders themselves are made from safe, lab-engineered materials: tiny sheets of carbon and boron nitride (a material also found in cosmetics and coatings) that are coated with magnetic iron nanoparticles. This makes them stable in water, and easy to pull out with a magnet after they’ve captured the microplastics.
After three rounds of use, the powders still removed up to 80% of the plastics. That means you don’t need a new batch of powder every time, which is important for keeping costs down. Treating 1,000 litres of water with this method costs about US$41 (R763), making it competitive with many existing treatment options.
For safety, we tested the filtered powder (the “filtrate”) on plant growth. The results showed minimal to no toxicity, as three different plants were able to grow well in the presence of the filtrate. This is a strong sign that the method is environmentally friendly when used as intended.
What does this study mean for households and cities?
In the short term, magnetic powders could be built into small cartridges or filter units that attach to household or community water systems, helping remove microplastics before the water is used for drinking or cooking.
But the bigger picture is just as important. Microplastics are not only a South African problem but are also a global pollutant that crosses borders through rivers, oceans, and even the air we breathe. Low-cost, scalable solutions such as magnetic powders can make a real difference in resource-limited settings, where advanced filtration systems are too expensive or impractical.
Looking ahead, further work will focus on scaling up the method, testing it under more diverse water conditions, and designing simple, affordable devices that households or treatment plants can adopt.
In short: this specialised magnetic powder can tackle a tiny pollutant with big consequences. With sensible engineering and careful recovery, magnetic nanocomposites offer a promising, practical path to clean water while protecting the ecosystem from microplastic pollution.
Riona Indhur has received the prestigious National Research Foundation (NRF) postdoctoral research fellowship (Scarce Skills).
The project was funded by the National Research Foundation and the Water Research Commission of South Africa.
Source: The Conversation – Africa (2) – By Laura Ferguson, Associate Professor, Population and Public Health Sciences, University of Southern California
Globally, nearly half of the deaths of children under five years are linked to malnutrition. In Kenya, it’s the leading cause of illness and death among children.
Children with malnutrition typically show signs of recent and severe weight loss. They may also have swollen ankles and feet. Acute malnutrition among children is usually the result of eating insufficient food or having infectious diseases, especially diarrhoea.
Acute malnutrition weakens a child’s immune system. This can lead to increased susceptibility to infectious diseases like pneumonia. It can also cause more severe illness and an increased risk of death.
Currently, the Kenyan national response to malnutrition, implemented by the ministry of health, is based on historical trends of malnutrition. This means that if cases of malnutrition have been reported in a certain month, the ministry anticipates a repeat during a similar month in subsequent years. Currently, no statistical modelling guides responses, which has limited their accuracy.
The health ministry has collected monthly data on nutrition-related indicators and other health conditions for many years.
Our multi-disciplinary team set out to explore whether we could use this data to help forecast where, geographically, child malnutrition was likely to occur in the near future. We were aiming for a more accurate forecast than the existing method.
We developed a machine learning model to forecast acute malnutrition among children in Kenya. A machine learning model is a type of mathematical model that, once “trained” on an existing data set, can make predictions of future outcomes. We used existing data and improved forecasting capabilities by including complementary data sources, such as satellite imagery that provides an indicator of crop health.
We found that machine learning-based models consistently outperformed existing platforms used to forecast malnutrition rates in Kenya. And we found that models with satellite-based features worked even better.
Our results demonstrate the ability of machine learning models to more accurately forecast malnutrition in Kenya up to six months ahead of time from a variety of indicators.
If we have advance knowledge of where malnutrition is likely to be high, scarce resources can be allocated to these high-risk areas in a timely manner to try to prevent children from becoming malnourished.
How we did it
We used clinical data from the Kenya Health Information System. This included data on diarrhoea treatment and low birth weight. We collected data on children who visited a health facility and met the definition of acute malnutrition, along with other relevant clinical indicators.
Given that food insecurity is a key driver of acute malnutrition, we also incorporated data reflecting crop activity into our models. We used a NASA satellite to look at gross primary productivity, which measures the rate at which plants convert solar energy into chemical energy. This provides a coarse indicator of crop health and productivity. Lower average rates can be an early indication of food scarcity.
We tested several methods and models for forecasting malnutrition risk among children in Kenya using data collected from January 2019 to February 2024.
The gradient boosting machine learning model – trained on previous acute malnutrition outcomes and gross primary productivity measurements – turned out to be the most effective model for forecasting acute malnutrition among children.
This model can forecast where and at what prevalence level acute malnutrition among children is likely to occur in one month’s time with 89% accuracy.
All the models we developed performed well where the prevalence of acute child malnutrition was expected to be at more than 30%, for instance in northern and eastern Kenya, which have dry climates. However, when the prevalence was less than 15%, for instance in western and central Kenya, only the machine learning models were able to forecast with good accuracy.
This higher accuracy is achieved because the models use additional information on multiple clinical factors. They can, therefore, find more complex relationships.
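As a rough illustration of this kind of approach – a sketch, not our exact pipeline – a gradient boosting model can be trained on lagged malnutrition prevalence and satellite-derived productivity to forecast the next month’s prevalence. The file, column names and lag choices below are purely illustrative assumptions:

```python
# Illustrative sketch of a gradient-boosting forecast of subcounty malnutrition
# prevalence. File and column names are hypothetical, not the study's actual data.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("subcounty_monthly.csv")            # one row per subcounty per month
df = df.sort_values(["subcounty", "month"])

# Lagged features: recent malnutrition prevalence and gross primary productivity (GPP).
for lag in (1, 2, 3):
    df[f"prev_lag{lag}"] = df.groupby("subcounty")["malnutrition_prev"].shift(lag)
    df[f"gpp_lag{lag}"] = df.groupby("subcounty")["gpp"].shift(lag)

# Target: prevalence one month ahead.
df["target"] = df.groupby("subcounty")["malnutrition_prev"].shift(-1)
df = df.dropna()

features = [c for c in df.columns if c.startswith(("prev_lag", "gpp_lag"))]
train = df[df["month"] < "2023-01"]                   # fit on earlier months ...
test = df[df["month"] >= "2023-01"]                   # ... evaluate on later ones

model = HistGradientBoostingRegressor(max_depth=4, learning_rate=0.1)
model.fit(train[features], train["target"])
print("MAE:", mean_absolute_error(test["target"], model.predict(test[features])))
```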
Implications
Current efforts to predict acute malnutrition among children rely only on historical knowledge of malnutrition patterns. We found these forecasts were less accurate than our models.
Our models leverage historical malnutrition patterns, as well as clinical indicators and satellite-based indicators.
The forecasting performance of our models is also better than other similar data-based modelling efforts published by other researchers.
As resources for health and nutrition shrink, improved targeting to the areas of highest need is critical. Treating acute malnutrition can save a child’s life.
Prevention of malnutrition promotes children’s full psychological and physical development.
What needs to happen next
Making these data from diverse sources available through a dashboard could inform decision-making. Responders could get up to six months of lead time to intervene where resources are most needed.
We have developed a prototype dashboard to create visualisations of what responders would be able to see based on our model’s subcounty-level forecasts. We are currently working with the Kenyan ministry of health and Amref Health Africa, a health development NGO, to ensure that the dashboard is available to local decision-makers and stakeholders. It is regularly updated with the most current data and new forecasts.
We are also working with our partners to refine the dashboard to meet the needs of the end users and promote its use in national decision-making on responses to acute malnutrition among children. We’re tracking the impacts of this work.
Throughout this process, it will be important to strengthen the capacity of our partners to manage, update and use the model and dashboard. This will promote local responsiveness, ownership and sustainability.
Scaling up
The Kenya Health Information System relies on the District Health Information System 2 (DHIS2). This is an open source software platform. It is currently used by over 80 low- and middle-income countries. The satellite data that we used in our models is also available in all of these countries.
If we can secure additional funding, we plan to expand our work geographically and to other areas of health. We’ve also made our code publicly available, which allows anyone to use it and replicate our work in other countries where child malnutrition is a public health challenge.
Furthermore, our model proves that DHIS2 data, despite challenges with its completeness and quality, can be used in machine learning models to inform public health responses. This work could be adapted to address public health issues beyond malnutrition, like changes in patterns of infectious diseases due to climate change.
This work was a collaboration between the University of Southern California’s Institute on Inequalities in Global Health and Center for Artificial Intelligence in Society, Microsoft, Amref Health Africa and the Kenyan ministry of health.
This work was supported, in part, by the Microsoft Corporation.
Bistra Dilkina received in-kind support from Microsoft AI for Good for this work.
On October 6 1995, at a scientific meeting in Florence, Italy, two Swiss astronomers made an announcement that would transform our understanding of the universe beyond our solar system. Michel Mayor and his PhD student Didier Queloz, working at the University of Geneva, announced they had detected a planet orbiting a star other than the Sun.
The star in question, 51 Pegasi, lies about 50 light years away in the constellation Pegasus. Its companion – christened 51 Pegasi b – was unlike anything written in textbooks about how we thought planets might look. This was a gas giant with a mass of at least half that of Jupiter, circling its star in just over four days. It was so close to the star (1/20th of Earth’s distance from the Sun, well inside Mercury’s orbit) that the planet’s atmosphere would be like a furnace, with temperatures topping 1,000°C.
The instrument behind the discovery was Elodie, a spectrograph that had been installed two years earlier at the Haute-Provence observatory in southern France. Designed by a Franco-Swiss team, Elodie split starlight into a spectrum of different colours, revealing a rainbow etched with fine dark lines. These lines can be thought of as a “stellar barcode”, providing details on the chemistry of other stars.
What Mayor and Queloz spotted was 51 Pegasi’s barcode sliding rhythmically back and forth in this spectrum every 4.23 days – a telltale signal that the star was being tugged to and fro by the gravity of a companion otherwise unseen in the star’s glare.
After painstakingly ruling out other explanations, the astronomers finally decided that the variations were due to a gas giant in a close-in orbit around this Sun-like star. The front cover of Nature, the journal in which their paper was published, carried the headline: “A planet in Pegasus?”
The discovery baffled scientists, and the question-mark on Nature’s front cover reflected initial scepticism. Here was a purported giant planet next to its star, with no known mechanism for forming a world like this in such a fiery environment.
While the signal was confirmed by other teams within weeks, doubts about its cause lingered for almost three years before alternative explanations were finally ruled out. Not only did 51 Pegasi b become the first planet discovered orbiting a Sun-like star outside our Solar System, but it also represented an entirely new type of planet. The term “hot Jupiter” was later coined to describe such planets.
This discovery opened the floodgates. In the 30 years since, more than 6,000 exoplanets (the term for planets outside our Solar System) and exoplanet candidates have been catalogued.
Their variety is staggering. Not only hot but ultra-hot Jupiters with a dayside temperature exceeding 2,000 °C and orbits of less than a day. Worlds that orbit not one but two stars, like Tatooine from Star Wars. Strange “super-puff” gas giants larger than Jupiter but with a fraction of the mass. Chains of small rocky planets all piled up in tight orbits.
The discovery of 51 Pegasi b triggered a revolution and, in 2019, landed Mayor and Queloz a Nobel prize. We can now infer that most stars have planetary systems. And yet, of the thousands of exoplanets found, we have yet to find a planetary system that resembles our own.
The quest to find an Earth twin – a planet that truly resembles Earth in size, mass and temperature – continues to drive modern-day explorers like us to search for more undiscovered exoplanets. Our expeditions may not take us on death-defying voyages and treks like the past legendary explorers of Earth, but we do get to visit beautiful, mountain-top observatories often located in remote areas around the world.
We are members of an international consortium of planet hunters that built, operate and maintain the Harps-N spectrograph, mounted on the Telescopio Nazionale Galileo on the beautiful Canary island of La Palma. This sophisticated instrument allows us to rudely interrupt the journey of starlight that may have been travelling unimpeded at 670 million miles per hour for decades or even millennia.
Each new signal has the potential to bring us closer to understanding how common planetary systems like our own may (or may not) be. In the background lies the possibility that one day, we may finally detect another planet like Earth.
The origins of exoplanet study
Up until the mid-1990s, our Solar System was the only set of planets humanity had ever known. Every theory about how planets formed and evolved stemmed from these nine, incredibly closely spaced data-points (which went down to eight when Pluto was demoted in 2006, after the International Astronomical Union agreed a new definition of a planet).
All of these planets revolve around just one star out of the estimated 10¹¹ (roughly 100 billion) in our galaxy, the Milky Way – which is in turn one of some 10¹¹ galaxies throughout the universe. So, trying to draw conclusions from the planets in our Solar System alone was a bit like aliens trying to understand human nature by studying students living together in one house. But that didn’t stop some of the greatest minds in history speculating on what lay beyond.
The ancient Greek philosopher Epicurus (341-270BC) wrote: “There is an infinite number of worlds – some like this world, others unlike it.” This view was not based on astronomical observation but his atomist theory of philosophy. If the universe was made up of an infinite number of atoms then, he concluded, it was impossible not to have other planets.
Epicurus clearly understood what this meant in terms of the potential for life developing elsewhere:
We must not suppose that the worlds have necessarily one and the same shape. Nobody can prove that in one sort of world there might not be contained – whereas in another sort of world there could not possibly be – the seeds out of which animals and plants arise and all the rest of the things we see.
In contrast, at roughly the same time, fellow Greek philosopher Aristotle (384-322 BC) was proposing his geocentric model of the universe, which had the Earth immobile at its centre with the Moon, Sun and known planets orbiting around us. In essence, the Solar System as Aristotle conceived it was the entire universe. In On the Heavens (350BC), he argued: “It follows that there cannot be more worlds than one.”
Such thinking that planets were rare in the universe persisted for 2,000 years. Sir James Jeans, one of the world’s top mathematicians and an influential physicist and astronomer at the time, advanced his tidal hypothesis of planet formation in 1916. According to this theory, planets were formed when two stars pass so closely that the encounter pulls streams of gas off the stars into space, which later condense into planets. The rareness of such close cosmic encounters in the vast emptiness of space led Jeans to believe that planets must be rare, or – as was reported in his obituary – “that the solar system might even be unique in the universe”.
But by then, understanding of the scale of the universe was slowly changing. In the “Great Debate” of 1920, held at the Smithsonian Museum of Natural History in Washington DC, American astronomers Harlow Shapley and Heber Curtis clashed over whether the Milky Way was the entire universe, or just one of many galaxies. The evidence began to point to the latter, as Curtis had argued for. This realisation – that the universe contained not just billions of stars, but billions of galaxies each containing billions of stars – began to affect even the most pessimistic predictors of planetary prevalence.
In the 1940s, two things caused the scientific consensus to pivot dramatically. First, Jeans’ tidal hypothesis did not stand up to scientific scrutiny. The leading theories now had planet formation as a natural byproduct of star formation itself, opening up the potential for all stars to host planets.
Then in 1943, claims emerged of planets orbiting the stars 70 Ophiuchi and 61 Cygni – two relatively nearby star systems visible to the naked eye. Both were later shown to be false positives, most likely due to uncertainties in the telescopic observations that were possible at the time – but nonetheless, they greatly influenced planetary thinking. Suddenly, the existence of billions of planets in the Milky Way was considered a genuine scientific possibility.
For us, nothing highlights this change in mindset more than an article written for Scientific American in July 1943 by the influential American astronomer Henry Norris Russell. Whereas two decades earlier, Russell had predicted that planets “should be infrequent among the stars”, now the title of his article was: “Anthropocentrism’s Demise. New Discoveries Lead to the Probability that There Are Thousands of Inhabited Planets in our Galaxy”.
Strikingly, Russell was not merely making a prediction about any old planets, but inhabited ones. The burning question was: where were they? It would take another half-century to begin finding out.
The Harps-N spectrograph is mounted on the Telescopio Nazionale Galileo (left) in La Palma, Canary Islands. lunamarina/Shutterstock
How to detect an exoplanet
When we observe myriad stars through La Palma’s Italian-built Galileo telescope using our Harps-N spectrograph, it is amazing to consider how far we have come since Mayor and Queloz announced their discovery of 51 Pegasi b in 1995. These days, we can effectively measure the masses of not just Jupiter-like planets, but even small planets thousands of light years away. As part of the Harps-N collaboration, we have had a front-row seat since 2012 in the science of small exoplanets.
Another milestone in this story came four years after the 51 Pegasi b discovery, when a Canadian PhD student at Harvard University, David Charbonneau, detected the transit of a known exoplanet. This was another hot Jupiter, known as HD209458b, also located in the Pegasus constellation, about 150 light years from Earth.
Transit refers to a planet passing in front of its star, from the perspective of the observer, momentarily making the star appear dimmer. As well as detecting exoplanets, the transit technique enables us to measure the radius of the planet by taking many brightness measurements of a star, then waiting for it to dim due to the passing planet. The extent of blocked starlight depends on the radius of the planet. For example, Jupiter would make the Sun 1% dimmer to alien observers, while for Earth, the effect would be a hundred times weaker.
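A quick back-of-the-envelope check of those figures – using standard radii rather than anything measured in this article – shows why: the fraction of starlight blocked is roughly the square of the planet-to-star radius ratio.

```python
# Transit depth ~ (planet radius / star radius)^2 - a sanity check of the
# "1% for Jupiter, ~100x weaker for Earth" figures using standard radii.
R_SUN_KM = 695_700
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fraction of the star's light blocked during transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter transiting the Sun: {transit_depth(R_JUPITER_KM, R_SUN_KM):.2%}")   # ~1.01%
print(f"Earth transiting the Sun:   {transit_depth(R_EARTH_KM, R_SUN_KM):.4%}")     # ~0.0084%
```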
In all, four times more exoplanets have now been discovered using this transit technique than with the “barcode” technique, known as radial velocity, that the Swiss astronomers used to spot the first exoplanet 30 years ago. But radial velocity is still widely used today, including by us, as it can not only find a planet but also measure its mass.
A planet orbiting a star exerts a gravitational pull which causes that star to wobble back and forth – meaning it will periodically change its velocity with respect to observers on Earth. With the radial velocity technique, we take repeated measurements of the velocity of a star, looking to find a stable periodic wobble that indicates the presence of a planet.
These velocity changes are, however, extremely small. To put it in perspective, the Earth makes the Sun change its velocity by a mere 9cm per second – slower than a tortoise. In order to find planets with the radial velocity technique, we thus need to measure these small velocity changes for stars that are many trillions of miles away from us.
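For a circular, edge-on orbit, the size of that wobble follows from Kepler’s laws. A short calculation with standard constants – a sketch, not our analysis code – reproduces the roughly 9cm per second figure for an Earth–Sun pair:

```python
# Radial-velocity semi-amplitude for a planet on a circular, edge-on orbit:
#   K = (2*pi*G / P)^(1/3) * m_planet / (M_star + m_planet)^(2/3)
# Standard constants; reproduces the ~9 cm/s wobble the Earth induces on the Sun.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
YEAR_S = 3.156e7     # Earth's orbital period, seconds

def rv_semi_amplitude(m_planet: float, m_star: float, period_s: float) -> float:
    """Stellar velocity wobble in m/s induced by the planet."""
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)

print(f"{rv_semi_amplitude(M_EARTH, M_SUN, YEAR_S) * 100:.1f} cm/s")  # ~8.9 cm/s
```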
The state-of-the-art instruments we use are truly an engineering feat. The latest spectrographs, such as Harps-N and also Espresso, can accurately measure velocity shifts of the order of tens of centimetres per second – although still not sensitive enough to detect a true Earth twin.
But whereas this radial velocity technique is, for now, limited to ground-based observatories and can only observe one star at a time, the transit technique can be employed in space telescopes such as the French Corot (2006-14) and Nasa’s Kepler (2009-18) and Tess (2018-) missions. Between them, space telescopes have detected thousands of exoplanets in all their diversity, taking advantage of the fact we can measure stellar brightness more easily from space, and for many stars at the same time.
Despite the differences in detection success rate, both techniques continue to be developed. Applying both can give the radius and mass of a planet, opening up many more avenues for studying its composition.
To estimate possible compositions of our discovered exoplanets, we start by making the simplified assumption that small planets are, like Earth, made up of a heavy iron-rich core, a lighter rocky mantle, some surface water and a small atmosphere. Using our measurements of mass and radius, we can now model the different possible compositional layers and their respective thickness.
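The first step is simply the bulk density implied by the measured mass and radius. A minimal sketch – with illustrative planet values, not results from any particular system – looks like this:

```python
# Bulk density from a measured mass and radius, expressed relative to Earth.
# Illustrative first step only: full composition modelling layers an iron core,
# rocky mantle, water and atmosphere on top of this single number.
import math

M_EARTH_KG = 5.972e24
R_EARTH_M = 6.371e6
RHO_EARTH = M_EARTH_KG / (4 / 3 * math.pi * R_EARTH_M ** 3)   # ~5,500 kg/m^3

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Density in kg/m^3 for a planet given in Earth masses and Earth radii."""
    mass = mass_earths * M_EARTH_KG
    radius = radius_earths * R_EARTH_M
    return mass / (4 / 3 * math.pi * radius ** 3)

# Example with illustrative values: 1.9 Earth masses, 1.2 Earth radii.
rho = bulk_density(1.9, 1.2)
print(f"{rho:.0f} kg/m^3, {rho / RHO_EARTH:.2f} x Earth's density")
```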
This is still very much a work in progress, but the universe is spoiling us with a wide variety of different planets. We’ve seen evidence of rocky worlds being torn apart and strange planetary arrangements that hint at past collisions. Planets have been found across our galaxy, from Sweeps-11b in its central regions (at nearly 28,000 light years away, one of the most distant ever discovered) to those orbiting our nearest stellar neighbour, Proxima Centauri, which is “only” 4.2 light years away.
Illustration of Proxima b, one of the exoplanets orbiting the nearest star to our Sun, Proxima Centauri. Catmando/Shutterstock
Searching for ‘another Earth’
In early July 2013, one of us (Christopher) was flying out to La Palma for my first “go” with the recently commissioned Harps-N spectrograph. Keen not to mess up, my laptop was awash with spreadsheets, charts, manuals, slides and other notes. Also included was a three-page document I had just been sent, entitled: Special Instructions for ToO (Target of Opportunity).
The first paragraph stated: “The Executive Board has decided that we should give highest priority to this object.” The object in question was a planetary candidate thought to be orbiting Kepler-78, a star a little cooler and smaller than our Sun, located about 125 light years away in the direction of the constellation Cygnus.
A few lines further down read: “July 4-8 run … Chris Watson” with a list of ten times to observe Kepler-78 – twice per night, each separated by a very specific four hours and 15 minutes. The name above mine was Didier Queloz’s (he hadn’t been awarded his Nobel prize yet, though).
This planetary candidate had been identified by the Kepler space telescope, which was tasked with searching a portion of the Milky Way to look for exoplanets as small as the Earth. In this case, it had identified a transiting planet candidate with an estimated radius of 1.16 (± 0.19) Earth radii – an exoplanet not that much larger than Earth had potentially been spotted.
I was in La Palma to attempt to measure its mass which, combined with the radius from Kepler, would allow the density and possible composition to be constrained. I wrote at the time: “Want 10% error on mass, to get a good enough bulk density to distinguish between Earth-like, iron-concentrated (Mercury), or water.”
In all, I took ten out of our team’s total of 81 exposures of Kepler-78 in an observing campaign lasting 97 days. During that time, we became aware of a US-led team who were also looking for this potential planet. In true scientific spirit, we agreed to submit our independent findings at the same time. On the specified date, like a prisoner swap, the two teams exchanged results – which agreed. We had, within the uncertainties of our data, reached the same conclusion about the planet’s mass.
Its most likely mass came out as 1.86 Earth masses. At the time, this made Kepler-78b the smallest extrasolar planet with an accurately measured mass. The density was almost identical to Earth’s.
But that is where the similarities to our planet ended. Kepler-78b has a “year” that lasts only 8.5 hours, which is why I had been instructed to observe it every 4hr 15min – when the planet was at opposite sides of its orbit, and the induced “wobble” of the star would be at its greatest. We measured the star wobbling back and forth at about two metres per second – no more than a slow jog.
Kepler-78b’s short orbit meant its extreme temperature would cause all rock on the planet to melt. It may have been the most Earth-like planet found at the time in terms of its size and density, but otherwise, this hellish lava world was at the very extremes of our known planetary population.
Illustration of the Kepler-78b ‘lava world’ – similar in size and density to Earth. simoleonh/Shutterstock
In 2016, the Kepler space telescope made another landmark discovery: a system with at least five transiting planets around a Sun-like star, HIP 41378, in the Cancer constellation. What made it particularly exciting was the location of these planets. Where most transiting planets we have spotted are closer to their star than Mercury is to the Sun (due to our detection capabilities), this system has at least three planets beyond the orbital radius of Venus.
We decided to use our Harps-N spectrograph to measure the masses of all five transiting planets, but after more than a year of observing it became clear that one instrument would not be enough to analyse this challenging mix of signals. Other international teams came to the same conclusion and, rather than compete, we decided to come together in a global collaboration that holds strong to this day, with hundreds of radial velocities gathered over many years.
We now have firm masses and radii for most of the planets in the system. But studying them is a game of patience. With planets much further away from their host star, it takes much longer before there is a new transit event or the periodic wobble can be fully observed. We thus need to wait multiple years and gather lots of data to gain insight into this system.
The rewards are obvious, though. This is the first system that starts resembling our Solar System. While the planets are a bit larger and more massive than our rocky planets, their distances are very similar – helping us to understand how planetary systems form in the universe.
The holy grail for exoplanet explorers
After three decades of observing, a wealth of different planets have emerged. We started with the hot Jupiters, large gas giants close to their star that are among the easiest planets to find due to both deeper transits and larger radial velocity signals. But while the first few dozen discovered exoplanets were all hot Jupiters, we now know these planets are actually very rare.
With instrumentation getting better and observations piling up, we have since found a whole new class of planets with sizes and masses between those of Earth and Neptune. But despite our knowledge of thousands of exoplanets, we still have not found systems truly resembling our solar system, nor planets truly resembling Earth.
It is tempting to conclude this means we are a unique planet in a unique system. While this still could be true, it is unlikely. The more reasonable explanation is that, for all our stellar technology, our capabilities of detecting such Earth-like planets are still fairly limited in a universe so mind-bogglingly vast.
The holy grail for many exoplanet explorers, including us, remains to find this true Earth twin – a planet with a mass and radius similar to Earth’s, orbiting a star similar to the Sun at a distance similar to how far we are from the Sun.
While the universe is rich in diversity and holds many planets unlike our own, discovering a true Earth twin would be the best place to start looking for life as we know it. Currently, the radial velocity method – as used to find the very first exoplanet – remains by far the best-placed method to find it.
Thirty years on from that Nobel-winning discovery, pioneering planetary explorer Didier Queloz is taking charge of the very first dedicated radial velocity campaign to go in search of an Earth-like planet.
A major international collaboration is building a dedicated instrument, Harps3, to be installed later this year at the Isaac Newton Telescope on La Palma. Given its capabilities, we believe a decade of data should be enough to finally discover our first Earth twin.
Christopher Watson receives funding from the Science and Technology Facilities Council (STFC).
Annelies Mortier receives funding from the Science and Technology Facilities Council (STFC) and UK Research and Innovation (UKRI).
Nonetheless, the Blue Jays are still being heavily marketed as “Canada’s team” as they square off against the New York Yankees, America’s most storied baseball team.
Why do the Blue Jays frame themselves as not just Toronto’s team, but Canada’s? And is their current post-season run their biggest and most important opportunity in years to fully establish themselves as representing all of Canada?
Truly Canada’s team?
The Jays serving as Canada’s team may make sense since they’re the only Canadian team currently playing in Major League Baseball (MLB). But to some Canadians, positioning the Jays as the nation’s team may not sit well.
Despite playing north of the border and earning revenues in the weaker Canadian dollar, the Jays operate in one of MLB’s largest markets — Toronto — and can also market to fans across the country. That gives them the largest geographical market in professional baseball — an entire nation.
The Jays’ success so far in the post-season in this current political moment — as Trump is once again making veiled threats about making Canada the 51st state during tense trade negotiations — presents the Blue Jays with perhaps their best opportunity to fulfil their role as Canada’s team.
In a season defined by rivalry, politics and national pride, the Blue Jays are proving that even America’s pastime can become a canvas for Canadian nationalism.
Noah Eliot Vanderhoeven does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Moones Alamooti, Assistant Professor of Energy and Petroleum Engineering, University of North Dakota
The world’s largest geothermal power station is under construction in Utah. Business Wire via AP
As energy use rises and the planet warms, you might have dreamed of an energy source that works 24/7, rain or shine, quietly powering homes, industries and even entire cities without the ups and downs of solar or wind – and with little contribution to climate change.
The promise of new engineering techniques for geothermal energy – heat from the Earth itself – has attracted rising levels of investment to this reliable, low-emission power source that can provide continuous electricity almost anywhere on the planet. That includes ways to harness geothermal energy from idle or abandoned oil and gas wells. In the first quarter of 2025, North American geothermal installations attracted US$1.7 billion in public funding – compared with $2 billion for all of 2024, which itself was a significant increase from previous years, according to an industry analysis from consulting firm Wood Mackenzie.
As an exploration geophysicist and energy engineer, I’ve studied geothermal systems’ resource potential and operational trade-offs firsthand. From the investment and technological advances I’m seeing, I believe geothermal energy is poised to become a significant contributor to the energy mix in the U.S. and around the world, especially when integrated with other renewable sources.
A May 2025 assessment by the U.S. Geological Survey found that geothermal sources just in the Great Basin, a region that encompasses Nevada and parts of neighboring states, have the potential to meet as much as 10% of the electricity demand of the whole nation – and even more as technology to harness geothermal energy advances. And the International Energy Agency estimates that by 2050, geothermal energy could provide as much as 15% of the world’s electricity needs.
For generations, Maori people in New Zealand and other people around the world have made use of the Earth’s heat, for instance by cooking food in hot springs. Wolfgang Kaehler/LightRocket via Getty Images
Why geothermal energy is unique
Geothermal energy taps into heat beneath the Earth’s surface to generate electricity or provide direct heating. Unlike solar or wind, it never stops. It runs around the clock, providing consistent, reliable power with closed-loop water systems and few emissions.
Geothermal is capable of providing significant quantities of energy. For instance, Fervo Energy’s Cape Station project in Utah is reportedly on track to deliver 100 megawatts of baseload, carbon-free geothermal power by 2026. That’s less than the amount of power generated by the average coal plant in the U.S., but more than the average natural gas plant produces.
There are several ways to get energy from deep within the Earth.
Hydrothermal systems tap into underground hot water and steam to generate electricity. These resources are concentrated in geologically active areas where heat, water and permeable rock naturally coincide. In the U.S., that’s generally California, Nevada and Utah. Internationally, most hydrothermal energy is in Iceland and the Philippines.
A drilling rig sits outside a home in White Plains, N.Y., where a geothermal heat pump is being installed. AP Photo/Julia Nikhinson
Enhanced geothermal systems effectively create electricity-generating hydrothermal processes just about anywhere on the planet. In places where there is not enough water in the ground or where the rock is too dense to move heat naturally, these installations drill deep holes and inject fluid into the hot rocks, creating new fractures and opening existing ones, much like hydraulic fracturing for oil and gas production.
A system like this uses more than one well. Cold water is pumped down one well, collects heat from the rocks, and is then pumped back up through another well, where the heat drives turbines. In recent years, academic and corporate research has dramatically improved drilling speed and lowered costs.
Ground source heat pumps do not require drilling holes as deep, but instead take advantage of the fact that the Earth’s temperature is relatively stable just below the surface, even just 6 or 8 feet down (1.8 to 2.4 meters) – and it’s hotter hundreds of feet lower.
These systems don’t generate electricity but rather circulate fluid in underground pipes, exchanging heat with the soil, extracting warmth from the ground in winter and transferring warmth to the ground in summer. These systems are similar to, but more efficient than, air-source heat pumps, sometimes called minisplits, which are becoming widespread across the U.S. for heating and cooling. Geothermal heat pump systems can serve individual homes, commercial buildings and even neighborhood or business developments.
Enhanced geothermal systems can be built almost anywhere and can take advantage of existing wells to save the time and money of drilling new holes deep into the ground. U.S. Geological Survey
And converting abandoned oil and gas wells for enhanced geothermal systems could significantly increase the amount of energy available and its geographic spread.
Those projects include repurposing idle oil or gas wells, which is relatively straightforward: Engineers identify wells that reach deep, hot rock formations and circulate water or another fluid in a closed loop to capture heat to generate electricity or provide direct heating. This method does not require drilling new wells, which significantly reduces setup costs and environmental disruption and accelerates deployment.
Despite its challenges, geothermal energy’s reliability, low emissions and scalability make it a vital complement to solar and wind – and a cornerstone of a stable, low-carbon energy future.
Moones Alamooti does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Seasonal allergies – triggered by pollen – appear to make deaths by suicide more likely. Our findings, published in the Journal of Health Economics, show that minor physical health conditions like mild seasonal allergies, previously thought not to be an immediate trigger of suicide, are indeed a risk factor.
To evaluate the link between seasonal allergies and suicide, my co-authors and I combined daily pollen measurements with daily suicide counts across 34 U.S. metropolitan areas.
Because both pollen and suicide are sensitive to weather conditions, we carefully accounted for temperature, rainfall and wind. We also controlled for differences in local climate and plant life, since pollen levels vary by region, and for seasonal averages that might otherwise obscure results. This allowed us to compare suicide counts on days with unexpectedly high pollen to days with little or none in the same county.
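In spirit, the analysis resembles a fixed-effects count regression of daily suicides on pollen categories and weather controls. The sketch below is illustrative only – the file, column names and pollen cut-offs are assumptions, not our published specification:

```python
# Sketch of a fixed-effects Poisson regression of daily suicide counts on pollen
# categories, with weather controls. Column names, bins and the exact specification
# are assumptions, not the published model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("metro_daily.csv")   # hypothetical: one row per metro area per day
df["pollen_cat"] = pd.cut(df["pollen"], bins=[-1, 10, 100, 1e9],
                          labels=["low", "moderate", "high"])   # illustrative cut-offs

model = smf.poisson(
    "suicides ~ C(pollen_cat, Treatment('low')) + temperature + rainfall + wind"
    " + C(metro_area) + C(month)",   # metro and seasonal fixed effects
    data=df,
).fit()
print(model.summary())
```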
The results were striking. Relative to days with no or low levels of pollen, deaths by suicide rose by 5.5% when pollen levels were moderate and by 7.4% when levels were high. The increase was even larger among people with a known history of mental health conditions or treatment. We also showed that on high-pollen days, residents of affected areas experienced more depressive symptoms and exhaustion.
Our analysis suggests that allergies exacerbate existing vulnerabilities, pushing some people toward crisis. We suspect that sleep disruption is the link between allergies and suicide rates.
Allergy symptoms include sneezing, congestion, itchy eyes and a scratchy throat. Most people experiencing these symptoms feel sluggish during the day and sleep poorly at night. Allergy sufferers might not realize, however, that these symptoms reduce alertness and cognitive functioning – some of the factors that can worsen mental health and increase vulnerability to suicidal thoughts and behaviors.
Suicide rates have been growing steadily in the past two decades, by 37% between 2000 and 2018. According to the Centers for Disease Control and Prevention, more than 49,000 Americans died by suicide in 2022, and over 616,000 visited emergency departments for self-harm injuries.
Climate change is making pollen seasons longer and more intense. That means more people will experience stronger allergy symptoms, with ripple effects not only for physical health but also for sleep, mood and mental well-being.
Despite the scale of the problem, there are no national systems in the U.S. to consistently measure and communicate pollen levels. Most communities lack reliable forecasts and alert systems that would allow vulnerable people to take precautions. This gap limits both prevention and research.
More broadly, people should be aware that during peak allergy season, reduced alertness, sleep disruptions and mood fluctuations may place an increased burden on their mental health, in addition to the allergy symptoms.
In terms of policy, improving pollen monitoring and public communication could help people anticipate high-risk days. Such infrastructure would also support further research, particularly in rural areas where data is currently lacking. Our next step, supported by the American Foundation for Suicide Prevention, is to examine the impact of pollen on rural communities.
Shooshan Danagoulian receives funding from the American Foundation for Suicide Prevention.
Scientists have found eggs of the Aedes aegypti mosquito in the UK for the first time – a mosquito that spreads many tropical diseases.
The eggs were recently discovered in a trap at a freight depot near Heathrow airport and confirmed by DNA testing to be Ae aegypti. The discovery, led by the UK Health Security Agency, also reported further findings of Aedes albopictus, the “Asian tiger” mosquito, at a site in Kent in summer 2024. Both species are invasive and thrive in warm, humid conditions.
These Aedes mosquitoes matter because they can spread viruses such as dengue, chikungunya and Zika. Outbreaks of these diseases, once confined to the tropics, are now appearing in Europe.
In 2024, Italy recorded over 200 locally acquired dengue cases, mainly in the Marche region, while France and Spain also reported domestic dengue transmission. Chikungunya has become another European concern, with France reporting nearly 500 locally transmitted cases in 2025. Zika has not yet taken hold in Europe, but the same mosquito species could carry it if conditions allow.
Two related viruses, West Nile and Usutu, are also spreading further north across Europe. West Nile virus has caused outbreaks in birds, horses and people across Europe, and has now been detected in the UK for the first time.
In summer 2023, scientists found West Nile virus genetic material in wild mosquitoes from samples collected in Nottinghamshire. Usutu, which mainly infects birds, was first detected in London blackbirds in 2020 and has been found in birds or mosquitoes every year since, making it now endemic to the UK.
Both viruses belong to the same family as Japanese encephalitis, and although they primarily circulate in birds and mosquitoes, they can also incidentally infect humans. They also tend to move together. Usutu often establishes first, with West Nile following as temperatures rise.
The UK Health Security Agency notes that West Nile’s range has recently expanded “to more northerly and western areas of Europe”. Together, these findings show how climate change is shifting mosquito-borne diseases northwards.
Laboratory studies have confirmed that native British mosquitoes could transmit these viruses under local UK-climate conditions. Research has shown several species can become infected and even pass the virus on at typical summer temperatures.
For instance, common native Culex mosquitoes from England were found capable of transmitting Usutu in their saliva at just 19°C. In the same study, Culex pipiens and Culiseta annulata were able to transmit the UK Usutu strain, suggesting the virus could spread northwards.
Another experiment found that the salt-marsh mosquito Ochlerotatus (Aedes) detritus can transmit West Nile at 21°C, but not dengue or chikungunya. Combined, these results demonstrate that native UK mosquitoes are able to carry and transmit viruses like West Nile and Usutu if the right climate conditions occur.
The pattern is clear: climate change and global travel are together loading the dice. Warmer summers, milder winters and heavier rainfall are making the UK more welcoming to these insects.
Climate models already predict that Ae albopictus could become established in southern England within the next few decades. At the same time, more people and goods are travelling between the UK and regions where these diseases are endemic, bringing both mosquitoes and infections with them.
The UK Health Security Agency recorded hundreds of imported dengue and chikungunya cases last year. Each one a potential spark if the right mosquitoes are present.
The Animal and Plant Health Agency, a UK government agency, warns that this northward jump of mosquito-borne diseases is “primarily driven by movement of people and global climate change”.
In plain terms, the UK is warming into range for these tropical “vectors” and the viruses they carry. Already, Ae albopictus breeds widely across continental Europe, while local dengue and chikungunya outbreaks are appearing further north each year. West Nile and Usutu are following a similar path.
The UK’s surveillance network, coordinated by the Health Security Agency with universities and local authorities, is already monitoring sites most at risk of mosquito introductions. This coordinated approach is designed to catch incursions early and keep Britain ahead of a rapidly shifting global disease map.
The combination of a changing climate, international travel and the ability of these insects to thrive means both invasive mosquito species and the viruses they carry are edging closer to establishing in the UK.
Continuing surveillance and early detection will be crucial to catch incursions before they spread. As Britain’s summers grow warmer and wetter, insects and diseases once confined to the tropics are finding a new home – even in today’s not-so-chilly UK.
Marcus Blagrove currently receives research funding from UKRI (cross council), BBSRC, MRC, NERC, DEFRA, The Leverhulme Trust, and The Pandemic Institute.
Artificial intelligence technology has begun to transform higher education, raising a new set of profound questions about the role of universities in society. A string of high-profile corporate partnerships reflect how universities are embracing AI technology.
As a social scientist who studies educational technology and organizational partnerships, I view these collaborations as part of a decades-long shift toward the “corporatization” of higher education – where universities have become increasingly market-driven, aligning their priorities, culture and governance structures with industry partners.
I see the rise of generative AI as accelerating this trend, which risks undermining higher education’s autonomy and public service mission. Examining the underlying organizational forces that shape the future of higher education can shed light on how AI challenges universities’ traditional principles – and how they might resist corporate influence.
The rise of corporate partnerships
Over the past 50 years, private sector support for university research has increased tenfold, outpacing overall growth in higher education research spending. A pivotal shift came in 1980, when universities gained the right to retain intellectual property from federally funded research. This made commercialization of university research far easier. Over time, corporate involvement pushed university research toward commercial needs and increasingly exposed universities to the profit motive.
As colleges continue to close at record rates, the imperative to attract tuition dollars and research grants increasingly dictates institutional priorities. I argue that universities risk sidelining research that serves the public interest by looking toward corporate funding and partnerships to fill the gaps.
In my view, the shift away from public-good scholarship to monetizable content and services shaped by external industry partners jeopardizes the academic freedom and intellectual stewardship that once anchored the mission of higher education. For example, under financial constraints, university administrators may be inclined to overlook glaring value misalignments between their public mission and the commercial objectives of AI firms.
The forces driving universities’ AI initiatives
At many universities, AI adoption and the turn toward corporate collaborations are driven by more than economic vulnerabilities. The broad range of partnerships with AI companies across higher education can provide insight into the deeper dynamics at work.
For example, the California State University system aims to become the first and largest “AI-powered university system,” motivated in part by the goal of preparing students for careers in an AI-driven economy.
This underscores how AI partnerships are not guided by market incentives alone. Before universities grew into multibillion-dollar businesses, their decision-making was primarily driven by markers of intellectual prestige, such as scholarly excellence and faculty reputation. Universities largely held a monopoly over knowledge production and served as the primary gatekeepers of intellectual legitimacy, until the digital revolution dramatically decentralized access to knowledge and its production. Universities now coexist in – and increasingly compete with – a crowded, complex ecosystem of companies and organizations that produce original research.
Generative AI represents a powerful new mode of knowledge production and synthesis, which further threatens to upend traditional forms of scholarship. Confronted with challenges to their authority, universities may attempt to preserve their elite intellectual status by rushing into partnerships with AI companies eager to capture the higher education market.
My interpretation is that economic pressures and the pursuit of prestige may be converging to reinforce a technocratic approach to higher education, where university decision-making is primarily guided by performance metrics and corporate-style governance rather than the public interest.
The recent surge in AI partnerships puts in plain view the growing dominance of market forces in higher education. As universities continue to adopt AI technologies, the consequences for intellectual freedom, democratic decision-making and commitment to the public good will become an increasingly pressing question.
Research support was provided by undergraduate research assistant Mehra Marzbani, whose contributions are gratefully acknowledged.
In today’s hot housing market, winning a bidding war can feel like a triumph. But my research shows it often comes with a catch: Homebuyers who win bidding wars tend to experience a “winner’s curse,” systematically overpaying for their new homes.
I’m a real estate economist, and my colleagues and I analyzed nearly 14 million home sales in 30 U.S. states over roughly two decades. We found that people who paid more than the asking price for their homes – a reliable sign of a bidding war – were more likely to default on their mortgages and saw significantly weaker returns.
How much weaker? On average, homebuyers who won bidding wars saw annual returns that were about 1.3 percentage points lower than those who didn’t, we found. We specifically looked at “unlevered” returns – basically, the returns you’d get if you bought the home outright with cash, without factoring in a mortgage.
Since the typical homeowner in our sample held a property for 6.3 years before selling it, this translates to about an 8.2% overpayment. Bidding-war winners were also 1.9 percentage points likelier to default.
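For readers who want to see how those figures fit together, a quick back-of-envelope calculation reproduces the 8.2% number: a 1.3-percentage-point annual shortfall over a 6.3-year holding period adds up to roughly 8.2% without compounding, and about 7.9% with compounding. The short Python sketch below is purely illustrative and is not the paper’s actual methodology.

annual_shortfall = 0.013   # 1.3 percentage points lower annual unlevered return
holding_years = 6.3        # typical holding period reported in the article

# Simple (non-compounded) approximation: the shortfall accumulates linearly.
simple_total = annual_shortfall * holding_years                  # ~0.082, about 8.2%

# Compounded version: the home's value grows 1.3 points slower each year.
compounded_total = 1 - (1 - annual_shortfall) ** holding_years   # ~0.079, about 7.9%

print(f"Simple estimate:     {simple_total:.1%}")
print(f"Compounded estimate: {compounded_total:.1%}")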
Perhaps that loss would be worth it to someone who absolutely loves the property – but we found that homebuyers who purchase after a bidding war are also faster to resell. This suggests their overpayment is based less on enduring affection and more on bidding-war fever.
We also found that the effects of the winner’s curse – lower home appreciation and higher default rates – are stronger in places where bidding wars are more common. One example is my hometown of Rochester, New York, which has become a bidding-war hot spot in recent years.
Who bears the brunt? Lower-income, Black and Hispanic buyers are more likely to overpay in bidding wars, we found, making them more likely to suffer from the winner’s curse. This suggests that hot housing markets can worsen inequality.
Why it matters
While housing is the largest single form of wealth Americans own, past research on the winner’s curse mostly dealt with land auctions and company mergers – not the nation’s roughly 76 million owner-occupied, single-family homes. Our work is the first to show direct evidence of the winner’s curse in residential housing markets.
This matters now because the housing market is cooling. Those who bought in the post-pandemic housing market and listed their homes in 2025 are already facing the risk of selling at a loss. Because this risk falls disproportionately on Black and Hispanic homebuyers, it could further widen the wealth gap.
By one measure, foreclosures are up 18% year over year. If the brunt of these losses falls on lower-income or otherwise vulnerable homeowners, the result could be an increase in housing insecurity and homelessness.
The good news is that the winner’s curse may be preventable. Better resources to prepare first-time homebuyers and comprehensive financial education related to mortgages and debt could help.
What still isn’t known
It’s possible more transparent bidding processes – or even formal auction systems for popular homes – could better inform prospective buyers and help them stave off the temptation of overpayment. Should the U.S. require real estate brokers or banks to caution their clients to think twice before going above the asking price? Or would that be unfair to sellers? Experimental research on these points would be useful.
Finally, our research focuses on the U.S. housing market. Whether the winner’s curse afflicts buyers in other countries remains an open question.
The Research Brief is a short take on interesting academic work.
Soon Hyeok Choi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.