What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators

Source: The Conversation – Canada – By Ramona Pringle, Director, Creative Innovation Studio; Associate Professor, RTA School of Media, Toronto Metropolitan University

Imagine an actor who never ages, never walks off set or demands a higher salary.

That’s the promise behind Tilly Norwood, a fully AI-generated “actress” currently being courted by Hollywood’s top talent agencies. Her synthetic presence has ignited a media firestorm, denounced as an existential threat to human performers by some and hailed as a breakthrough in digital creativity by others.

But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites.

The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human.

All agree Tilly isn’t human

Ironically, at the centre of this polarizing debate is a rare moment of agreement: all sides acknowledge that Tilly is not human.

Her creator, Eline Van der Velden, the CEO of AI production company Particle6, insists that Norwood was never meant to replace a real actor. Critics agree, albeit in protest. SAG-AFTRA, the union representing actors in the U.S., responded with:

“It’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion, and from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience.”

Their position is rooted in recent history: In 2023, actors went on strike over AI. The resulting agreement secured protections around consent and compensation.

If both sides insist Tilly isn't human, the controversy isn't just about what Tilly is; it's about what she represents.

Complexity as a starting point

Norwood represents more than novelty. She's emblematic of a larger reckoning with how rapidly artificial intelligence is reshaping our lives and the creative sector. The velocity of change is dizzying, and the question now is how to shape the hybrid world we've already entered.

It can feel disorienting trying to parse ethics, rights and responsibilities while being bombarded by newness. Especially when that “newness” comes in a form that unnerves us: a near-human likeness that triggers long-standing cultural discomfort.

Indeed, Norwood may be a textbook case of the “uncanny valley,” a term coined by Japanese roboticist Masahiro Mori to describe the unease people feel when something looks almost human, but not quite.

But if all sides agree that Tilly isn’t human, what happens when audiences still feel something real while watching her on screen? If emotional resonance and storytelling are considered uniquely human traits, maybe the threat posed by synthetic actors has been overstated. On the other hand, who hasn’t teared up in a Pixar film? A character doesn’t have to feel emotion to evoke it.

Still, the public conversation remains polarized. As my colleague Owais Lightwala, assistant professor in the School of Performance at Toronto Metropolitan University, puts it: “The conversation around AI right now is so binary that it limits our capacity for real thinking. What we need is to be obsessed with complexity.”

Synthetic actors aren’t inherently villains or saviours, Lightwala tells me, they’re a tool, a new medium. The challenge lies in how we build the infrastructures around them, such as rights, ownership and distribution.

He points out that while some celebrities see synthetic actors as job threats, most actors already struggle for consistent work. “We ask the one per cent how they feel about losing power, but what about the 99 per cent who never had access to that power in the first place?”

Too often missing from this debate is what these tools might make possible for the creators we rarely hear from. The current media landscape is already deeply inequitable. As Lightwala notes, most people never get the chance to realize their creative potential — not for lack of talent, but due to barriers like access, capital, mentorship and time.

Now, some of those barriers might finally lower. With AI tools, more people may get the opportunity to create.

Of course, that doesn’t mean AI will automatically democratize creativity. While tools are more available, attention and influence remain scarce.

Sarah Watling, co-founder and CEO of JaLi Research, a Toronto-based AI facial animation company, offers a more cautionary perspective. She argues that as AI becomes more common, we risk treating it like a utility, essential yet invisible.

In her view, the inevitable AI economy won’t be a creator economy, it will be a utility commodity. And “when things become utilities,” she warns, “they usually become monopolized.”

Where do we go from here?

We need to pivot away from reactionary fear narratives, as Lightwala suggests.

Instead of shutting down innovation, we need to continue to experiment. We need to use this moment, when public attention is focused on the rights of actors and the shape of culture, to rethink what was already broken in the industry and allow space for new creative modalities to emerge.

Platforms and studios must take the lead in setting transparent, fair policies for how synthetic content is developed, attributed and distributed. In parallel, we need to push creative institutions, unions and agencies to collaborate in the co-design of ethical and contractual guardrails now, before precedents get set in stone, putting consent, fair attribution and compensation at the centre.

And creators, for their part, must use these tools not just to replicate what came before, but to imagine what hasn’t been possible until now. That responsibility is as much creative as it is technical.

The future will be synthetic. Our task now is to build pathways, train talent, fuel imagination, and have nuanced, if difficult, conversations.
Because while technology shapes what’s possible, creators and storytellers have the power to shape what matters.

The Conversation

Ramona Pringle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators – https://theconversation.com/what-ai-generated-tilly-norwood-reveals-about-digital-culture-ethics-and-the-responsibilities-of-creators-266564

We tested if a specialised magnetic powder could remove microplastics from drinking water: the answer is yes

Source: The Conversation – Africa (2) – By Riona Indhur, Postdoctoral Research Fellow, Durban University of Technology

Microplastics are the crumbs of our plastic world, tiny pieces that come from bigger items breaking apart or from products like synthetic clothing and packaging. They’re now everywhere. Scientists estimate there are about 51 trillion of these particles floating in the world’s surface waters, and low levels have even been found in South African tap water.

That’s worrying because these particles can carry chemicals and bad bacteria, get eaten by fish and other wildlife, and may end up in our bodies.

We’re water scientists who are looking for ways to solve this problem. In a recent study, we tested a practical fix: two “magnetic cleaning powders” that can attach to microplastics in water; the combined clumps can then be pulled out using a magnet. These materials are called magnetic nanocomposites (think: very fine powders with special surfaces).

The idea is simple: mix a small dose of powder into the water, let it attract and attach to microplastics, and then use a strong magnet to remove the powder-plastic clusters, leaving cleaner water behind.

Around the world, researchers have tried many different methods to capture microplastics, but our study is among the first to show that magnetic nanocomposites can work effectively not only under laboratory conditions but also in real-world samples, including municipal wastewater and drinking water.

This is the first study to use these specific nanomaterials for microplastic removal, proving both their high efficiency and their practical potential. Most existing filters struggle to catch the smallest plastics, the ones most harmful to health and the environment. The next step is to test these powders on a larger scale and develop simple, affordable systems that households and water treatment plants can use.

How well do the powders work?

In our research we found that the powders were able to remove up to 96% of small polyethylene and 92% of polystyrene particles from purified water. When we tried the same approach in both drinking water and water coming out of a municipal wastewater treatment plant, the results were just as strong. In drinking water the removal was about 94% and in treated wastewater the removal was up to 92%.

Another finding from this study is that the size of the plastic particles matters. The smaller the microplastic, the easier it is for the powders to attach to it, because tiny particles can reach more of the powder’s special “sticky” surface. We saw very good results for small plastics (hundreds of micrometres), but bigger particles (3-5 millimetres) were hardly removed at all. This is because they don’t mix with the powder as well and there’s less surface for the powder to attach onto.

In everyday terms, these magnetic powders are excellent for the small microplastics that are hardest to catch with normal filters.

Now for the big question: why do the powders attach to plastic? Think of them as acting like tiny magnets for plastic. The powders are engineered with surfaces that are “sticky” for plastics, thanks to several kinds of attractive forces; in particular, the plastics and powders carry opposite charges, which pull them together and let them stick.

The key point is that the powders are engineered or specifically made to grab onto plastics so that microplastics naturally cling to them in water.

Once the powders attach to the microplastics, we use a strong magnet (with a pull force of about 250kg) to pull the powder–plastic clumps out of the water. The plastics are then separated from the powder by washing and filtration, dried, and weighed. This allows us to check how much plastic was removed. The separated powders are regenerated and reused, while the plastics are safely discarded, preventing them from re-entering the water.

We also looked at real-world questions: can you reuse the powders? And are they safe? The powders themselves are made from safe, lab-engineered materials: tiny sheets of carbon and boron nitride (a material also found in cosmetics and coatings) that are coated with magnetic iron nanoparticles. This makes them stable in water, and easy to pull out with a magnet after they’ve captured the microplastics.

After three rounds of use, the powders still removed up to 80% of plastics. That means you don’t need a new batch of powder every time, which is important for keeping costs down. Treating 1,000 litres of water with this method costs about US$41 (R763), making it competitive with many existing treatment options.

For safety, we tested the water left after treatment (the “filtrate”) on plant growth. The results showed minimal to no toxicity: three different plants were able to grow well in the presence of the filtrate. This is a strong sign that the method is environmentally friendly when used as intended.

What does this study mean for households and cities?

In the short term, magnetic powders could be built into small cartridges or filter units that attach to household or community water systems, helping remove microplastics before the water is used for drinking or cooking.

But the bigger picture is just as important. Microplastics are not only a South African problem but are also a global pollutant that crosses borders through rivers, oceans, and even the air we breathe. Low-cost, scalable solutions such as magnetic powders can make a real difference in resource-limited settings, where advanced filtration systems are too expensive or impractical.

Looking ahead, further work will focus on scaling up the method, testing it under more diverse water conditions, and designing simple, affordable devices that households or treatment plants can adopt.

In short: this specialised magnetic powder can tackle a tiny pollutant with big consequences. With sensible engineering and careful recovery, magnetic nanocomposites offer a promising, practical path to clean water while protecting the ecosystem from microplastic pollution.

The Conversation

Riona Indhur has received the prestigious National Research Foundation (NRF) postdoctoral research fellowship (Scarce Skills).

The project was funded by the National Research Foundation and the Water Research Commission of South Africa.

ref. We tested if a specialised magnetic powder could remove microplastics from drinking water: the answer is yes – https://theconversation.com/we-tested-if-a-specialised-magnetic-powder-could-remove-microplastics-from-drinking-water-the-answer-is-yes-264058

Child malnutrition in Kenya: AI model can forecast rates six months before they become critical

Source: The Conversation – Africa (2) – By Laura Ferguson, Associate Professor, Population and Public Health Sciences, University of Southern California

Globally, nearly half of the deaths of children under five years are linked to malnutrition. In Kenya, it’s the leading cause of illness and death among children.

Children with malnutrition typically show signs of recent and severe weight loss. They may also have swollen ankles and feet. Acute malnutrition among children is usually the result of eating insufficient food or having infectious diseases, especially diarrhoea.

Acute malnutrition weakens a child’s immune system. This can lead to increased susceptibility to infectious diseases like pneumonia. It can also cause more severe illness and an increased risk of death.

Currently, the Kenyan national response to malnutrition, implemented by the ministry of health, is based on historical trends. This means that if cases of malnutrition have been reported in a certain month, the ministry anticipates a repeat in the same month in subsequent years. No statistical modelling guides these responses, which has limited their accuracy.

The health ministry has collected monthly data on nutrition-related indicators and other health conditions for many years.

Our multi-disciplinary team set out to explore whether we could use this data to help forecast where, geographically, child malnutrition was likely to occur in the near future. We were aiming for a more accurate forecast than the existing method.

We developed a machine learning model to forecast acute malnutrition among children in Kenya. A machine learning model is a type of mathematical model that, once “trained” on an existing data set, can make predictions of future outcomes. We used existing data and improved forecasting capabilities by including complementary data sources, such as satellite imagery that provides an indicator of crop health.

We found that machine learning-based models consistently outperformed existing platforms used to forecast malnutrition rates in Kenya. And we found that models with satellite-based features worked even better.

Our results demonstrate the ability of machine learning models to more accurately forecast malnutrition in Kenya up to six months ahead of time from a variety of indicators.

If we have advance knowledge of where malnutrition is likely to be high, scarce resources can be allocated to these high-risk areas in a timely manner to try to prevent children from becoming malnourished.

How we did it

We used clinical data from the Kenya Health Information System. This included data on diarrhoea treatment and low birth weight. We collected data on children who visited a health facility and met the definition of being acutely malnourished, among other relevant clinical indicators.

Given that food insecurity is a key driver of acute malnutrition, we also incorporated data reflecting crop activity into our models. We used a NASA satellite to look at gross primary productivity, which measures the rate at which plants convert solar energy into chemical energy. This provides a coarse indicator of crop health and productivity. Lower average rates can be an early indication of food scarcity.

We tested several methods and models for forecasting malnutrition risk among children in Kenya using data collected from January 2019 to February 2024.

The gradient boosting machine learning model – trained on previous acute malnutrition outcomes and gross primary productivity measurements – turned out to be the most effective model for forecasting acute malnutrition among children.

This model can forecast where and at what prevalence level acute malnutrition among children is likely to occur in one month’s time with 89% accuracy.
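
The forecasting setup described above – monthly clinical indicators plus satellite-based productivity, used to predict prevalence months ahead – amounts to a supervised learning problem. The snippet below is an illustrative sketch, not the study’s code: the indicator names, lag choices and function name are our assumptions for demonstration, and the resulting rows would then be fed to a model such as gradient boosting.

```python
# Illustrative sketch (not the study's code): turning monthly indicators
# into (features, target) pairs for six-month-ahead forecasting.
# The dictionary keys and lag settings are hypothetical.

def make_training_rows(series, horizon=6, n_lags=3):
    """Build supervised training rows from a monthly time series.

    series: list of dicts, one per month for a given subcounty, e.g.
            {"malnutrition_rate": ..., "gpp": ..., "diarrhoea_cases": ...}
    horizon: how many months ahead to forecast (the study goes up to 6).
    n_lags: how many recent months to use as features.
    """
    rows = []
    for t in range(n_lags - 1, len(series) - horizon):
        features = []
        for lag in range(n_lags):  # recent history of each indicator
            month = series[t - lag]
            features += [month["malnutrition_rate"], month["gpp"],
                         month["diarrhoea_cases"]]
        # target: malnutrition prevalence `horizon` months later
        target = series[t + horizon]["malnutrition_rate"]
        rows.append((features, target))
    return rows
```

In this framing, a gradient boosting model simply learns the mapping from each feature vector to its future prevalence value.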

All the models we developed performed well where the prevalence of acute child malnutrition was expected to be at more than 30%, for instance in northern and eastern Kenya, which have dry climates. However, when the prevalence was less than 15%, for instance in western and central Kenya, only the machine learning models were able to forecast with good accuracy.

This higher accuracy is achieved because the models use additional information on multiple clinical factors. They can, therefore, find more complex relationships.

Implications

Current efforts to predict acute malnutrition among children rely only on historical knowledge of malnutrition patterns. We found these forecasts were less accurate than our models.

Our models leverage historical malnutrition patterns, as well as clinical indicators and satellite-based indicators.

The forecasting performance of our models is also better than other similar data-based modelling efforts published by other researchers.

As resources for health and nutrition shrink, improved targeting to the areas of highest need is critical. Treating acute malnutrition can save a child’s life.

Prevention of malnutrition promotes children’s full psychological and physical development.

What needs to happen next

Making these data from diverse sources available through a dashboard could inform decision-making, giving responders up to six months to intervene where help is most needed.

We have developed a prototype dashboard to create visualisations of what responders would be able to see based on our model’s subcounty-level forecasts. We are currently working with the Kenyan ministry of health and Amref Health Africa, a health development NGO, to ensure that the dashboard is available to local decision-makers and stakeholders. It is regularly updated with the most current data and new forecasts.

We are also working with our partners to refine the dashboard to meet the needs of the end users and promote its use in national decision-making on responses to acute malnutrition among children. We’re tracking the impacts of this work.

Throughout this process, it will be important to strengthen the capacity of our partners to manage, update and use the model and dashboard. This will promote local responsiveness, ownership and sustainability.

Scaling up

The Kenya Health Information System relies on the District Health Information System 2 (DHIS2). This is an open source software platform. It is currently used by over 80 low- and middle-income countries. The satellite data that we used in our models is also available in all of these countries.

If we can secure additional funding, we plan to expand our work geographically and to other areas of health. We’ve also made our code publicly available, which allows anyone to use it and replicate our work in other countries where child malnutrition is a public health challenge.

Furthermore, our model proves that DHIS2 data, despite challenges with its completeness and quality, can be used in machine learning models to inform public health responses. This work could be adapted to address public health issues beyond malnutrition, like changes in patterns of infectious diseases due to climate change.

This work was a collaboration between the University of Southern California’s Institute on Inequalities in Global Health and Center for Artificial Intelligence in Society, Microsoft, Amref Health Africa and the Kenyan ministry of health.

The Conversation

This work was supported, in part, by the Microsoft Corporation.

Bistra Dilkina received in-kind support from Microsoft AI for Good for this work.

ref. Child malnutrition in Kenya: AI model can forecast rates six months before they become critical – https://theconversation.com/child-malnutrition-in-kenya-ai-model-can-forecast-rates-six-months-before-they-become-critical-261075

Our quest to find a truly Earth-like planet in deep space

Source: The Conversation – Global Perspectives – By Christopher Watson, Professor, Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast

Nasa animation depicting the first 5,000 exoplanets to have been discovered, up to March 2022. M. Russo and A. Santaguida/Nasa-JPL

On October 6 1995, at a scientific meeting in Florence, Italy, two Swiss astronomers made an announcement that would transform our understanding of the universe beyond our solar system. Michel Mayor and his PhD student Didier Queloz, working at the University of Geneva, announced they had detected a planet orbiting a star other than the Sun.

The star in question, 51 Pegasi, lies about 50 light years away in the constellation Pegasus. Its companion – christened 51 Pegasi b – was unlike anything written in textbooks about how we thought planets might look. This was a gas giant with a mass of at least half that of Jupiter, circling its star in just over four days. It was so close to the star (1/20th of Earth’s distance from the Sun, well inside Mercury’s orbit) that the planet’s atmosphere would be like a furnace, with temperatures topping 1,000°C.

The instrument behind the discovery was Elodie, a spectrograph that had been installed two years earlier at the Haute-Provence observatory in southern France. Designed by a Franco-Swiss team, Elodie split starlight into a spectrum of different colours, revealing a rainbow etched with fine dark lines. These lines can be thought of as a “stellar barcode”, providing details on the chemistry of other stars.

What Mayor and Queloz spotted was 51 Pegasi’s barcode sliding rhythmically back-and-forth in this spectrum every 4.23 days – a telltale signal that the star was being wobbled back and forth by the gravitational tug of an otherwise unseen companion amid the glare of the star.

After painstakingly ruling out other explanations, the astronomers finally decided that the variations were due to a gas giant in a close-in orbit around this Sun-like star. The front page of the Nature journal in which their paper was published carried the headline: “A planet in Pegasus?”

The discovery baffled scientists, and the question-mark on Nature’s front cover reflected initial scepticism. Here was a purported giant planet next to its star, with no known mechanism for forming a world like this in such a fiery environment.

While the signal was confirmed by other teams within weeks, reservations about the cause of the signal remained for almost three years before being finally ruled out. Not only did 51 Pegasi b become the first planet discovered orbiting a Sun-like star outside our Solar System, but it also represented an entirely new type of planet. The term “hot Jupiter” was later coined to describe such planets.

Diagram showing 51 Pegasi b to be 50% larger than Jupiter, and 51 Pegasi to be 23% larger than the Sun.

NASA/JPL-Caltech

This discovery opened the floodgates. In the 30 years since, more than 6,000 exoplanets (the term for planets outside our Solar System) and exoplanet candidates have been catalogued.

Their variety is staggering. Not only hot but ultra-hot Jupiters with a dayside temperature exceeding 2,000°C and orbits of less than a day. Worlds that orbit not one but two stars, like Tatooine from Star Wars. Strange “super-puff” gas giants larger than Jupiter but with a fraction of the mass. Chains of small rocky planets all piled up in tight orbits.

The discovery of 51 Pegasi b triggered a revolution and, in 2019, landed Mayor and Queloz a Nobel prize. We can now infer that most stars have planetary systems. And yet, of the thousands of exoplanets found, we have yet to find a planetary system that resembles our own.


The quest to find an Earth twin – a planet that truly resembles Earth in size, mass and temperature – continues to drive modern-day explorers like us to search for more undiscovered exoplanets. Our expeditions may not take us on death-defying voyages and treks like the past legendary explorers of Earth, but we do get to visit beautiful, mountain-top observatories often located in remote areas around the world.

We are members of an international consortium of planet hunters that built, operate and maintain the Harps-N spectrograph, mounted on the Telescopio Nazionale Galileo on the beautiful Canary island of La Palma. This sophisticated instrument allows us to rudely interrupt the journey of starlight which may have been travelling unimpeded at speeds of 670 million miles per hour for decades or even millennia.

Each new signal has the potential to bring us closer to understanding how common planetary systems like our own may (or may not) be. In the background lies the possibility that one day, we may finally detect another planet like Earth.

The origins of exoplanet study

Up until the mid-1990s, our Solar System was the only set of planets humanity had ever known. Every theory about how planets formed and evolved stemmed from these nine, incredibly closely spaced data-points (which went down to eight when Pluto was demoted in 2006, after the International Astronomical Union agreed a new definition of a planet).

All of these planets revolve around just one star out of the estimated 10¹¹ (roughly 100 billion) in our galaxy, the Milky Way – which is in turn one of some 10¹¹ galaxies throughout the universe. So, trying to draw conclusions from the planets in our Solar System alone was a bit like aliens trying to understand human nature by studying students living together in one house. But that didn’t stop some of the greatest minds in history speculating on what lay beyond.

The ancient Greek philosopher Epicurus (341-270BC) wrote: “There is an infinite number of worlds – some like this world, others unlike it.” This view was not based on astronomical observation but his atomist theory of philosophy. If the universe was made up of an infinite number of atoms then, he concluded, it was impossible not to have other planets.

Epicurus clearly understood what this meant in terms of the potential for life developing elsewhere:

We must not suppose that the worlds have necessarily one and the same shape. Nobody can prove that in one sort of world there might not be contained – whereas in another sort of world there could not possibly be – the seeds out of which animals and plants arise and all the rest of the things we see.

In contrast, at roughly the same time, fellow Greek philosopher Aristotle (384-322 BC) was proposing his geocentric model of the universe, which had the Earth immobile at its centre with the Moon, Sun and known planets orbiting around us. In essence, the Solar System as Aristotle conceived it was the entire universe. In On the Heavens (350BC), he argued: “It follows that there cannot be more worlds than one.”

Such thinking that planets were rare in the universe persisted for 2,000 years. Sir James Jeans, one of the world’s top mathematicians and an influential physicist and astronomer at the time, advanced his tidal hypothesis of planet formation in 1916. According to this theory, planets were formed when two stars pass so closely that the encounter pulls streams of gas off the stars into space, which later condense into planets. The rareness of such close cosmic encounters in the vast emptiness of space led Jeans to believe that planets must be rare, or – as was reported in his obituary – “that the solar system might even be unique in the universe”.

But by then, understanding of the scale of the universe was slowly changing. In the “Great Debate” of 1920, held at the Smithsonian Museum of Natural History in Washington DC, American astronomers Harlow Shapley and Heber Curtis clashed over whether the Milky Way was the entire universe, or just one of many galaxies. The evidence began to point to the latter, as Curtis had argued for. This realisation – that the universe contained not just billions of stars, but billions of galaxies each containing billions of stars – began to affect even the most pessimistic predictors of planetary prevalence.

In the 1940s, two things caused the scientific consensus to pivot dramatically. First, Jeans’ tidal hypothesis did not stand up to scientific scrutiny. The leading theories now had planet formation as a natural byproduct of star formation itself, opening up the potential for all stars to host planets.

Then in 1943, claims emerged of planets orbiting the stars 70 Ophiuchi and 61 Cygni – two relatively nearby star systems visible to the naked eye. Both were later shown to be false positives, most likely due to uncertainties in the telescopic observations that were possible at the time – but nonetheless, the claims greatly influenced planetary thinking. Suddenly, the idea of billions of planets in the Milky Way was considered a genuine scientific possibility.

For us, nothing highlights this change in mindset more than an article written for the Scientific American in July 1943 by the influential American astronomer Henry Norris Russell. Whereas two decades earlier, Russell had predicted that planets “should be infrequent among the stars”, now the title of his article was: “Anthropocentrism’s Demise. New Discoveries Lead to the Probability that There Are Thousands of Inhabited Planets in our Galaxy”.

Strikingly, Russell was not merely making a prediction about any old planets, but inhabited ones. The burning question was: where were they? It would take another half-century to begin finding out.

The Harps-N spectrograph is mounted on the Telescopio Nazionale Galileo (left) in La Palma, Canary Islands.
lunamarina/Shutterstock

How to detect an exoplanet

When we observe myriad stars through La Palma’s Italian-built Galileo telescope using our Harps-N spectrograph, it is amazing to consider how far we have come since Mayor and Queloz announced their discovery of 51 Pegasi b in 1995. These days, we can effectively measure the masses of not just Jupiter-like planets, but even small planets thousands of light years away. As part of the Harps-N collaboration, we have had a front-row seat since 2012 in the science of small exoplanets.

Another milestone in this story came four years after the 51 Pegasi b discovery, when a Canadian PhD student at Harvard University, David Charbonneau, detected the transit of a known exoplanet. This was another hot Jupiter, known as HD209458b, also located in the Pegasus constellation, about 150 light years from Earth.

Transit refers to a planet passing in front of its star, from the perspective of the observer, momentarily making the star appear dimmer. As well as detecting exoplanets, the transit technique enables us to measure the radius of the planet by taking many brightness measurements of a star, then waiting for it to dim due to the passing planet. The extent of blocked starlight depends on the radius of the planet. For example, Jupiter would make the Sun 1% dimmer to alien observers, while for Earth, the effect would be a hundred times weaker.
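The depth arithmetic above is simple enough to sketch in a few lines of Python. Note that the radii used here are rough textbook values, not figures from the article:

```python
# Transit depth: the fraction of starlight blocked is (R_planet / R_star)^2.
# Radii are approximate values in kilometres.
R_SUN_KM = 696_340
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional dimming when the planet crosses the stellar disc."""
    return (r_planet_km / r_star_km) ** 2

jupiter = transit_depth(R_JUPITER_KM, R_SUN_KM)  # ~0.010, i.e. ~1% dimming
earth = transit_depth(R_EARTH_KM, R_SUN_KM)      # ~8.4e-5, roughly 100x weaker
print(f"Jupiter: {jupiter:.4f}, Earth: {earth:.6f}, ratio: {jupiter / earth:.0f}")
```

Because the depth goes as the radius ratio squared, an Earth-sized planet blocks roughly a hundred times less light than a Jupiter-sized one, as the text says.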

In all, four times more exoplanets have now been discovered using this transit technique than with the “barcode” technique, known as radial velocity, that the Swiss astronomers used to spot the first exoplanet 30 years ago. Radial velocity is still widely used today, including by us, as it can not only find a planet but also measure its mass.

A planet orbiting a star exerts a gravitational pull which causes that star to wobble back and forth – meaning it will periodically change its velocity with respect to observers on Earth. With the radial velocity technique, we take repeated measurements of the velocity of a star, looking to find a stable periodic wobble that indicates the presence of a planet.

These velocity changes are, however, extremely small. To put it in perspective, the Earth makes the Sun change its velocity by a mere 9cm per second – slower than a tortoise. In order to find planets with the radial velocity technique, we thus need to measure these small velocity changes for stars that are many trillions of miles away from us.
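That 9cm-per-second figure can be reproduced from the standard radial-velocity amplitude formula, which is not given in the article. The sketch below assumes a circular, edge-on orbit and a planet far lighter than its star:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_EARTH = 5.972e24 # kg
YEAR_S = 3.156e7   # one year in seconds

def rv_semi_amplitude(m_planet: float, m_star: float, period_s: float) -> float:
    """Stellar wobble amplitude in m/s for a circular, edge-on orbit,
    assuming the planet's mass is negligible next to the star's."""
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet / m_star ** (2 / 3)

k_earth = rv_semi_amplitude(M_EARTH, M_SUN, YEAR_S)
print(f"Earth's pull on the Sun: {k_earth * 100:.1f} cm/s")  # ~9 cm/s
```

Running the numbers for the Earth-Sun pair recovers a wobble of roughly 9cm per second, matching the figure in the text.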

The state-of-the-art instruments we use are truly an engineering feat. The latest spectrographs, such as Harps-N and also Espresso, can accurately measure velocity shifts of the order of tens of centimetres per second – although still not sensitive enough to detect a true Earth twin.

But whereas this radial velocity technique is, for now, limited to ground-based observatories and can only observe one star at a time, the transit technique can be employed in space telescopes such as the French Corot (2006-14) and Nasa’s Kepler (2009-18) and Tess (2018-) missions. Between them, space telescopes have detected thousands of exoplanets in all their diversity, taking advantage of the fact we can measure stellar brightness more easily from space, and for many stars at the same time.

Despite the differences in detection success rate, both techniques continue to be developed. Applying both can give the radius and mass of a planet, opening up many more avenues for studying its composition.

To estimate possible compositions of our discovered exoplanets, we start by making the simplified assumption that small planets are, like Earth, made up of a heavy iron-rich core, a lighter rocky mantle, some surface water and a small atmosphere. Using our measurements of mass and radius, we can now model the different possible compositional layers and their respective thickness.
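As a much-simplified illustration of that idea, one can compare a planet's bulk density against rough reference densities for iron-rich, rocky and water-rich worlds. The reference values below are illustrative round numbers, not figures from the article, and real composition models are far more sophisticated:

```python
import math

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

# Rough reference densities in g/cm^3 (illustrative assumptions)
REFERENCES = {"iron-rich": 8.0, "rocky": 5.5, "water-rich": 2.5}

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Bulk density in g/cm^3 from a mass and radius given in Earth units."""
    mass_kg = mass_earths * EARTH_MASS_KG
    volume_m3 = (4 / 3) * math.pi * (radius_earths * EARTH_RADIUS_M) ** 3
    return mass_kg / volume_m3 / 1000  # kg/m^3 -> g/cm^3

def closest_composition(mass_earths: float, radius_earths: float):
    """Return the reference material whose density is nearest the planet's."""
    rho = bulk_density(mass_earths, radius_earths)
    label = min(REFERENCES, key=lambda name: abs(REFERENCES[name] - rho))
    return label, rho

print(closest_composition(1.0, 1.0))  # Earth itself comes out "rocky", ~5.5 g/cm^3
```

A real analysis models layered interiors and their uncertainties rather than a single density match, but the basic input is the same: a mass and a radius.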

This is still very much a work in progress, but the universe is spoiling us with a wide variety of different planets. We’ve seen evidence of rocky worlds being torn apart and strange planetary arrangements that hint at past collisions. Planets have been found across our galaxy, from Sweeps-11b in its central regions (at nearly 28,000 light years away, one of the most distant ever discovered) to those orbiting our nearest stellar neighbour, Proxima Centauri, which is “only” 4.2 light years away.

Illustration of Proxima b, one of the exoplanets orbiting the nearest star to our Sun, Proxima Centauri.
Catmando/Shutterstock

Searching for ‘another Earth’

In early July 2013, one of us (Christopher) was flying out to La Palma for my first “go” with the recently commissioned Harps-N spectrograph. Keen not to mess up, my laptop was awash with spreadsheets, charts, manuals, slides and other notes. Also included was a three-page document I had just been sent, entitled: Special Instructions for ToO (Target of Opportunity).

The first paragraph stated: “The Executive Board has decided that we should give highest priority to this object.” The object in question was a planetary candidate thought to be orbiting Kepler-78, a star a little cooler and smaller than our Sun, located about 125 light years away in the direction of the constellation Cygnus.

A few lines further down read: “July 4-8 run … Chris Watson” with a list of ten times to observe Kepler-78 – twice per night, each separated by a very specific four hours and 15 minutes. The name above mine was Didier Queloz’s (he hadn’t been awarded his Nobel prize yet, though).

This planetary candidate had been identified by the Kepler space telescope, which was tasked with searching a portion of the Milky Way to look for exoplanets as small as the Earth. In this case, it had identified a transiting planet candidate with an estimated radius of 1.16 (± 0.19) Earth radii – an exoplanet not that much larger than Earth had potentially been spotted.

I was in La Palma to attempt to measure its mass which, combined with the radius from Kepler, would allow the density and possible composition to be constrained. I wrote at the time: “Want 10% error on mass, to get a good enough bulk density to distinguish between Earth-like, iron-concentrated (Mercury), or water.”

In all, I took ten out of our team’s total of 81 exposures of Kepler-78 in an observing campaign lasting 97 days. During that time, we became aware of a US-led team who were also looking for this potential planet. In true scientific spirit, we agreed to submit our independent findings at the same time. On the specified date, like a prisoner swap, the two teams exchanged results – which agreed. We had, within the uncertainties of our data, reached the same conclusion about the planet’s mass.

Its most likely mass came out as 1.86 Earth masses. At the time, this made Kepler-78b the smallest extrasolar planet with an accurately measured mass. Its density was almost identical to Earth’s.
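As a back-of-envelope check: bulk density scales as mass over radius cubed, so the central values quoted above give:

```python
EARTH_DENSITY = 5.51  # g/cm^3, approximate

# Kepler-78b, using the mass and radius quoted in the text (Earth units)
mass_earths, radius_earths = 1.86, 1.16

# Density scales as M / R^3 when both are expressed relative to Earth
density = EARTH_DENSITY * mass_earths / radius_earths ** 3
print(f"Kepler-78b: ~{density:.1f} g/cm^3 vs Earth's {EARTH_DENSITY} g/cm^3")
```

The central values land a little above Earth's density, but the ±0.19 Earth-radius uncertainty quoted earlier easily spans an Earth-like value, which is why the density was reported as near-identical.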

But that is where the similarities to our planet ended. Kepler-78b has a “year” that lasts only 8.5 hours, which is why I had been instructed to observe it every 4hr 15min – when the planet was at opposite sides of its orbit, and the induced “wobble” of the star would be at its greatest. We measured the star wobbling back and forth at about two metres per second – no more than a slow jog.

Kepler-78b’s short orbit meant its extreme temperature would cause all rock on the planet to melt. It may have been the most Earth-like planet found at the time in terms of its size and density, but otherwise, this hellish lava world was at the very extremes of our known planetary population.

Illustration of the Kepler-78b ‘lava world’ – similar in size and density to Earth.
simoleonh/Shutterstock

In 2016, the Kepler space telescope made another landmark discovery: a system with at least five transiting planets around a Sun-like star, HIP 41378, in the Cancer constellation. What made it particularly exciting was the location of these planets. Where most transiting planets we have spotted are closer to their star than Mercury is to the Sun (due to our detection capabilities), this system has at least three planets beyond the orbital radius of Venus.

We decided to use our Harps-N spectrograph to measure the masses of all five transiting planets, but after more than a year of observing it became clear that one instrument would not be enough to analyse this challenging mix of signals. Other international teams came to the same conclusion and, rather than compete, we came together in a global collaboration that holds strong to this day, with hundreds of radial velocities gathered over many years.

We now have firm masses and radii for most of the planets in the system. But studying them is a game of patience. With planets much further away from their host star, it takes much longer before there is a new transit event or the periodic wobble can be fully observed. We thus need to wait multiple years and gather lots of data to gain insight into this system.

The rewards are obvious, though. This is the first system that starts resembling our Solar System. While the planets are a bit larger and more massive than our rocky planets, their distances are very similar – helping us to understand how planetary systems form in the universe.

The holy grail for exoplanet explorers

After three decades of observing, a wealth of different planets have emerged. We started with the hot Jupiters, large gas giants close to their star that are among the easiest planets to find due to both deeper transits and larger radial velocity signals. But while the first few dozen exoplanets discovered were all hot Jupiters, we now know these planets are actually very rare.

With instrumentation getting better and observations piling up, we have since found a whole new class of planets with sizes and masses between those of Earth and Neptune. But despite our knowledge of thousands of exoplanets, we still have not found systems truly resembling our solar system, nor planets truly resembling Earth.

It is tempting to conclude this means we are a unique planet in a unique system. While this still could be true, it is unlikely. The more reasonable explanation is that, for all our stellar technology, our capabilities of detecting such Earth-like planets are still fairly limited in a universe so mind-bogglingly vast.

The holy grail for many exoplanet explorers, including us, remains to find this true Earth twin – a planet with a mass and radius similar to Earth’s, orbiting a Sun-like star at a distance similar to Earth’s distance from the Sun.

While the universe is rich in diversity and holds many planets unlike our own, discovering a true Earth twin would be the best place to start looking for life as we know it. Currently, the radial velocity method – as used to find the very first exoplanet – remains by far the best-placed method to find it.

Thirty years on from that Nobel-winning discovery, pioneering planetary explorer Didier Queloz is taking charge of the very first dedicated radial velocity campaign to go in search of an Earth-like planet.

A major international collaboration is building a dedicated instrument, Harps3, to be installed later this year at the Isaac Newton Telescope on La Palma. Given its capabilities, we believe a decade of data should be enough to finally discover our first Earth twin.

Unless we are unique after all.



The Conversation

Christopher Watson receives funding from the Science and Technology Facilities Council (STFC).

Annelies Mortier receives funding from the Science and Technology Facilities Council (STFC) and UK Research and Innovation (UKRI).

ref. Our quest to find a truly Earth-like planet in deep space – https://theconversation.com/our-quest-to-find-a-truly-earth-like-planet-in-deep-space-266550

Toronto Blue Jays: Amid Canada-U.S. tensions, ‘Canada’s team’ takes a run at America’s pastime

Source: The Conversation – Canada – By Noah Eliot Vanderhoeven, PhD Candidate, Political Science, Western University

Amid threats from United States President Donald Trump to make Canada the 51st state, the Toronto Blue Jays’ season started with protocols aimed at avoiding booing during the American national anthem and the removal of someone wearing a “Canada is not for sale” hat at the ballpark.

Nonetheless, the Blue Jays are still being heavily marketed as “Canada’s team” as they square off against the New York Yankees, America’s most storied baseball team.

Why do the Blue Jays frame themselves as not just Toronto’s team, but Canada’s? And is their current post-season run their biggest and most important opportunity in years to fully establish themselves as representing all of Canada?

Truly Canada’s team?

The Jays serving as Canada’s team may make sense since they’re the only Canadian team currently playing in Major League Baseball (MLB). But to some Canadians, positioning the Jays as the nation’s team may not sit well.

After all, for baseball fans in Québec, memories of the now-defunct Montreal Expos still loom large.

For fans closer to the Windsor-Detroit border, the Detroit Tigers are a more proximate and accessible team.

Finally, some British Columbia MLB enthusiasts — despite the trips Blue Jays fans make to take over T-Mobile Park when the Blue Jays play the Seattle Mariners — still opt to support the Mariners since the team is so much closer than the Blue Jays are in Toronto.

What all this means is that to some Canadian baseball fans, the Blue Jays aren’t really Canada’s team — they’re just Toronto’s.

Huge market

It’s unsurprising that the Toronto Blue Jays organization, owned by Rogers Communications — “proud owner of Canada’s team” — is intent on framing the squad this way because it provides a substantial financial boon. The Jays benefit greatly from being Canada’s team by compelling baseball fans from across the country to attend their games, and most importantly, to watch them on television.

Despite playing north of the border and earning revenues in the weaker Canadian dollar, the Jays operate in one of MLB’s largest markets — Toronto — and can also market to fans across the country. That gives them the largest geographical market in professional baseball — an entire nation.

This massive audience contributes to equally massive television ratings, even at a time when most MLB teams are struggling for regional television revenues. Being “Canada’s team” has also allowed the Blue Jays to spend competitively over the past 10 years and operate a top-five payroll, as they have in 2025, alongside other teams in huge markets like Los Angeles and New York.

Cross-border trash-talking

As the series with the Yankees continued, Prime Minister Mark Carney met with Trump to discuss trade, tariffs and security. The meeting, held just days after Trump made yet another veiled annexation threat, reportedly went well.

But the ongoing backdrop of tense relations between the U.S. and Canada is perhaps echoed by some of the commentary about both teams.

Early in the season, the Yankees’ play-by-play man, Michael Kay, called Toronto “not a first-place team” despite the Blue Jays having just passed the Yankees for first place in the American League East.

In September, Jays colour-commentator and former catcher, Buck Martinez, said that the Yankees were “not a good team.”

Also in September, a Baltimore Orioles television analyst, Brian Roberts, questioned how well Canadians understood baseball, leading to the Blue Jays themselves defending the baseball intelligence of their fans.

There was even a popular hoax online about Trump not inviting the Blue Jays to the White House should they win the World Series — an invite he’s extended to many championship teams in American sports leagues.

Stoking Canadian nationalism

Ultimately, the Blue Jays ended up winning the American League East, guaranteeing the Jays a home-field advantage against the Yankees. Blue Jays players and their manager, John Schneider, have spoken of the intense atmosphere Blue Jays fans create for their opponents and how the team draws on the support of the entire nation of Canada.

The Jays’ success so far in the post-season in this current political moment — as Trump is once again making veiled threats about making Canada the 51st state during tense trade negotiations — presents the Blue Jays with perhaps their best opportunity to fulfil their role as Canada’s team.

In a season defined by rivalry, politics and national pride, the Blue Jays are proving that even America’s pastime can become a canvas for Canadian nationalism.

The Conversation

Noah Eliot Vanderhoeven does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Toronto Blue Jays: Amid Canada-U.S. tensions, ‘Canada’s team’ takes a run at America’s pastime – https://theconversation.com/toronto-blue-jays-amid-canada-u-s-tensions-canadas-team-takes-a-run-at-americas-pastime-266882

Geothermal energy has huge potential to generate clean power – including from used oil and gas wells

Source: The Conversation – USA (2) – By Moones Alamooti, Assistant Professor of Energy and Petroleum Engineering, University of North Dakota

The world’s largest geothermal power station is under construction in Utah. Business Wire via AP

As energy use rises and the planet warms, you might have dreamed of an energy source that works 24/7, rain or shine, quietly powering homes, industries and even entire cities without the ups and downs of solar or wind – and with little contribution to climate change.

The promise of new engineering techniques for geothermal energy – heat from the Earth itself – has attracted rising levels of investment to this reliable, low-emission power source that can provide continuous electricity almost anywhere on the planet. That includes ways to harness geothermal energy from idle or abandoned oil and gas wells. In the first quarter of 2025, North American geothermal installations attracted US$1.7 billion in public funding – compared with $2 billion for all of 2024, which itself was a significant increase from previous years, according to an industry analysis from consulting firm Wood Mackenzie.

As an exploration geophysicist and energy engineer, I’ve studied geothermal systems’ resource potential and operational trade-offs firsthand. From the investment and technological advances I’m seeing, I believe geothermal energy is poised to become a significant contributor to the energy mix in the U.S. and around the world, especially when integrated with other renewable sources.

A May 2025 assessment by the U.S. Geological Survey found that geothermal sources just in the Great Basin, a region that encompasses Nevada and parts of neighboring states, have the potential to meet as much as 10% of the electricity demand of the whole nation – and even more as technology to harness geothermal energy advances. And the International Energy Agency estimates that by 2050, geothermal energy could provide as much as 15% of the world’s electricity needs.

For generations, Māori people in New Zealand and others around the world have made use of the Earth’s heat at hot springs like this one, where food is cooked in the hot water.
Wolfgang Kaehler/LightRocket via Getty Images

Why geothermal energy is unique

Geothermal energy taps into heat beneath the Earth’s surface to generate electricity or provide direct heating. Unlike solar or wind, it never stops. It runs around the clock, providing consistent, reliable power with closed-loop water systems and few emissions.

Geothermal is capable of providing significant quantities of energy. For instance, Fervo Energy’s Cape Station project in Utah is reportedly on track to deliver 100 megawatts of baseload, carbon-free geothermal power by 2026. That’s less than the amount of power generated by the average coal plant in the U.S., but more than the average natural gas plant produces.

But the project, estimated to cost $1.1 billion, is not yet complete. When finished in 2028, the station is projected to deliver 500 megawatts of electricity – 100 megawatts more than its original goal, achieved without additional drilling thanks to various technical improvements since the project broke ground.

And geothermal energy is becoming economically competitive. By 2035, according to the International Energy Agency, technical advances could mean energy from enhanced geothermal systems could cost as little as $50 per megawatt-hour, a price competitive with other renewable sources.

Types of geothermal energy

There are several ways to get energy from deep within the Earth.

Hydrothermal systems tap into underground hot water and steam to generate electricity. These resources are concentrated in geologically active areas where heat, water and permeable rock naturally coincide. In the U.S., that’s generally California, Nevada and Utah. Internationally, most hydrothermal energy is in Iceland and the Philippines.

Some hydrothermal facilities, such as Larderello in Italy, have operated for over a century, proving the technology’s long-term viability. Others in New Zealand and the U.S. have been running since the late 1950s and early 1960s.

A drilling rig sits outside a home in White Plains, N.Y., where a geothermal heat pump is being installed.
AP Photo/Julia Nikhinson

Enhanced geothermal systems effectively create electricity-generating hydrothermal processes just about anywhere on the planet. In places where there is not enough water in the ground or where the rock is too dense to move heat naturally, these installations drill deep holes and inject fluid into the hot rocks, creating new fractures and opening existing ones, much like hydraulic fracturing for oil and gas production.

A system like this uses more than one well. Cold water is pumped down one well, collects heat from the rocks, and is then pumped back up through another well to the surface, where the heat drives turbines. In recent years, academic and corporate research has dramatically improved drilling speed and lowered costs.

Ground source heat pumps do not require drilling holes as deep, but instead take advantage of the fact that the Earth’s temperature is relatively stable just below the surface, even just 6 or 8 feet down (1.8 to 2.4 meters) – and it’s hotter hundreds of feet lower.

These systems don’t generate electricity but rather circulate fluid in underground pipes, exchanging heat with the soil, extracting warmth from the ground in winter and transferring warmth to the ground in summer. These systems are similar to, but more efficient than, air-source heat pumps, sometimes called minisplits, which are becoming widespread across the U.S. for heating and cooling. Geothermal heat pump systems can serve individual homes, commercial buildings and even neighborhood or business developments.
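The efficiency advantage follows from basic thermodynamics: a heat pump's ideal coefficient of performance (COP) grows as the gap between the source and indoor temperatures shrinks, and the ground in winter is far warmer than the air. The temperatures below are illustrative assumptions, and real systems achieve only a fraction of these Carnot limits:

```python
def carnot_cop_heating(t_indoor_c: float, t_source_c: float) -> float:
    """Ideal (Carnot) heating COP; temperatures are converted to kelvin."""
    t_indoor_k = t_indoor_c + 273.15
    t_source_k = t_source_c + 273.15
    return t_indoor_k / (t_indoor_k - t_source_k)

# Winter day: ground a few metres down stays ~10 C while outdoor air is -5 C
ground = carnot_cop_heating(20, 10)  # ~29
air = carnot_cop_heating(20, -5)     # ~12
print(f"Ground-source Carnot limit: {ground:.0f}, air-source: {air:.0f}")
```

The ground source's smaller temperature lift more than doubles the theoretical ceiling in this example, which is why stable below-ground temperatures matter.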

Direct-use applications also don’t generate electricity but rather use the geothermal heat directly. Farmers heat greenhouses and dry crops; aquaculture facilities maintain optimal water temperatures; industrial operations use the heat for energy-intensive processes such as dehydrating food and curing concrete. Worldwide, these applications now deliver over 100,000 megawatts of thermal capacity. Some geothermal fluids contain valuable minerals; lithium concentrations in the groundwater of California’s Salton Sea region could potentially supply battery manufacturers. Federal judges are reviewing a proposal to do just that, as well as legal challenges to it.

Researchers are finding new ways to use geothermal resources, too. Some are using underground rock formations to store energy as heat when consumer demand is low and use it to produce electricity when demand rises.

Some geothermal power stations can adjust their output to meet demand, rather than running continuously at maximum capacity.

Geothermal sources are also making other renewable-energy projects more effective. Pairing geothermal energy with solar and wind resources and battery storage are increasing the reliability of above-ground renewable power in Texas, among other places.

And geothermal energy can power clean hydrogen production as well as energy-intensive efforts to physically remove carbon dioxide from the atmosphere, as is happening in Iceland.

Enhanced geothermal systems can be built almost anywhere and can take advantage of existing wells to save the time and money of drilling new holes deep into the ground.
U.S. Geological Survey

Geothermal potential in the US and worldwide

Currently, the U.S. has about 3.9 gigawatts of installed geothermal capacity, mostly in the West. That’s about 0.4% of current U.S. energy production, but the amount of available energy is much larger, according to federal and international engineering assessments.

And converting abandoned oil and gas wells for enhanced geothermal systems could significantly increase the amount of energy available and its geographic spread.

One example is happening in Beaver County, in the southwestern part of Utah. Once a struggling rural community, it now hosts multiple geothermal plants that are being developed to both demonstrate the potential and to supply electricity to customers as far away as California.

Those projects include repurposing idle oil or gas wells, which is relatively straightforward: Engineers identify wells that reach deep, hot rock formations and circulate water or another fluid in a closed loop to capture heat to generate electricity or provide direct heating. This method does not require drilling new wells, which significantly reduces setup costs and environmental disruption and accelerates deployment.

There are as many as 4 million abandoned oil and gas wells across the U.S., some of which could shift from being fossil fuel infrastructure into opportunities for clean energy.

Challenges and trade-offs

Geothermal energy is not without technical, environmental and economic hurdles.

Drilling is expensive, and conventional systems need specific geological conditions. Enhanced systems, using hydraulic fracturing, risk causing earthquakes.

Overall emissions are low from geothermal systems, though the systems can release hydrogen sulfide, a corrosive gas that is toxic to humans and can contribute to respiratory irritation. But modern geothermal plants use abatement systems that can capture up to 99.9% of hydrogen sulfide before it enters the atmosphere.

And the systems do use water, though closed-loop systems can minimize consumption.

Building geothermal power stations does require significant investment, but its ability to deliver energy over the long term can offset many of these costs. Projects like those undertaken by Fervo Energy show that government subsidies are no longer necessary for a project to get funded, built and begin generating energy.

Despite its challenges, geothermal energy’s reliability, low emissions and scalability make it a vital complement to solar and wind – and a cornerstone of a stable, low-carbon energy future.

The Conversation

Moones Alamooti does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Geothermal energy has huge potential to generate clean power – including from used oil and gas wells – https://theconversation.com/geothermal-energy-has-huge-potential-to-generate-clean-power-including-from-used-oil-and-gas-wells-266555

Seasonal allergies may increase suicide risk – new research

Source: The Conversation – USA (3) – By Shooshan Danagoulian, Associate Professor of Economics, Wayne State University

The study found that deaths by suicide rose by up to 7.4% on high-pollen days. Grace Cary/Moment via Getty Images

Seasonal allergies – triggered by pollen – appear to make deaths by suicide more likely. Our findings, published in the Journal of Health Economics, show that minor physical health conditions like mild seasonal allergies, previously thought not to be an immediate trigger of suicide, are indeed a risk factor.

To evaluate the link between seasonal allergies and suicide, my co-authors and I combined daily pollen measurements with daily suicide counts across 34 U.S. metropolitan areas.

Because both pollen and suicide are sensitive to weather conditions, we carefully accounted for temperature, rainfall and wind. We also controlled for differences in local climate and plant life, since pollen levels vary by region, and for seasonal averages that might otherwise obscure results. This allowed us to compare suicide counts on days with unexpectedly high pollen to days with little or none in the same county.

The results were striking. Relative to days with no or low levels of pollen, we found that deaths by suicide rose by 5.5% when pollen levels were moderate and 7.4% when they were high. The increase was even larger among people with a known history of mental health conditions or treatment. We also showed that on high-pollen days, residents of affected areas experience more depressive symptoms and exhaustion.

Our analysis suggests that allergies exacerbate existing vulnerabilities, pushing some people toward crisis. We suspect that sleep disruption is the link between allergies and suicide rates.

Why it matters

More than 80 million Americans experience seasonal allergies each year.

Symptoms include sneezing, congestion, itchy eyes and scratchy throat. Most people experiencing these symptoms feel sluggish during the day and sleep poorly at night. Allergy sufferers might not realize, however, that these symptoms reduce alertness and cognitive functioning – some of the factors that can worsen mental health and increase vulnerability to suicidal thoughts and behaviors.

Suicide rates have been growing steadily in the past two decades, by 37% between 2000 and 2018. According to the Centers for Disease Control and Prevention, more than 49,000 Americans died by suicide in 2022, and over 616,000 visited emergency departments for self-harm injuries.

Although socioeconomic and demographic factors are the most important predictors of suicide, much less is known about its short-term triggers. Our study adds to growing evidence that the environment – including something as natural as pollen – can influence mental health risks.

This issue is likely to become more urgent as the climate changes. Rising temperatures lengthen pollen seasons and increase pollen volume. Over the past two decades, pollen seasons have grown in both intensity and duration, and projections suggest they will continue to worsen.

That means more people will experience stronger allergy symptoms, with ripple effects not only for physical health but also for sleep, mood and mental well-being.

Higher temperatures from climate change contribute to more pollen in the air for longer periods of time during pollen seasons.
Christoph Burgstedt/iStock via Getty Images Plus

What we still don’t know

Despite the scale of the problem, there are no national systems in the U.S. to consistently measure and communicate pollen levels. Most communities lack reliable forecasts and alert systems that would allow vulnerable people to take precautions. This gap limits both prevention and research.

Our study focused on metropolitan areas where pollen and death counts were available, but we cannot yet generalize to rural areas. That is a concern because rural communities often face greater shortages in mental health care and pharmacy access – and have seen rising suicide rates over the past decade.

What’s next

For people who are already receiving mental health care, recognizing and treating seasonal allergies is a key part of self-care.

Over-the-counter medications can be highly effective at reducing symptoms.

More broadly, people should be aware that during peak allergy season, reduced alertness, sleep disruptions and mood fluctuations may place an increased burden on their mental health, in addition to the allergy symptoms.

In terms of policy, improving pollen monitoring and public communication could help people anticipate high-risk days. Such infrastructure would also support further research, particularly in rural areas where data is currently lacking. Our next step, supported by the American Foundation for Suicide Prevention, is to examine the impact of pollen on rural communities.

The Research Brief is a short take on interesting academic work.

The Conversation

Shooshan Danagoulian receives funding from the American Foundation for Suicide Prevention.

ref. Seasonal allergies may increase suicide risk – new research – https://theconversation.com/seasonal-allergies-may-increase-suicide-risk-new-research-266459

First evidence in the UK of breeding aegypti mosquito – the main spreader of dengue, chikungunya and Zika

Source: The Conversation – UK – By Marcus Blagrove, Senior Lecturer in Integrative Virology, University of Liverpool


Scientists have found eggs of the Aedes aegypti mosquito in the UK for the first time – a mosquito that spreads many tropical diseases.

The eggs were recently discovered in a trap at a freight depot near Heathrow airport and confirmed by DNA testing to be Ae. aegypti. The report, led by the UK Health Security Agency, also included further findings of Aedes albopictus, the “Asian tiger” mosquito, at a site in Kent in summer 2024. Both species are invasive and thrive in warm, humid conditions.

These Aedes mosquitoes matter because they can spread viruses such as dengue, chikungunya and Zika. Outbreaks of these diseases, once confined to the tropics, are now appearing in Europe.

In 2024, Italy recorded over 200 locally acquired dengue cases, mainly in the Marche region, while France and Spain also reported domestic dengue transmission. Chikungunya has become another European concern, with France reporting nearly 500 locally transmitted cases in 2025. Zika has not yet taken hold in Europe, but the same mosquito species could carry it if conditions allow.

Two related viruses, West Nile and Usutu, are also spreading further north across Europe. West Nile virus has caused outbreaks in birds, horses and people across Europe, and has now been detected in the UK for the first time.

In summer 2023, scientists found West Nile virus genetic material in wild mosquitoes from samples collected in Nottinghamshire. Usutu, which mainly infects birds, was first detected in London blackbirds in 2020 and has been found in birds or mosquitoes every year since, and is now considered endemic in the UK.

Both viruses belong to the same family as Japanese encephalitis, and although they primarily circulate in birds and mosquitoes, they can also incidentally infect humans. They also tend to move together. Usutu often establishes first, with West Nile following as temperatures rise.

The UK Health Security Agency notes that West Nile’s range has recently expanded “to more northerly and western areas of Europe”. Together, these findings show how climate change is shifting mosquito-borne diseases northwards.

Laboratory studies have confirmed that native British mosquitoes could transmit these viruses under local UK-climate conditions. Research has shown several species can become infected and even pass the virus on at typical summer temperatures.

For instance, common native Culex mosquitoes from England were found capable of transmitting Usutu in their saliva at just 19°C. In the same study, Culex pipiens and Culiseta annulata were able to transmit the UK Usutu strain, suggesting the virus could spread northwards.

Another experiment found that the salt-marsh mosquito Ochlerotatus (Aedes) detritus can transmit West Nile at 21°C, but not dengue or chikungunya. Combined, these results demonstrate that native UK mosquitoes are able to carry and transmit viruses like West Nile and Usutu if the right climate conditions occur.

Eggs of the yellow fever mosquito, Aedes aegypti, were found at Heathrow.

More welcoming

The pattern is clear: climate change and global travel are together loading the dice. Warmer summers, milder winters and heavier rainfall are making the UK more welcoming to these insects.

Climate models already predict that Ae. albopictus could become established in southern England within the next few decades. At the same time, more people and goods are travelling between the UK and regions where these diseases are endemic, bringing both mosquitoes and infections with them.

The UK Health Security Agency recorded hundreds of imported dengue and chikungunya cases last year – each one a potential spark if the right mosquitoes are present.

The Animal and Plant Health Agency, a UK government agency, warns that this northward jump of mosquito-borne diseases is “primarily driven by movement of people and global climate change”.

In plain terms, the UK is warming into range for these tropical “vectors” and the viruses they carry. Already, Ae. albopictus breeds widely across continental Europe, while local dengue and chikungunya outbreaks are appearing further north each year. West Nile and Usutu are following a similar path.

The UK’s surveillance network, coordinated by the Health Security Agency with universities and local authorities, is already monitoring sites most at risk of mosquito introductions. This coordinated approach is designed to catch incursions early and keep Britain ahead of a rapidly shifting global disease map.

The combination of a changing climate, international travel and the ability of these insects to thrive means both invasive mosquito species and the viruses they carry are edging closer to establishing in the UK.

Continuing surveillance and early detection will be crucial to catch incursions before they spread. As Britain’s summers grow warmer and wetter, insects and diseases once confined to the tropics are finding a new home – even in today’s not-so-chilly UK.

The Conversation

Marcus Blagrove currently receives research funding from UKRI (cross council), BBSRC, MRC, NERC, DEFRA, The Leverhulme Trust, and The Pandemic Institute.

ref. First evidence in the UK of breeding aegypti mosquito – the main spreader of dengue, chikungunya and Zika – https://theconversation.com/first-evidence-in-the-uk-of-breeding-aegypti-mosquito-the-main-spreader-of-dengue-chikungunya-and-zika-266767

Why higher ed’s AI rush could put corporate interests over public service and independence

Source: The Conversation – USA (2) – By Chris Wegemer, Postdoctoral researcher, University of California, Los Angeles

A new AI research center opening in North Carolina: colleges and universities are embracing AI technology, often through corporate partnerships.

Artificial intelligence technology has begun to transform higher education, raising a new set of profound questions about the role of universities in society. A string of high-profile corporate partnerships reflects how universities are embracing AI technology.

The University of Florida began assembling one of the fastest university supercomputers through a collaboration with Nvidia encompassing AI infrastructure, research support and curriculum development. Princeton launched the New Jersey AI Hub with Microsoft, CoreWeave and the state government, which will house AI startups on university-owned land under a Princeton director. Meanwhile, the California State University system partnered with OpenAI to provide ChatGPT Edu to all students and faculty, branding itself as “the first AI-powered university system in the United States.”

As a social scientist who studies educational technology and organizational partnerships, I view these collaborations as part of a decades-long shift toward the “corporatization” of higher education – where universities have become increasingly market-driven, aligning their priorities, culture and governance structures with industry partners.

I see the rise of generative AI as accelerating this trend, which risks undermining higher education’s autonomy and public service mission. Examining the underlying organizational forces that shape the future of higher education can shed light on how AI challenges universities’ traditional principles – and how they might resist corporate influence.

The rise of corporate partnerships

Over the past 50 years, private sector support for university research has increased tenfold, outpacing overall growth in higher education research spending. A pivotal shift came in 1980, when universities gained the right to retain intellectual property from federally funded research. This made commercialization of university research far easier. Over time, corporate involvement pushed university research toward commercial needs and increasingly exposed universities to the profit motive.

But partnerships haven’t just brought in money for universities; they’ve reinforced a shift toward closer alignment with industry. Universities expanded dramatically in the second half of the 20th century to meet companies’ demand for skilled labor, further coupling higher education to market incentives.

After decades of growth, however, university enrollment peaked in 2010, partly due to demographics, and the decline is projected to continue. Meanwhile, competition from training programs offered by tech companies has been growing, and federal funding has been slashed under President Donald Trump.

As colleges continue to close at record rates, the imperative to attract tuition dollars and research grants increasingly dictates institutional priorities. I argue that universities risk sidelining research that serves the public interest by looking toward corporate funding and partnerships to fill the gaps.

In my view, the shift away from public-good scholarship to monetizable content and services shaped by external industry partners jeopardizes the academic freedom and intellectual stewardship that once anchored the mission of higher education. For example, under financial constraints, university administrators may be inclined to overlook glaring value misalignments between their public mission and the commercial objectives of AI firms.

The forces driving universities’ AI initiatives

At many universities, AI adoption and the turn toward corporate collaborations are driven by more than economic vulnerabilities. The broad range of partnerships with AI companies across higher education can provide insight into the deeper dynamics at work.

Differences in AI partnerships are emerging around long-standing divides between types of institutions. Stanford’s Institute for Human-Centered AI can be interpreted as an attempt to steer global discourse on ethical AI while preserving human-led research as a marker of elite prestige. Meanwhile, AI initiatives at institutions with a strong focus on teaching and accessibility, such as California State University and Arizona State University, appear to prioritize efficiency in learning outcomes and workforce development.

The California State University system aims to become the first and largest ‘AI-powered university system,’ motivated in part by the goal of preparing students for careers in an AI-driven economy.

This underscores how AI partnerships are not guided by market incentives alone. Before universities grew into multibillion-dollar businesses, their decision-making was primarily driven by markers of intellectual prestige, such as scholarly excellence and faculty reputation. Universities largely held a monopoly over knowledge production and served as the primary gatekeepers of intellectual legitimacy, until the digital revolution dramatically decentralized access to knowledge and its production. Universities now coexist in – and increasingly compete with – a crowded, complex ecosystem of companies and organizations that produce original research.

Generative AI represents a powerful new mode of knowledge production and synthesis, which further threatens to upend traditional forms of scholarship. Confronted with challenges to their authority, universities may attempt to preserve their elite intellectual status by rushing into partnerships with AI companies eager to capture the higher education market.

My interpretation is that economic pressures and the pursuit of prestige may be converging to reinforce a technocratic approach to higher education, where university decision-making is primarily guided by performance metrics and corporate-style governance rather than the public interest.

A purposeful path forward

The evolution of higher education in response to AI has brought long-standing debates about the purpose of universities to the forefront of public discourse. Decades of corporatization have helped fuel widespread “mission sprawl” and conflicting institutional goals across higher education. Consistent with organizational theory, ambiguity about universities’ role in society could lead many institutions to become increasingly susceptible to corporate co-optation, political interference and eventual collapse.

Although partnerships between universities and corporations can advance research and support students, corporate norms and academic principles are inherently distinct. And at many universities the process through which differences in institutional values are surfaced and reconciled is unclear, especially as AI initiatives have often sidestepped democratic faculty governance.

The recent surge in AI partnerships puts in plain view the growing dominance of market forces in higher education. As universities continue to adopt AI technologies, the consequences for intellectual freedom, democratic decision-making and commitment to the public good will become an increasingly pressing question.

Research support was provided by undergraduate research assistant Mehra Marzbani, whose contributions are gratefully acknowledged.

The Conversation

Chris Wegemer is affiliated with UCLA.

ref. Why higher ed’s AI rush could put corporate interests over public service and independence – https://theconversation.com/why-higher-eds-ai-rush-could-put-corporate-interests-over-public-service-and-independence-260902

Winning a bidding war isn’t always a win, research on 14 million home sales shows

Source: The Conversation – USA (2) – By Soon Hyeok Choi, Assistant Professor of Real Estate Finance, Rochester Institute of Technology

In today’s hot housing market, winning a bidding war can feel like a triumph. But my research shows it often comes with a catch: Homebuyers who win bidding wars tend to experience a “winner’s curse,” systematically overpaying for their new homes.

I’m a real estate economist, and my colleagues and I analyzed nearly 14 million home sales in 30 U.S. states over roughly two decades. We found that people who paid more than the asking price for their homes – a reliable sign of a bidding war – were more likely to default on their mortgages and saw significantly weaker returns.

How much weaker? On average, homebuyers who won bidding wars saw annual returns that were about 1.3 percentage points lower than those who didn’t, we found. We specifically looked at “unlevered” returns – basically, the returns you’d get if you bought the home outright with cash, without factoring in a mortgage.

Since the typical homeowner in our sample held a property for 6.3 years before selling it, this translates to about an 8.2% overpayment. Bidding-war winners were also 1.9 percentage points likelier to default.
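The 8.2% figure follows from the two numbers above. A quick back-of-envelope check – assuming, as the simple arithmetic suggests, that the total is the annual shortfall multiplied by the holding period rather than compounded:

```python
# Back-of-envelope check of the overpayment figure.
# Assumption (not stated in the article): 8.2% is the simple,
# non-compounded product of annual shortfall and holding period.
annual_shortfall = 0.013  # 1.3 percentage points lower annual return
holding_years = 6.3       # typical holding period in the sample

simple_total = annual_shortfall * holding_years
print(f"{simple_total:.1%}")  # → 8.2%

# Compounding the shortfall year over year gives a slightly larger
# number (~8.5%), so the article's 8.2% matches the simple version.
compounded_total = (1 + annual_shortfall) ** holding_years - 1
print(f"{compounded_total:.1%}")
```

Either way, the rough magnitude – a bit over 8% of the purchase price – is the same.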

Perhaps that loss would be worth it to someone who absolutely loves the property – but we found that homebuyers who purchase after a bidding war are also faster to resell. This suggests their overpayment is based less on enduring affection and more on bidding-war fever.

We also found that the effects of the winner’s curse – lower home appreciation and higher default rates – are stronger in places where bidding wars are more common. One example is my hometown of Rochester, New York, which has become a bidding-war hot spot in recent years.

Who bears the brunt? Lower-income, Black and Hispanic buyers are more likely to overpay in bidding wars, we found, making them more likely to suffer from the winner’s curse. This suggests that hot housing markets can worsen inequality.

Why it matters

While housing is the largest single form of wealth Americans own, past research on the winner’s curse mostly dealt with land auctions and company mergers – not the nation’s roughly 76 million owner-occupied, single-family homes. Our work is the first to show direct evidence of the winner’s curse in residential housing markets.

This matters now because the housing market is cooling. Those who bought in the post-pandemic housing market and listed their homes in 2025 are already facing the risk of selling at a loss. Because this risk falls disproportionately on Black and Hispanic homebuyers, it could further widen the wealth gap.

By one measure, foreclosures are up 18% year over year. If the brunt of these losses falls on lower-income or otherwise vulnerable homeowners, the result could be an increase in housing insecurity and homelessness.

The good news is that the winner’s curse may be preventable. Better resources to prepare first-time homebuyers and comprehensive financial education related to mortgages and debt could help.

What still isn’t known

It’s possible more transparent bidding processes – or even formal auction systems for popular homes – could better inform prospective buyers and help them stave off the temptation of overpayment. Should the U.S. require real estate brokers or banks to caution their clients to think twice before going above the asking price? Or would that be unfair to sellers? Experimental research on these points would be useful.

Finally, our research focuses on the U.S. housing market. Whether the winner’s curse afflicts buyers in other countries remains an open question.

The Research Brief is a short take on interesting academic work.

The Conversation

Soon Hyeok Choi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Winning a bidding war isn’t always a win, research on 14 million home sales shows – https://theconversation.com/winning-a-bidding-war-isnt-always-a-win-research-on-14-million-home-sales-shows-266723